Dataset columns (type, with observed length range, value cardinality, or numeric range):

id: string (length 10)
title: string (length 3 to 179)
track: string (1 distinct value)
status: string (3 distinct values)
keywords: string (length 2 to 2.39k)
primary_area: string (21 distinct values)
author: string (501 distinct values)
authorids: string (501 distinct values)
aff: string (1 distinct value)
aff_domain: string (1 distinct value)
position: string (1 distinct value)
rating: string (355 distinct values)
confidence: string (length 0 to 19)
soundness: string (642 distinct values)
contribution: string (596 distinct values)
presentation: string (782 distinct values)
rating_avg: float64 (0 to 9)
confidence_avg: float64 (0 to 5)
soundness_avg: float64 (0 to 4)
contribution_avg: float64 (0 to 4)
presentation_avg: float64 (0 to 4)
corr_rating_confidence: float64 (-1 to 1)
project: string (1 distinct value)
github: string (1 distinct value)
Review: list (length 2 to 10)
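The five `*_avg` columns and `corr_rating_confidence` appear to be derived from the semicolon-separated per-review score strings. The sketch below is an assumption about that derivation, not the dataset's actual pipeline code; it reproduces all three derived values of the first record, and the zero-variance fallback to 0 is inferred from the third record, where every confidence score is 3 yet the stored correlation is 0.

```python
import math
import statistics

def parse_scores(s: str) -> list[float]:
    """Parse a semicolon-separated score string such as '5;6;6;6'."""
    return [float(x) for x in s.split(";") if x]

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation of two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0  # assumed fallback when one score list is constant
    return cov / math.sqrt(var_x * var_y)

# Worked check against the first record below:
ratings = parse_scores("5;6;6;6")
confidences = parse_scores("3;2;2;2")
print(statistics.mean(ratings))       # 5.75 -> rating_avg
print(statistics.mean(confidences))   # 2.25 -> confidence_avg
print(pearson(ratings, confidences))  # -1.0 -> corr_rating_confidence
```

The -1.0 for the first record is exact: each review's rating equals 8 minus its confidence, so the two score lists are perfectly anti-correlated.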
id: 2cF3f9t31y
title: SELECTFORMER: PRIVATE AND PRACTICAL DATA SELECTION FOR TRANSFORMERS
track: main
status: Active
keywords: Secure Multiparty Computation;Machine Learning;Efficiency;Transformer model
primary_area: infrastructure, software libraries, hardware, systems, etc.
rating: 5;6;6;6
confidence: 3;2;2;2
soundness: 2;3;4;2
contribution: 3;3;3;3
presentation: 2;4;3;2
rating_avg: 5.75
confidence_avg: 2.25
soundness_avg: 2.75
contribution_avg: 3
presentation_avg: 2.75
corr_rating_confidence: -1
Review (raw JSON: 4 reviews plus the submission metadata):
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Can you share a plot of accuracy vs. delay for baselines and the variants of your method, including the variant with 1 phase and a dimension 2 MLP? \n\n2) Why is replacing one nonlinearity with a different nonlinearity useful for MPC? \n\n3) Can/should you compare to BOLT in the evaluation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "-\tThe problem formulation is interesting and new\n-\tThe algorithm appears to cleverly use the available information\n-\tThe empirical performance and experiments are promising\n\nOverall, I think the paper is proposing an interesting new idea (both formulation and algorithm), and hence I gave it a positive score. However, I think the level of polish and writing is rather low, and I think the paper would need a significant clean-up prior to publication." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes SelectFormer, a method for privately selecting data for transformer finetuning using MPC. The setting is that a data holder is trying to sell data to a user, who wants to finetune a transformer on only a subset of the holder’s data. However, the data holder doesn’t want to expose all fo their data, and the data user doesn’t want to expose their model weights to the data holder. The main idea is to iteratively learn a sequence of models, each of which is used to privately rank points in the dataset in terms of their informativeness (entropy of the output distribution of the target model). These models replace nonlinear operations like softmax with an MLP with ReLU activation, where the MLP is trained by collecting input-output pairs from the intermediate model. The authors show that their method significantly reduces the time to select a good training dataset relative to prior private baselines, without sacrificing much in terms of quality (i.e. it selects data points that lead to a good downstream model)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tThe paper is not polished or easy to read/understand\n-\tIt’s not clear if the paper compares against relevant baselines\n-\tImprecise threat model and privacy guarantees. The threat model does not clearly specify *who* should not learn *what data* \n-\tHow does the MPC handle nonlinearities in the MLP? \n-\tThe level of formalism in the paper is very low \n\nIn your privacy guarantees, please specify *who* learns or does not learn what information. For instance, if the data holder knows the architecture of the model(s) and they know the ranked entropy of their samples, why can’t they train the model locally on the most informative samples to approximate the private model? 
This leakage was not explored in the paper, to my knowledge. \n\nThe paper’s evaluation considers time-accuracy tradeoffs. However, these are mostly provided implicitly in tables that do not clearly show the tradeoff. It may be helpful to show a plot with selection time on one axis and accuracy on the other, and then see different baselines, including variants of your method. Specifically, based on the results in the Appendix, I think the 1-phase variant with a dimension 2 MLP may do well enough relative to the other hyperparameter settings, especially if you consider its lower delay. But it’s hard to judge from the tables provided. Can you produce this plot and compare it to the suggested hyperparameter settings? \n\nA lot of notation seemed to be undefined or imprecisely defined. This made the paper difficult to read. For instance, in Sec. 4.1, “with a selectivity αi = Si/Si−1” if S_i is a dataset, do you mean the ratio of the *cardinalities* of S_i and S_{i-1}? In Sec 2.1 and 2.2 -- What is the difference between M_t and M_target? Why do you want to query all the samples in D on M_t instead of M_target? I thought M_t was the finetuned model, which is unknown until you finetune with the selected data? The notation $\\hat M_i$ was not defined, and the proxy model was previously defined as $M_p$. And what is $M_g$ in Figure 2? It is not defined until later in the paper, in Section 4.2. But the definition doesn’t match Figure 2 (is it the bottom K or L layers?) What is the difference between $W_i$ and $w_i$? \n\nIn the same vein, the paper does not seem to formally write out the MPC algorithm or prove any guarantees. \n\nThe paper is missing several relevant references, including one potentially important baseline for comparison: \n-\tPang et al, “Bolt: Privacy-Preserving, accurate, and efficient inference for transformers” (S&P 2024)\n\nTable 1 is hard to read. Please enlarge. Also what metric is it providing? Please make the table and caption self-contained. This is true also of the tables in the appendix, many of which don't specify what metric they are listing.\n\n\nMinor comments:\n-\tIntro: “As shown in ??,”\n-\tThe section “Workflow” is very difficult to follow—it’s not clear what you mean by ranking samples by entropy, for instance\n-\tThe model owner has a private, small validation set, on which she wants to maximize the test accuracy.  Do you mean validation accuracy?\n-\t“offline generate a random triple called Beaver triple Beaver (1992)” wrong type of citation typo\n\n* I am not an expert in MPC and may be missing relevant literature and requirements." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Is there a way to make the grid search practical? We probably do not have the luxury of trying MPC with a data vendor multiple times with different parameters in practice. 
Maybe we can do a grid search to see what parameters perform the best on a non-private test dataset, and then use these values in all actual interactions with data owners?\n\nCan you include a 'baseline' for Figure 1? i.e., show how long the corresponding operations take without using MPC and the memory requirement for each (communication rounds could be omitted, of course)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper studies a problem that has been of interest in past work. It proposes a method that is both very efficient and achieves high utility in the empirical studies. The authors can operate in a more general setting (where examples are unlabelled) than past work. It is clear from the presentation what techniques the authors' method uses to improve upon the past work, and the techniques are made understandable at a high-level even to someone who is not an expert in the area. Furthermore, techniques such as using MLPs for dimension-reduction and fusing multiple operations are themselves sufficiently different from past work and a possibly interesting contribution independent of the problem of data selection." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers the private data selection problem, i.e. the problem of a model trainer selecting data to purchase from a data owner without revealing the purchased data to the model trainer. The authors focus on the active learning framework, which targets examples which have the largest prediction entropy, i.e. which the model can learn more from, and does not require labelled examples. Ideally multi-party computation can be used to have the model trainer and data owner compute the examples' entropies (or, if entropies are themselves sensitive, a relative ordering of the examples' entropies) without revealing any other information about the examples themselves. For transformers, MPC is infeasible because transformers use high-dimensional, non-linear operations. Past work got around this issue by approximating the non-linear operations using linear operations. However, using MPC to compute these approximations remains expensive and the approximation comes at a cost in accuracy.\n\nThe authors instead propose a number of techniques to make MPC evaluation of transformers more efficient. First, they fuse multiple non-linear steps. Next, they use multi-layer perceptrons (MLPs) to approximate the fused steps and reduce their dimension (as opposed to past work, which use MLPs to reduce a single non-linear step without dimension reduction). Third, they employ a multi-phase selection protocol, where initially a small model is used to filter the initial dataset $S$ into $S_1$, and in each successive step a larger model is used to filter $S_i$ into $S_{i+1}$ until the final dataset is acquired. To find the MLP approximation and also construct the smaller models, the model trainer purchases a small arbitrary bootstrap dataset up front, and uses the inputs/outputs of the large model on this dataset to train the MLP to approximate layers of the large model. 
The number of layers and width of the layers, as well as the dimension of the MLPs, can be reduced to construct a smaller model.\n\nThe authors perform an empirical comparison of their data selection protocol to (1) picking data at random (2) an oracle which chooses the highest-entropy examples without MPC, and (3) MPCFormer. At varying dataset sizes, the authors' result is competitive with baseline (2) in terms of accuracy and achieves a ~200x speedup in runtime over (2), and has large accuracy improvements over (1) and (3), on a variety of empirical tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The MLP dimensions, proxy layers, and selectivity per round are chosen via grid search, which might limit the practicality of the method. It would be nice to have either \"standard\" guidance for choosing these parameters or a justification for grid search being practical. See Questions below.\n\nThe paper also needs some editing, there are some major typography errors, though I expect this is easy to handle in a revision. For example: \n* Line 82, citation to ??\n* Line 398 - \"under 3 compute parties\" appears twice\n* Line 498, $d_i$ is not formatted properly" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Have you considered alternatives to MPCFormer, such as THE-X (that you cite, Chen et al 2022) or other modern 2PC inference papers for transformers, especially if they use data-independent nonlinearity approximations? Or are there reasons why such baselines don't apply or are already outperformed by MPCFormer in a data selection setting where distillation data is scarce?\n- If I understand correctly, computing ReLUs is still a bottleneck in MPC. This might be why FFNs take most of the computation time in Fig 1. Yet, your method introduces even more ReLUs by adding MLPs. Have you considered other learnable approximations as an alternative to MLPs, such as polynomials with learnable coefficients, which could be more MPC-friendly? Why should we expect MLP approximation to be the best way to obtain MPC-friendly proxy models?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Main strengths:\n- SelectFormer shows strong improvements in delay and accuracy over baselines, which include two naive baselines (Oracle and Random) and one recent MPC inference paper (MPCFormer). \n- Multiphase selection is a nice technique. The key idea is that we can significantly reduce delay by selecting the first datapoints with coarse model proxies, and progressively use more accurate and slower proxies (trained on a now larger collection of purchased datapoints) to better select the following datapoints. 
It turns out that this technique also improves accuracy a bit.\n- Another useful contribution is MLP approximation of nonlinearities, where the MLPs are trained by generating a large synthetic dataset using metrics coming from the small number of datapoints already purchased.\n- The paper's evaluation is broad and strong, across both vision and NLP tasks. I appreciated the numerous experiments and very granular ablation studies, such as Fig 4 and Fig 6.\n\n\nOther strengths:\n* It is useful to compare the accuracy drop from different MLP approximations.\n- Handling imbalanced and unlabeled data is important.\n- Using computation/communication parallelism sounds intuitive, but it might not be done by other works, so it is valuable to evaluate it. This parallelism offers a nice gain in performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on private data selection for transformers over MPC. The authors consider a two-party setting, where a model owner wishes to purchase data points from a data owner. The model owner needs this data to fine-tune a target model. To find the most relevant data points without revealing the rest of the dataset, the two parties engage in a multiparty computation. \n\nComputational and communication overhead is a key challenge in making MPC practical for data selection. In an ideal world, the model owner could evaluate its target model on data points with MPC. However, this is not practical for large transformer models, which contain nonlinearities (such as layer norm or softmax) that are prohibitively expensive to compute with MPC. Thus, an alternative approach is to use a cheaper proxy model to approximate the target model efficiently while still selecting useful datapoints. \n\nThis paper proposes a new way of building such proxy models, by replacing costly nonlinearities by more MPC-efficient, trainable, multilayer perceptrons. The authors also introduce a multiphase selection approach where datapoints are selected progressively, thereby using previously selected datapoints to improve the selection of future datapoints. Delay can also be optimized by overlapping computation and communication. \n\nThe paper is evaluated on vision and NLP tasks, and shows significant improvements in both delay and accuracy compared to prior work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Main weaknesses:\n- I don't know the literature extensively, but I wonder if MPCFormer is really the strongest baseline against which we should evaluate SelectFormer (the other baselines, Oracle and Random, are useful to set the delay/accuracy range but are not real alternatives to SelectFormer). Indeed, as the authors note, \"MPCFORMER’s model distillation approach is ill-suited to data selection\", and I wonder if this is the reason behind MPCFormer's particularly poor accuracy (Table 3). The paper mentions THE-X (Chen et al, 2022), which could be a stronger baseline. Another potentially relevant paper is Bolt, published at S&P 24: https://eprint.iacr.org/2023/1893. I don't know these works in detail, but they might be more amenable to data selection than MPCFormer if they do not rely on data distillation or data-dependent approximations. 
In short, I am worried that MPCFormer might be a strawman against which SelectFormer shines too easily.\n- Another concern is that the techniques proposed in this paper seem to only apply to a quite specific application, namely MPC data selection. Hence, depending on how widespread MPC data selection is, SelectFormer could have a pretty limited impact. Indeed, the authors note that their \"MLP approximation is specifically suitable for data selection while impractical for model inference directly\", which might limit the applicability of SelectFormer to other problems. \n- Finally, I am not an expert in MPC systems, but I am a bit skeptical of the blanket claim that \"No prior MPC systems exploit such parallelism\". I remember a similar idea being mentioned by Meta in a research whitepaper (https://research.facebook.com/publications/private-computation-framework-2-0/), and a cursory web search returned a preprint showing how \"it is possible to carefully orchestrate the computation and communication steps to overlap\" in ML MPC training and inference (https://arxiv.org/pdf/2209.13643). The paper might still be making valuable contributions in MPC parallelism, but highlighting such contributions might benefit from a more detailed comparison to prior work. \n\nMinor comments:\n* typo: \"Note that we cannot directly compare to PUMA Dong et al. (2023), which is designed under 3 compute parties. and three computing parties.\"\n* I can't find the data showing that MPS reduces total delay by 33% to 61%. Is this from the difference between PM and PMT in Figure 6? Maybe this would be easier to break down if there was a delay column in Table 4?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses the important and challenging problem of efficient private data selection.\n2. The proposed hierarchical algorithm demonstrates thoughtful consideration of multiple components, including multi-phase selection and dimension optimization, to enhance efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an efficient approach to private data selection, leveraging multi-party computing (MPC) for data appraisal in complex models such as transformers and ViT. The authors propose using a proxy model that approximates the complex model by emulating non-linear modules with MLP for data appraisal. While experimental results demonstrate improved efficiency compared to random selection and MPCFormer, the paper would benefit from clearer algorithmic descriptions and more comprehensive experimental analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The pipeline of the algorithm is not clear. In multi phase selection, why do you need to query the candidate data? Whether or not to select this subset of data depends on the test data performance. It seems that when you train the MLP, except for the selected data, you also generate an Gaussian distribution for the layer, but how to use this distribution to supervise fine-tuning is not clear. In sum, I hope there will be an algorithm pipeline to clearly show each steps and which parameters are hyper-parameters.\n2. The results in Table 2 is surprising. Does it mean that MLP is enough and we can drop all non-linear layer since some of your results show that all emulations with MLP outperform no emulations. \n3. The gap between MLP and non-linear module is simply shown by the final accuracy of your task, which may contain much randomness. Could you explain the gap in embedding layer? Like how much embedding difference for different layer emulation.\n4. The experimental results are not clear. E.g., in Table 3, did you test the model under the same communication and computation budget? In Figure 5, what does 1 phase mean? How many phased do you use in the \"ours\"? Why not compare your delay with the baseline MPCFormer?\n5. Lack of analysis. As your work focus on improving the efficiency and keeping performance, it is important to specifically analyze the computation and communication costs." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Secure and fast data selection & appraisal over MPC (Multi-Party Computation), for training NLP/CV Transformer models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024selectformer,\ntitle={{SELECTFORMER}: {PRIVATE} {AND} {PRACTICAL} {DATA} {SE}- {LECTION} {FOR} {TRANSFORMERS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2cF3f9t31y},\nnote={under review}\n}" }, "abstract": { "value": "Critical to a free data market is $ \\textit{private data selection}$, i.e. the model owner selects and then appraises training data from the data owner before both parties commit to a transaction. To keep the data and model private, this process shall evaluate the target model to be trained over Multi-Party Computation (MPC). While prior work suggests that evaluating Transformer-based models over MPC is prohibitively expensive, this paper makes it practical for the purpose of data selection. Our contributions are three: (1) a new pipeline for private data selection over MPC; (2) emulating high-dimensional nonlinear operators with low-dimension MLPs, which are trained on a small sample of the data of interest; (3) scheduling MPC in a parallel, multiphase fashion. We evaluate our method on diverse Transformer models and NLP/CV benchmarks. Compared to directly evaluating the target model over MPC, our method reduces the delay from thousands of hours to tens of hours, while only seeing around 0.20% accuracy degradation from training with the selected data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Secure Multiparty Computation", "Machine Learning", "Efficiency", "Transformer model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c40d1c43ba28dc7a9d005bdb42e6c2c8d8a33bf2.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SELECTFORMER: PRIVATE AND PRACTICAL DATA SE- LECTION FOR TRANSFORMERS" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: 2d734s2WDb
title: VIBEID: A STRUCTURAL VIBRATION-BASED SOFT BIOMETRIC DATASET FOR HUMAN GAIT RECOGNITION
track: main
status: Active
keywords: Structural vibrations;Gait Recognition;Deep learning;Machine learning
primary_area: datasets and benchmarks
rating: 3;3;5;5
confidence: 3;4;1;5
soundness: 3;2;2;3
contribution: 2;2;3;2
presentation: 3;3;3;3
rating_avg: 4
confidence_avg: 3.25
soundness_avg: 2.5
contribution_avg: 2.25
presentation_avg: 3
corr_rating_confidence: -0.169031
Review (raw JSON: 4 reviews plus the submission metadata):
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I would appreciate if the authors could address the concerns I raised in the weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper presents a well-justified study, especially it clearly identifies current research gap for person identification. Overall it reads very well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study introduces a benchmark for gait recognition utilizing a novel structural vibration sensing technique, the geophone. and the new benchmark comprised of in total 100 subjects, collected under indoor or outdoor settings. It invesitgated whether the novel sensing modality can encode identity related information, and what is the limitations or sensitivity of this technique. Although this work addresses an interesting topic, it may not yet provide the technical depth or extensive experimental validation expected for broader applicability. There are also several concerns regarding the experimental settings and presentation clarity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* [Sample size] Probably for the gait recognition task, we are more interest in how many subjects collected. The subject size, compared with current large dataset, especially the GaitSet, is still not that comparable. \n\n* [Technical contents] I am concerned that the technical content is somewhat limited, even for a benchmark paper, for ICLR. Please consider adding more experiments and tasks to thoroughly validate the usability of this dataset, such as Re-ID, gait event detection, and generalization across subpopulations... I encourage the authors to refer to established works like GaitSet for inspiration. Gait data is highly complex, influenced by factors such as age, gender, emotion, and health conditions. Reflecting on these factors in your experiments would enhance the depth of the study.\n\n* [Applicability] I am also concerned that this kind of ambient sensor can only be applied indoor or with relatively small distances, which might limit its application, compared to wearable data?\n\n* [Experiment setup] In the experiment settings, I noticed that there were no concurrent human activity when recording the data, this may be another issue that limits the usability of this study. Additionally, will the data be sensitive to the perspective of the sensor, as I know it is quite sensitive for vision based person identification. \n\n* [Gait event] gait event detection, is this be validated in terms of accuracy?\n\n* [Dataset details] Further elaboration on the dataset’s composition and subject split for person identification would be valuable, particularly for readers unfamiliar with this topic. 
\n\n* [Table clarity] Table 5 is not clearly illustrated, what is the performance comparison between structural vibration and camera? Very limited information is given in both the table and the associated texts. Expanding on this comparison would help readers understand the relative strengths of each technique.\n\n* [Minor - clarification] I assume the subjects of A2, A3, A4 are part of A1, correct? Please clarify this. \n\n* [Minor - citation] When citing a work which actually does not play any role in your sentence, please use (X et al., XXXX), rather than X et al. (XXXX)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Involving human data collection" }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Although the authors define the floor in different classes, the hardness might be a more reliable way to classify. Since different carpets' thicknesses may have different responses. And what is the distance range for the geophone, since 4 m is not far for camera sensor" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is clear and easy to follow, and the tables and figures are easy to understand\nThe proposed question is interesting, trying to build the connection between identity and walking vibration, a fine detail when a human is walking. And authors collect a relatively large dataset in multiple conditions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors built a dataset with 100 people using geophone to do multiple experiments based on the human's structural vibration. It consists of multiple covariances including floor types, and distances. The work tries to find a connection between the vibration and identities and builds multiple benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In real-life applications, it is hard to find a good condition to use a geophone to capture a human's gait with little noise.\n\nHow to control the noise in outdoor cases.\n\nCompared to a camera, the vibration-based method is restricted by the sensor and distance. \n\nThe protocol is not clear. How is the train and test set defined? For human identification, the identities appearing in the training set will not be present in the test set. It seems these experiments do not follow this setting" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What do you mean by events in Table 2?\n2. Line 407 - where is table 10? or did you mean 1?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The description of the data collection protocol is clearly written with sufficient details and clear explanations of the research motivation\n2. The introduction and related work sections contain important background information justifying the motivation for the introduced dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The presented work introduces a new biometric dataset for human gait recognition based on structural vibrations. The dataset is applied to various tasks such as person identification, domain adaptation, and multi-modal scenarios combining vibration and vision-based identification methods. Experimental analysis includes verification of machine-learning and deep learning approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Is there any requirement for using a specific sensor type during the inference if trained on the presented dataset? I'm wondering about the practical implication of the proposed solution. \n2. The work indicates that the concurrent activity was not taken into account, however it's very possible to happen in real-life scenarios. Would the presented dataset be sufficient for handling such scenarios? How should one prepare for additional noise introduced in this way? \n3. It's not clear how filtering of potential noise was performed? Was the assumption that the data collection is performed in an isolated environment without any noise? You mentioned that there was environmental noise present, but how do you quantify its presence? If the assumption is that there is minimal or no noise, it again raises question around the practicality of the solution.\n4. It's not clear how the data was split for training and testing? Were the same subjects present in both subsets or did you ensure no overlap?\n5. One of the motivation behind introducing a new dataset is that other datasets contains a limited number of subjects. It's mentioned that there are 100 subjects in the proposed dataset but then only 30 and 40 subjects are used for floor types and distance measurements. Why not all 100 subjects were used for all of the scenarios?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. To what extent do walking speed and the carrying of objects impact recognition performance?\n2. 
Building on question 1, does abnormal gait pose significant challenges for re-identification?\n3. The VIBEID dataset studies an operating distance range of 1.5m to 4m, while vision-based gait recognition typically works at distances over 10m. What is the distance limit for vibration sensors to capture meaningful gait signals?\n4. If there are obstructions between subjects and the sensor, is reliable recognition still possible?\n5. How does the proposed vibration-based gait recognition handle scenarios with multiple pedestrians walking simultaneously?\n6. I recommend replacing Figure 3 with a clearer version.\n7. The evaluation protocol should be more detailed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Vibration-based gait recognition introduces a novel approach for human identification.\n- The proposed baseline methods are effective.\n- This work establishes the largest-scale vibration-based dataset to date." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel dataset termed VIBEID, designed for human gait recognition using structural vibration data. The dataset includes recordings of 100 subjects across various distances, floors, and environments. Experiments demonstrate that structural vibration can serve as a viable biometric trait across different scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The dataset has a limited number of subjects, although it includes over 88 hours of recorded data.\n- Compared to commonly used vision-based gait recognition, the operating distance remains relatively short.\n- The evaluation setup lacks clarity." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "VIBeID offers a dataset and benchmark for person identification using structural vibrations from walking on three surfaces and three distances from sensor, enabling comparison with video-based methods." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024vibeid,\ntitle={{VIBEID}: A {STRUCTURAL} {VIBRATION}-{BASED} {SOFT} {BIOMETRIC} {DATASET} {FOR} {HUMAN} {GAIT} {RECOGNITION}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2d734s2WDb},\nnote={under review}\n}" }, "abstract": { "value": "We present VIBeID, a dataset and benchmark designed for advancing non-invasive human gait recognition using structural vibration. Structural vibrations, produced by the rhythmic impact of the toe and heel on the ground, are distinct and can be used as a privacy-preserving and non-cooperative soft-biometric modality. We curated the largest dataset VIBeID consists of footfall generated structural vibrations of 100 subjects. Existing datasets in this field typically include around ten subjects and lack comprehensive exploration of domain adaptation. To thoroughly explore the domain adaptation aspect of this biometric approach, we recorded vibration data on three distinct floor types (wooden, carpet, and cement) and at three distances from the geophone sensor (1.5 m, 2.5 m, and 4.0 m), involving 40\nand 30 subjects, respectively. Additionally, we benchmarked our dataset against video recordings from 15 individuals in an outdoor setting. 
Beyond providing 88 hours of raw vibration data, VIBeID establishes a comprehensive benchmark for a) person identification: where the aim is to recognize individuals through their unique structural vibrations, b) domain adaptation: assessing model performance across different walking surfaces and sensor positions, and c) multi-modal comparison: comparing vibration-based and vision-based identification methods. Our experiments, using both machine learning and deep learning approaches, establish a baseline for future research in this field, and introduce a large-scale dataset for the broader machine learning community." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Structural vibrations", "Gait Recognition", "Deep learning", "Machine learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4dabcff589d64c0ea1808448edfe022eb765c2ca.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6d0a082519920bc4a9cf6f87a578e6de8fa5e77f.zip" }, "title": { "value": "VIBEID: A STRUCTURAL VIBRATION-BASED SOFT BIOMETRIC DATASET FOR HUMAN GAIT RECOGNITION" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: 2e4ECh0ikn
title: Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics
track: main
status: Active
keywords: Turn-taking;Conversation AI;Audio Foundation Models;Evaluation Metric;Evaluation Benchmark
primary_area: datasets and benchmarks
rating: 3;5;6;6;6
confidence: 3;3;3;3;3
soundness: 2;3;2;3;3
contribution: 1;2;3;3;3
presentation: 2;3;3;3;3
rating_avg: 5.2
confidence_avg: 3
soundness_avg: 2.6
contribution_avg: 2.4
presentation_avg: 2.8
corr_rating_confidence: 0
Review (raw JSON: 5 reviews plus the submission metadata):
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The Fisher dataset is a common dataset comparable to Switchboard. What is the performance of the supervised turn-taking prediction model on this dataset?\n2. How can your evaluation protocol be adapted or applied in scenarios where supervised datasets are not available for other languages, such as Chinese?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The originality of the work is commendable. The authors propose a novel evaluation protocol to assess the turn-taking capabilities of spoken dialog systems.\n2. The paper is well-written and provides sufficient experimental details in the Appendix.\n3. The authors plan to open source the evaluation platform." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new evaluation protocol to assess the spoken dialog system's turn-taking capabilities, i.e., the Moshi and Cascaded model. They use a supervised model as a judge which is trained to predict turn-tasking events in human-human conversation (i.e., Switchboard). The paper presents a comprehensive user study that evaluates the Moshi and Casaded model on their ability to perform turn-taking events, and it finds that they sometimes do not understand when to speak up, can interrupt too aggressively, and rarely backchannel. The main contributions are:\n1. A new evaluation protocol to assess the spoken dialog system's turn-taking capabilities.\n2. Some insight about existing spoken dialogue systems through user study.\n3. Additionally create a test benchmark using Switchboard dataset to evaluate SALMONN, Qwen2-audio-instruct, Qwen-audiochat, Whisper+GPT-4o on their ability to understand and predict turn-taking events." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The evaluation protocol is highly expensive, it needs a supervised dataset to train the judge model. It doesn't work if we do not have a supervised dataset in other languages, such as Chinese. \n2. The filler word set for backchannel detection is heuristic. It may ignore some backchannel case not in filler word set. \n\n1. The evaluation protocol is highly expensive, as it requires a supervised dataset to train the judge model. This approach is not feasible if we lack a supervised dataset in other languages, such as Chinese.\n2. The filler word set for backchannel detection is heuristic and may miss some backchannel cases that are not included in the filler word set." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The definition of turn-taking is detailed and clear.\n\n2. The evaluation protocol proposed in this paper contributes to better assessing the performance of audio foundation models in dialogues, providing strong support for the development of voice dialogue systems.\n\n3. This paper reveals many issues existing in current AI dialogue systems when handling turn-taking, offering valuable references for future research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenges of evaluating the turn-taking capabilities of audio foundation models (FMs) in conversational settings.\nIt defines 6 types of Turn-Taking Events and evaluates the performance of end-to-end speech dialogue models as well as cascaded systems.\nThrough the results obtained from this study, the authors discovered numerous issues with existing AI dialogue systems in handling turn-taking, such as sometimes failing to intervene in conversations at appropriate times or excessively interrupting others. Furthermore, the authors conducted tests on multiple open-source and closed-source audio foundation models, revealing their limitations in understanding and predicting turn-taking events, and highlighting areas for improvement." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This study only tested a few open-source and closed-source audio FM models.\n\n2. There is a lack of comprehensive performance evaluation and summary." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Here are questions for the authors:\n - The thresholds in Sec. 4.4-4.8 seem arbitrary. Is there a specific reason for choosing these values? All units appear to represent likelihoods, yet they range from negative ($threshold_3$ = -0.45) to positive values ($threshold_2$ = 0.1).\n - There are concerns about the reliability of the judge model. Since all results are based on comparisons with this model, is there concrete evidence supporting its credibility? 
Specifically, the conclusion that Moshi[1] is \"too aggressive\" lacks persuasiveness if it relies solely on comparisons with the judge model.\n\n[1] Defossez et al. Moshi: a speech-text foundation model for real-time dialogue" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strengths of this paper are as follows:\n 1. This paper provides an automated turn-taking protocol for audio foundation models\n 2. The evaluation platform will be open-sourced." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an evaluation protocol to measure the turn-taking capabilities of audio foundation models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of this paper are as follows:\n 1. The study aims to measure precise turn-taking, but the thresholds are set to arbitrary values.\n 2. The participants introduced in Sec 3 seem biased, consisting of the authors and related individuals.\n 3. The confidence in some evaluations (Fig. 3(b), (e)) appears high, but no explanation is provided." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My questions are listed above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper proposes a comprehensive evaluation protocol and well-designed metrics to assess the turn-taking capabilities of spoken dialogue systems. The evaluation framework and metrics are thoughtfully developed and provide valuable insights.\n2. The paper extends the evaluation of turn-taking capabilities of spoken dialogue systems from corpus-level statistics to a more granular assessment of the timing of turn-taking events. This fine-grained approach enables a more accurate reflection of a spoken dialogue system’s turn-taking capabilities.\n3. The proposed evaluation metrics provide insights into the limitations of current systems in achieving interactive and natural conversations, highlighting areas for potential improvement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an evaluation protocol designed to assess the turn-taking capabilities of spoken dialogue systems. It evaluates the exact timing of these events using a supervised model trained to predict them. The experimental results reveal interesting insights about existing spoken dialogue systems and offer valuable suggestions for their future development." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
In Metric (E), the judge labels show low consistency with human relevance judgments, indicating that this metric may have limited reliability in assessing the model's ability to handle user interruptions effectively.\n2. My primary concern is the relatively low agreement between the majority of judge labels and human judgments, with most falling below 80%. This raises questions about the strength of the claim that the proposed metrics maintain high consistency with human decisions.\n3. GPT-4o was not evaluated.\n\nIf my above concerns are resolved, I would consider increasing my rating." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the main difference between the proposed evaluation protocol and the previous approach by Ekstedt and Skantze (2022)? Is it impractical to apply the metrics from prior turn-taking evaluation methods to audio FMs?\n2. While the turn-taking prediction model has been evaluated on an out-of-domain task-oriented spoken dialogue corpus, could you evaluate it on additional non-task-oriented spoken dialogue datasets to assess the generalizability of the model?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The evaluation protocol is novel and well-motivated.\n2. The experimental analysis provides valuable insights into turn-taking capabilities of audio foundation models (FMs).\n3. The user study reveals noteworthy observations about current spoken dialogue systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel evaluation framework for assessing turn-taking capabilities in audio foundation models (FMs). The authors first propose metrics for five core conversational abilities: determining when to speak up, backchannel, interrupt, convey turn-taking cues, and handle interruptions. They develop a supervised model trained on human-human conversations to serve as a judge for evaluating these turn-taking events. Using this framework, they conducted a user study with different spoken dialogue systems (full-duplex E2E spoken dialogue system Moshi and VAD-based cascade dialogue system) and evaluated them. They evaluate several open-source and proprietary audio FMs on their ability to understand and predict turn-taking events." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Turn-taking prediction models used in evaluation protocol require training, which limits scalability and applicability. \n2. The paper does not thoroughly address how its proposed evaluation protocol compares with previous turn-taking approaches, such as Ekstedt and Skantze (2022).\n\nReference\n* Ekstedt, Erik, and Gabriel Skantze. 
Voice activity projection: Self-supervised learning of turn-taking events. Interspeech 2022" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel evaluation protocol using a turn-taking judge model to automatically assess spoken dialog systems, providing valuable insights into their turn-taking capabilities." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024talking,\ntitle={Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2e4ECh0ikn},\nnote={under review}\n}" }, "abstract": { "value": "The recent wave of audio foundation models (FMs) could provide new capabilities for conversational modeling. However, there have been limited efforts to evaluate these audio FMs comprehensively on their ability to have natural and interactive conversations. To engage in meaningful conversation with the end user, we would want the FMs to additionally perform a fluent succession of turns without too much overlapping speech or long stretches of silence. Inspired by this, we ask whether the recently proposed audio FMs can understand, predict, and perform turn-taking events? To answer this, we propose a novel evaluation protocol that can assess spoken dialog system's turn-taking capabilities using a supervised model as a judge that has been trained to predict turn-taking events in human-human conversations. Using this protocol, we present the first comprehensive user study that evaluates existing spoken dialogue systems on their ability to perform turn-taking events and reveal many interesting insights, such as they sometimes do not understand when to speak up, can interrupt too aggressively and rarely backchannel. We further evaluate multiple open-source and proprietary audio FMs accessible through APIs on carefully curated test benchmarks from Switchboard to measure their ability to understand and predict turn-taking events and identify significant room for improvement. We will open source our evaluation platform to promote the development of advanced conversational AI systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Turn-taking", "Conversation AI", "Audio Foundation Models", "Evaluation Metric", "Evaluation Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/baecb289c6c4f3f86f79cf299d51809cc318126f.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. 
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
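The reviews above repeatedly question how closely the judge model's turn-taking labels track human judgments (e.g., agreement "below 80%") and how the timing thresholds are set. A minimal sketch of one plausible timing-tolerant scoring scheme is given below; the 0.5-second tolerance, the event labels, and the `event_agreement` helper are illustrative assumptions, not the paper's actual metric.

```python
def event_agreement(predicted, reference, tolerance=0.5):
    """Match same-type events whose onsets differ by at most `tolerance` seconds.

    predicted, reference: lists of (onset_seconds, label) tuples, with labels
    such as "turn_shift", "backchannel", or "interruption" (illustrative).
    Returns (precision, recall) over greedily matched events.
    """
    matched, hits = set(), 0
    for t_p, lab_p in predicted:
        for j, (t_r, lab_r) in enumerate(reference):
            if j not in matched and lab_p == lab_r and abs(t_p - t_r) <= tolerance:
                matched.add(j)
                hits += 1
                break
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(reference) if reference else 0.0
    return precision, recall


# Example: the judge predicts a backchannel slightly early and misses one turn shift.
pred = [(3.1, "backchannel"), (7.9, "turn_shift")]
ref = [(3.3, "backchannel"), (8.0, "turn_shift"), (12.4, "turn_shift")]
print(event_agreement(pred, ref))  # -> (1.0, 0.666...)
```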
2eFq6S35iB
HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts
main
Active
Generalized Category Discovery
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;6;6;6
4;4;3;3
3;2;2;2
2;2;2;3
3;3;2;3
5.75
3.5
2.25
2.25
2.75
-0.57735
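For reference, the aggregate fields in these records (`rating_avg`, `confidence_avg`, `corr_rating_confidence`) are simple statistics derived from the semicolon-separated `rating` and `confidence` strings; the sketch below recomputes them. The helper name `summarize` is ours, and the correlation is the population Pearson coefficient, which reproduces the stored values (e.g., -0.57735 for the record above).

```python
from statistics import mean, pstdev

def summarize(rating: str, confidence: str):
    r = [float(x) for x in rating.split(";")]
    c = [float(x) for x in confidence.split(";")]
    mr, mc = mean(r), mean(c)
    # Population Pearson correlation between review ratings and confidences.
    cov = mean((ri - mr) * (ci - mc) for ri, ci in zip(r, c))
    sr, sc = pstdev(r), pstdev(c)
    corr = cov / (sr * sc) if sr and sc else 0.0
    return mr, mc, corr

print(summarize("5;6;6;6", "4;4;3;3"))  # -> (5.75, 3.5, -0.5773502...)
```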
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents a new, practically meaningful, and challenging setting, and constructs corresponding datasets.\n2. The domain-semantic disentangled design is well-reasoned, clearly aligning the motivation.\n3. The proposed approach demonstrates significant performance improvement on SSB-C.\n4. The writing is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new challenge for Generalized Category Discovery, which requires model to categorize unlabeled data in the presence of domain shifts. Traditional GCD methods assume all images come from the same domain, which leads to a significant performance drop under domain shifts. The proposed HiLo framework explicitly disentangles semantics and domain, achieving domain adaptation in GCD through patchmix and curriculum learning. Experimental results show performance improvements, validating the effectiveness of the approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The performance gain on DomainNet is considerably smaller than on SSB-C, and the improvement over methods like SimGCD, which does not account for domain shift, is modest. This indicates limited robustness on various domain shifts and fails to highlight the advantages of the proposed approach.\n2. The method is sensitive to certain hyperparameters, and $r^{'}$ does not exhibit consistent performance across the original and new domains.\n3. The approach of decoupling domain and semantics is derived from [1], and the use of patchmix for domain adaptation is adapted from [2]. The curriculum learning strategy is also straightforward. Overall, the method seems to be an assembly of prior works, lacking substantial novelty.\n4. There is no analysis of the disentangled domain and semantic features, such as distribution visualizations. This would help illustrate the effectiveness of the disentanglement.\n5. In line 287, same representation loss $L^{rep}_s$ on both domain and semantic features is confusing. This approach may lead domain features to capture information beyond true domain characteristics. It would be valuable to see t-SNE visualizations of domain features, semantic features, and their combination. The author does not provide a corresponding discussion.\n6. Line 313 mentions using pre-trained DINO to obtain $z_d$, but previously $z_d$ is associated with a projection head. If the projection head is discarded, then $z_d$ will always be identical in different time steps. If it is retained, the term “pretrained” is confusing. This needs clarification.\n7. The ablation study is somewhat unclear. 
For instance, in row (5) where only deep features are used, does this mean all other designs related to the shallow feature $z_d$ are also omitted? This also needs clarification.\n\nReference\n\n[1] Learning deep representations by mutual information estimation and maximization\n\n[2] Patch-mix transformer for unsupervised domain adaptation: A game perspective" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The HiLo architecture extracts features from different layers of the vision Transformer and decouples domain and semantic features by minimizing mutual information. This feature processing method, based on the neural network hierarchical structure and information theory, provides a more effective feature representation for category discovery in the presence of domain shifts and avoids the problem of feature confusion in traditional methods.\n\n2. The PatchMix method is introduced into the GCD task and innovatively extended. By adjusting its objective function, it can adaptively utilize labeled and unlabeled data for training. This extension not only combines the advantages of data augmentation but also flexibly adjusts the learning process according to the nature of different data, enhancing the model's ability to learn data from different domains and categories.\n\n3. The curriculum learning method is employed, which dynamically adjusts the sampling probability weights according to the difficulty of samples and the unknown degree of domains. This strategy of gradually introducing samples from easy to difficult conforms to the learning law, enabling the model to better adapt to the challenges brought by domain shifts and improving the model's convergence speed and robustness to complex data distributions.\n\n4. In terms of method design, innovative technical architectures and learning strategies are used, as well as theoretical analyses to verify their effectiveness. From the theoretical derivation of the target error to the analysis of the roles of different components, a solid theoretical foundation is provided for the innovation of the method, demonstrating the advantage of the close integration of theory and practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Generalized Category Discovery (GCD) is a challenging task where, given a partially labeled dataset, the model must classify all unlabeled instances. This paper introduces a new task and method to handle the GCD problem when the unlabeled data contains images from different domains. 
In terms of the method, the HiLo architecture and learning framework involves extracting \"low-level\" (early layers) and \"high-level\" (late layers) features from a vision Transformer and decoupling domain and semantic features by minimizing the mutual information between the two sets of features. The PatchMix contrastive learning method is introduced into the GCD task, with its objective function extended to enable the utilization of both labeled and unlabeled data for training. Curriculum learning is adopted, gradually increasing the sampling probability weight of samples predicted to be from unknown domains to enhance the model's robustness to domain shifts. Experiments are conducted on the DomainNet and the SSB-C benchmark datasets constructed based on the Semantic Shift Benchmark (SSB). The experimental results show that HiLo significantly outperforms existing category discovery models, validating the effectiveness of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In HiLo, features are disentangled by assuming that features from different layers represent domain and semantic information, respectively and minimizing the mutual information based on this assumption. However, this assumption may oversimplify the complexity of feature representation in neural networks. In fact, features from different layers may be a mixture of multiple types of information. Simply defining the early layers as domain features and the late layers as semantic features may not be entirely accurate, which may lead to incomplete feature disentanglement in some complex data distributions and affect the performance and generalization ability of the model.\n\n2. The introduction and extension of PatchMix in the GCD task is an innovation, but it also brings some problems. The adjustment of its objective function and its application on different data increases the complexity of the model. When dealing with data with large domain differences, it is a challenge to determine the mixing proportion and application method accurately. If not handled properly, it may introduce too much noise or incorrect information, which may instead interfere with the learning process of the model and reduce the classification performance.\n\n3. In the curriculum learning method, the adjustment parameters of the sampling probability weights need to be selected through the validation set, which increases the dependence of the model on specific datasets. Moreover, for different datasets and tasks, the optimal values of these parameters may vary greatly, and the model cannot adaptively determine these parameters. If these parameters cannot be correctly selected in a new dataset or task, curriculum learning may not be able to play its intended role. It may even have a negative impact on the learning of the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
For the experimental results of the ORCA method, what is the backbone used by the authors?\n2. Were any curriculum learning alternatives considered, such as adaptive weighting based on difficulty or dynamic sample weighting? A brief discussion on these choices would clarify why the current approach was favored." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents an innovative GCD approach by combining mutual information minimization with domain-specific data augmentation and curriculum learning to handle domain shifts effectively.\n2. Extensive evaluation on both synthetic (SSB-C) and real-world (DomainNet) benchmarks demonstrates the model's robustness and its superiority over baseline GCD models, especially under domain-shifted conditions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the HiLo framework, a learning method aimed at tackling Generalized Category Discovery (GCD) under domain shifts. HiLo addresses challenges in categorizing both seen and unseen categories across distinct domains within partially labeled datasets, leveraging a multi-faceted approach: mutual information minimization to separate domain and semantic features, PatchMix for augmented domain adaptation, and a curriculum learning strategy. The proposed method is evaluated on synthetic and real-world domain-shift datasets, showing substantial improvements over existing GCD models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the \"Problem statement,\" the following sentence is unclear: \"The objective of GCD is ... with singleton cardinalities for the latter.\" The author needs to differentiate between the GCD task setting and the domain shift GCD task setting, so this statement should be revised for clarity and precision.\n2. The font sizes of the tables are not standardized, and the font in table 2 is too small.\n3. I am curious how many runs each of the authors' experimental results were derived from; given that the differences in the results of GCD benchmark tests can be very large, the authors should have reported error bars from three independent runs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please clarify the novelty of the proposed method, and include more comparisons with UniOT in the main results." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes a new problem setting and proposes a HiLo, which combines multiple techniques from domain adaption and achieves better results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new problem setting: Generalized Category Discovery (GCD) with domain shift. The authors leverage techniques from domain adaptation and curriculum learning to propose a new method called HiLo. Comprehensive experiments on the proposed benchmark demonstrate substantial improvements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the method appears limited, as it seems to combine various techniques from different domains.\n\n2. The comparison with UniOT should be included in the main results. Since the proposed setting is similar to universal domain adaptation, it is essential to compare methods from both domains in the main results.\n\nMinor: \n\nMissing citation for the following important paper\n\n[1] Rastegar et al. Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery. NeurIPS 2023.\n\n[2] Gu et al. Class-relation Knowledge Distillation for Novel Class Discovery. ICCV2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "we extracts domain and semantic information independently and minimize their mutual information while incorporating contrastive learning for robust representations with pseudo-labelling strategies" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024hilo,\ntitle={HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2eFq6S35iB},\nnote={under review}\n}" }, "abstract": { "value": "Generalized Category Discovery (GCD) is a challenging task in which, given a partially labelled dataset, models must categorize all unlabelled instances, regardless of whether they come from labelled categories or from new ones. In this paper, we challenge a remaining assumption in this task: that all images share the same \\underline{domain}. Specifically, we introduce a new task and method to handle GCD when the unlabelled data also contains images from different domains to the labelled set. Our proposed `HiLo' networks extract High-level semantic and Low-level domain features, before minimizing the mutual information between the representations. Our intuition is that the clusterings based on domain information and semantic information should be independent. We further extend our method with a specialized domain augmentation tailored for the GCD task, as well as a curriculum learning approach. Finally, we construct a benchmark from corrupted fine-grained datasets as well as a large-scale evaluation on DomainNet with real-world domain shifts, reimplementing a number of GCD baselines in this setting. We demonstrate that HiLo outperforms SoTA category discovery models by a large margin on all evaluations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generalized Category Discovery" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c6c6274e20c9ed743ebf95ea90ca9e108a82e0b3.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "HiLo: A Learning Framework for Generalized Category Discovery Robust to Domain Shifts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ea5TNVR0c
Advancing LLM Reasoning Generalists with Preference Trees
main
Active
Reasoning;Alignment;Data
alignment, fairness, safety, privacy, and societal considerations
5;6;6;8
3;3;3;4
2;2;3;3
3;3;2;4
3;3;2;3
6.25
3.25
2.5
3
2.75
0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Did you conduct any comparative investigations over general conversational preference learning using your reward modelling objective? This would help to verify your intuition that this method is effective due to the unique features of reasoning tasks\n2. Would it be possible to use the Eurus reward model for PPO-based alignment? How would this perform in comparison to the existing finetuning methods" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. With regards to soundness, I feel that the necessary experiments have been run to validate the majority of claims, especially where those claims are with regards to methodological contributions. The authors have also taken pains to remove contaminated data from their work in order to make comparisons fair and meaningful, including when reporting others' work.\n2. The presented language models have strong performance, and the data and reward models are in and of themselves useful contributions to the research community, removing some of the limitations of scale and quality from prior works creating preference datasets and reward models\n3. The investigation surrounding the flaws of existing preference learning models is an original contribution.\n4. In my view the largest contribution is the rather detailed study of creating their ultra-instruct dataset albeit moreso as an engineering challenge.\n5. The experiments are run against meaningful baselines: models of similar scale, trained on similar data in similar ways." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors explore improving large language model reasoning through the curation of high quality training data for that reasoning.\nThis data (UltraInteract) consists in preference trees, with nodes splitting on correct/incorrect responses; critique and refinement of rejected responses; and uses different reasoning schemas/ actor models to increase training data diversity\nThe actor used to generate these trajectories is GPT3.5 Turbo, with GPT4 being used as a critique model with access to an interpreter/tools.\n\nThe authors then use this dataset (alongside others) to finetune 3 language models using the following process:\n1. SFT over the correct actions\n2. Preference learning over correct vs incorrect actions using off the shelf preference learning algorithms\n\nAdditionally the authors also use this to derive a reward model:\n3. 
Train a reward model, adding terms on the absolute rewards of the chosen and rejected actions to the standard Bradley-Terry reward model.\n\nIn my view the key contributions of this paper are:\n* introduction and analysis of preference-tree-based instruction-following data, which is scalable and effective\n* introduction of improved objectives for training reward models" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As a minor point, the spelling and grammar could be improved; for instance \"Is proprietary models\" (line 470) should be \"Are proprietary models\", and more generally things like \"Perference Learning\" (line 247). More substantially, some of the references point to the wrong sections (e.g. the reference to section 5 (replaced with 6) on line 255) -- in this case harming readability (hence my rating for the presentation...)\n2. I feel that the modification to the reward model could be better motivated in section 3, for instance by referencing other works that maximise a similar margin loss. At the least it should be explicitly linked to the discussion in section 4.2 that actually seems to motivate it. This might be aided by separating out the reward modelling section from the finetuning section, since it seems to follow on more logically from the finetuning investigations.\n3. Section 6.1 doesn't really address the section title properly. While the performance itself does suggest that just training on open-source data is sufficient (ignoring the instruction-following benchmark), the body of the section just talks about mixing in this additional V2 data and the ensuing performance gains. It would suffice to add a brief comment at the end of line 483 explaining the results of finetuning just on V2.\n4. As a general comment I feel that this work feels like three distinct pieces of work rather than a single cohesive one, i.e., the proposal of a new training dataset; a set of models finetuned on this dataset alongside others; and, more separately, a reward model trained on a combination of datasets including the one proposed here. One way of mitigating this would be to focus on the contribution of the dataset to the reward modelling phase (using the data from the ablation studies).\n5. Section 2 is a little bit confusing and could be rephrased to make it a little bit clearer that it is all just an example." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "If the model is unable to effectively learn from vertical improvements, then it raises the question of why we want to synthesize the dataset with tree structure and why we are providing trajectories to the model." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Authors use a new method to synthesize a dataset for SFT and preference learning, which could potentially enhance model's reasoning abilities. The intuition behind the synthesis method is straightforward and easy to be understood. I think the dataset is cool and it could be a potential approach for model to learn how to improve the response. Plus, the insights on preference learning algorithm is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors emphasize the performance gap between open-source LLMs and the most advanced models, particularly in reasoning capabilities. They attribute this gap to two primary factors: (1) the lack of high-quality datasets and (2) the under-exploration of preference learning techniques. To address this gap, the authors introduce a novel dataset, ULTRAINTERACT, which features a multi-turn, tree-structured format designed to enhance reasoning abilities. Additionally, they offer new insights into preference algorithms and reward modeling. They argue that effective reward modeling should consider not only the margin between rewards but also the absolute value of the reward itself. Based on this insight, they propose a new reward model that combines two loss functions, L_{BT} and L_{DR}, demonstrating superior performance compared to existing models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1). I agree that providing trajectories to guide model improvements is a potential approach. However, during the training process, I believe that the vertical improvement information, sequential refinement across turns, may not be effectively learned. This is because current preference algorithms primarily focus on horizontal comparisons, assessing responses within the same turn. \n\n2). The reasons behind the better performance of EURES are hard to track and some studies will be necessary if authors want to claim that the proposed dataset is the reason. Because the baselines has different scales and training method, for example, their training dataset could have different size and their preference algorithm could be different, etc.. Plus if EURES can beat some larger model, the claim that the dataset is better will be more convincing.\n\n3). There may be some factors contributing to the value differences observed in reward modeling, especially given the varying formulations of alignment methods. It would be valuable for the authors to offer insights into the potential reasons for these differences in the value of rewards." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is the reward model used in training the actor model?\n- L148 “the actor model first decomposes the input problem into several problems” How is this done?\n- L181 “we adopt more diverse reasoning patterns” How exactly is this done?\n- Is python the only tool used?\n- Typo in L263 reward notation\n- What is \"prompt level loose score\" in L282\n- I think the tables have too many numbers in them (tab3 has at least a hundred) and not sure if anyone will look at all of them. Instead, average scores can be put there and the detailed table can move to the appendix. This is only a suggestion though.\n- Which GPT-4 model is used? I think there are multiple versions. \n- How is the reward model performance compared to ArmoRM?\n- How is GPT-4 used as a reward model in tab4?\n- Why does self-consistency drop in fig1 left?\n- How is MCTS decoding done exactly in sec5.2?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is advancing open science by making the training data and model checkpoints public. Given the significant improvements in reasoning tasks, it is likely that these assets will be helpful to other researchers.\n- The paper also proposes a new way of training reward models that is better suited to reasoning tasks. In addition, the training datasets have multi-step attempts that contain mistakes and tool usage, which is unlike other preference datasets.\n- The experimental section is detailed and provides many interesting results, such as comparing three different preference optimization methods. There are many ablations provided, and evaluations are done on many tasks, which makes the results more convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper has several contributions. First, it builds a dataset on reasoning tasks that contain both correct and wrong steps. Second, it proposed a modified loss function for training a reward model that is better suited for reasoning tasks. Lastly, it trains a set of LLMs using the proposed dataset that have competitive performance on reasoning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The heavy reliance on GPT responses makes me feel like this is more of distilling GPT. Also, it is not clear what are the usage limitations that will arise from using a proprietary model like GPT4. As shown in tab7, this was crucial for obtaining good performance.\n- The problem of the likelihood of chosen responses going down in reasoning is a known issue and studied prior work [1], which is not cited in the paper (the related work is quite short)\n- The term “multi-turn action” was confusing. It seems that all the tasks require only a single correct response. None of the tasks is truly multi-turn where the model has to do multiple actions. From reading the paper, it seems the term “multi-turn” is used to describe a process where a model can try again if it makes a mistake. Actually, it is not clear how this process works, especially when training the model and evaluating it. 
Also, the dataset contains observations and judgements, but are they also used when training the actor? What about the Python executions? There is very little detail on how the agent is trained on these and evaluated. \n- As mentioned in the previous point, there are certain steps that are not well explained. See the questions for examples. Given that the motivation is to advance open-source LLMs, I think it is important to describe the process of training in more detail." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In line 90-91, the statement is unclear to me. In 'a higher final reward often indicates a better reasoning capability', whose reasoning capability? Can you elaborate a bit more?\n\n2. About the results removed from Table 3 due to data contamination. For some of the models with data contamination issues, the table suggests that TheoremQA is leaked; what about the remaining datasets? If the rest do not have data contamination issues, should the results be compared? Without the TheoremQA number, OpenChat still seems like a strong candidate." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces a novel dataset, ULTRAINTERACT, designed for complex reasoning tasks. It comprises instructions paired with preference trees, featuring reasoning chains, multi-turn interaction trajectories with feedback, and pairwise positive and negative responses. ULTRAINTERACT emphasizes complex and diverse reasoning patterns, encouraging models to break down problems into sub-problems and use tools to solve them. This dataset is a valuable contribution and can be useful for future research on LLM reasoning.\n\n2. The proposed EURUS models achieve state-of-the-art performance on several reasoning benchmarks, demonstrating the effectiveness of ULTRAINTERACT and the proposed training methods. Notably, the smaller EURUS models outperform much larger baselines, showcasing their efficiency.\n\n3. The paper provides valuable insights into preference learning for reasoning tasks. The analysis of reward patterns during training leads to a new reward modeling objective that improves performance, particularly on challenging problems. The authors highlight the importance of the absolute value of rewards in preference learning for reasoning, as opposed to just focusing on relative differences as in general conversation settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents EURUS, a new collection of large language models (LLMs) and a reward model (RM) designed to enhance reasoning capabilities. The authors develop ULTRAINTERACT, a dataset designed for complex reasoning tasks with 12 datasets spanning math, coding, and logical reasoning problems. 
ULTRAINTERACT employs preference trees, which pair each instruction with reasoning chains, interaction trajectories with feedback, and pairwise responses for preference learning.\n\nThe authors use ULTRAINTERACT to fine-tune several open-source LLMs, including Mistral-7B, Llama-3, and Mixtral-8x22B. They show that EURUS models achieve top performance on multiple reasoning benchmarks, including LeetCode and TheoremQA. EURUS-7B and LLAMA-3-EURUS-8B even surpass baselines 5 times their size, while EURUX-8X22B outperforms GPT-3.5 Turbo on 12 test sets.\n\nThey also create a reward model, EURUS-RM-7B, that excels on several reward modeling benchmarks and introduce a new reward modeling objective that merges the Bradley-Terry objective with an additional term to directly adjust the rewards of chosen and rejected actions" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the authors acknowledge the use of proprietary GPT models in data synthesis, they do not thoroughly analyze the limitations of relying on these models. It would be helpful to discuss the potential biases introduced by GPT models and explore alternative approaches for data generation that rely solely on open-source models. It is worth noting, though, that they attempt to address this by creating ULTRAINTERACT-v2 using only open-source models, which shows promising results.\n\n2. The paper explores a few preference learning algorithms; since preference pairs are collected in ULTRAINTERACT, not running RL with the data seems like a big miss." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present Eurus, state-of-the-art open LLM reasoning generalists and their recipe." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024advancing,\ntitle={Advancing {LLM} Reasoning Generalists with Preference Trees},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ea5TNVR0c},\nnote={under review}\n}" }, "abstract": { "value": "We introduce EURUS, a suite of large language models (LLMs) optimized for reasoning. Finetuned from Mistral-7B, Llama-3-8B, and Mixtral-8x22B, EURUS models achieve state-of-the-art results among open-source models on a diverse set of benchmarks covering mathematics, code generation, and logical reasoning problems. Notably, EURUX-8X22B outperforms GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 test sets covering five tasks. The strong performance of EURUS can be primarily attributed to ULTRAINTERACT, our newly-curated large-scale, high-quality training dataset specifically designed for complex reasoning tasks. ULTRAINTERACT can be used for supervised fine-tuning, preference learning, and reward modeling. It pairs each instruction with a preference tree consisting of (1) reasoning chains with diverse planning strategies in a unified format, (2) multi-turn interaction trajectories with the environment and the critique, and (3) pairwise positive and negative responses to facilitate preference learning. ULTRAINTERACT allows us to conduct an in-depth exploration of preference learning for reasoning tasks. Our investigation reveals that some well-established preference learning algorithms may be less suitable for reasoning tasks compared to their effectiveness in general conversations. 
The hypothesis is that in reasoning tasks, the space of correct answers is much smaller than that of incorrect ones, so it is necessary to explicitly increase the reward of chosen data. Therefore, in addition to increasing the reward margin as many preference learning algorithms do, the absolute values of positive responses’ rewards should be positive and may serve as a proxy for performance. Inspired by this, we derive a novel reward modeling objective and empirically show that it leads to a stable reward modeling curve and better performance. Together with ULTRAINTERACT, we obtain a strong reward model." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reasoning", "Alignment", "Data" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ae5353fffc7007a2f336e514219701cd46e7ff4e.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Advancing LLM Reasoning Generalists with Preference Trees" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
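The abstract and reviews above describe the reward-modeling idea: keep the Bradley-Terry margin term and add a term that pushes the absolute reward of chosen responses positive and of rejected responses negative. Below is a minimal sketch of one plausible instantiation; the exact form of the paper's L_{DR} term may differ, and `reward_loss` is our own naming.

```python
import torch
import torch.nn.functional as F

def reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry term: widen the margin between chosen and rejected rewards.
    l_bt = -F.logsigmoid(r_chosen - r_rejected).mean()
    # Direct-reward term (illustrative): also constrain the absolute values, so
    # a positive reward can serve as a proxy for correctness on reasoning tasks.
    l_dr = -(F.logsigmoid(r_chosen) + F.logsigmoid(-r_rejected)).mean()
    return l_bt + l_dr
```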
2edigk8yoU
Looped Transformers for Length Generalization
main
Active
Transformers
unsupervised, self-supervised, semi-supervised, and supervised representation learning
6;6;6;8
4;3;4;4
4;3;4;3
2;3;3;3
3;3;4;3
6.5
3.75
3.5
2.75
3.25
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "**Q1. Question on the visualization in Figure 3**\n\n- Why don’t the illustrations in the figure contain any “#” (EOS) tokens? Is it due to the pre-processing?\n\n**Q2. Do the trained Looped Transformers simulate the $n$-RASP-L program?**\n\n- Although it might be difficult to reverse-engineer a trained transformer model to figure out what algorithm it actually simulates or implements, it might be interesting if we can observe any kind of similarity between it and the $n$-RASP-L program." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "S1. The paper is written and organized well. Overall, the presentation of the methodology and empirical results is clear and easy to follow.\n\nS2. The idea behind the proposed method is neat and plausible. It is natural to think about adaptively scaling the depth of the model according to the problem length or the problem complexity. This paper successfully implements this idea to solve various interesting algorithmic tasks with the power of Looped Transformers. Also, $n$-RASP-L is an interesting but intuitive generalization of the RASP-L framework by allowing the loops. \n\nS3. The proposed answer-generation framework called FAP is also an interesting component of this work. It might be of separate interest to study.\n\nS4. The paper presents extensive ablation studies on several components of the proposed method. Also, the empirical results (length generalization performances) are impressive enough to convince the readers about the proposed method’s efficacy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This work studies the efficacy of Looped Transformers for Length Generalization of several algorithmic tasks whose computation complexity is known (as a function of the query length).\n- The paper proposes the definition of $n$-RASP-L, a generalization of the RASP-L computation model allowing the loop of RASP-L programs. It is shown, under a general framework called full-answer prediction (FAP), that some tasks (Copying binary sequence (allowing duplicates), Parity, and Binary Addition) have their own $n$-RASP-L program with a linear number of steps in problem length.\n- The authors propose training Looped Transformers (with input injection and curriculum learning) to learn $n$-RASP-L-programmable tasks, where the ground-truth number of steps is known for each task during training. They also propose two variants of inference methods: either we retain the knowledge about the number of steps at inference time (*Oracle*), or we adaptively decide the number of iterations based on the confidence of FAP (*Maximum confidence*).\n- The proposed method is tested on several algorithmic tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1. The definition of $n$-RASP-L (Definition 3.1) can be improved.**\n\n- I think the equation “$T(n): \\mathbb{N} \\rightarrow \\mathbb{N}$” should be corrected to “$T: \\mathbb{N} \\rightarrow \\mathbb{N}$” because $T$ (instead of $T(n)$) is a function of input length $n$ representing the number of steps inside a task-solving $n$-RASP-L program.\n- In (2), I guess $P’$ should be a RASP-L program, which is unspecified in the definition.\n- Should $P$ be decomposed to a sequential application of $P’$, i.e., $P = (P’)^{T(n)}$? I don’t think this is exactly true because there are pre-/post-processing parts inside the proposed $n$-RASP-L programs (in Appendix A). Can the same RASP-L program $P’$ handle such parts? (It might be true because of the experimental results, but I cannot fully understand this part.) If not, I guess the definition should be modified to include the pre-/post-processing parts. For example, $P = P_{\\tt pre} \\circ (P’)^{T(n)} \\circ P_{\\tt post}$.\n\n**W2. “Ground truth” number of steps?**\n\n- According to Definition 3.1, a program $P$ suffices to be an $n$-RASP-L if a corresponding $T(n)$ exists. Indeed, Propositions 3.2, 3.3, and 3.4 claim and prove the existence of $T(n)$ for the Parity, Copy (with duplicates), and Binary Addition tasks, respectively.\n- My question is about the uniqueness or optimality of such $T(n)$’s. There might be a clever way to construct another RASP-L program $\\tilde{P}$ so that $P$ can be implemented with $\\tilde{T}(n)$ steps of applying $\\tilde{P}$, where $\\tilde{T}(n)$ is much smaller than the previously known $T(n)$ (e.g., $\\tilde{T}(n) \\in o(T(n))$). It may happen since there is no uniqueness guarantee or lower bound result on $T(n)$.\n - If I venture a guess, I would say it might be possible to implement an $O(\\log n)$-step $n$-RASP-L solution for the Parity task by using the parallelism of the transformer architecture. Please correct me if I am wrong. Also, I understand if it is impossible to show whether this bold guess is true. If you are interested, there are some (probably) useful references about logarithmic-depth transformers [1,2].\n- However, the authors keep using the phrase “ground truth number of steps” throughout the paper, which may lead to misunderstanding that the only way to implement the given $n$-RASP-L program is by using a loop of length $T(n)$.\n- If two different $T(n)$’s can be applied to a single $n$-RASP-L-programmable task, it might be interesting to observe whether the model’s performance changes depending on the choice of $T(n)$.\n- Furthermore, if multiple choices of $T(n)$’s exist for a given task, does knowing only one of them suffice to train reasonably performant Looped Transformers? If we know more than one, how should we choose $T(n)$ when we train the model?\n\n**W3. Shouldn’t we consider the input injection when implementing an $n$-RASP-L program for the given task?**\n\n- The input injection seems to be an important component of their experiments. Since it changes the input vectors of each layer, I guess the task-solving algorithm under input injection might be different from that without it.\n- However, I can’t see that the $n$-RASP-L programs provided in Appendix A reflect the input injection. 
As I inspect inside the loop of each program, every iteration only reuses the calculation(s) from the previous iteration right before the current one.\n- Shouldn’t we consider the very first input sequence and the result from the previous iteration when implementing the loops? Or is it a valid implementation of input injection? Going even further, is there any way to embed the input injection into the $n$-RASP-L programs?\n\n**W4. The proposed training method requires prior knowledge of the task’s structure.**\n\n- The proposed method is limited in that it requires a prior understanding of the structure (e.g., $T(n)$) of the task where we want to train a model. This hinders fully end-to-end training.\n- Are Looped Transformers still useful for achieving length generalization even when we don’t (or cannot) know the exact expression of $T(n)$?\n- Besides, it seems that the depth of the decoder block is determined based on the complexity/difficulty of the subroutine $P’$ at each step inside the loop (Appendix F). How are they actually chosen? Or, how should we decide the size of the repeating decoder block?\n\n**W5. Some experimental details seem missing or wrong.**\n\n- I guess Equation (2) has a typo: shouldn’t it be arg-min instead of arg-max?\n- In Binary Addition, it seems that $T$ is chosen to be $n$ (the length of each operand). However, Proposition 3.4 claims that $T(n)=n+1$ for the same task. Why is there a discrepancy between theory and experiment?\n- In Binary Multiplication, I guess some words are used in the wrong way. In Lines 417-418, I think it should be: “We define the problem length to be the **length** of the second **number**, and set $T$ to be the product of the lengths of two **numbers**.”\n- In Section 6.1.2, are input injections also applied to NTP-based methods? Also, I’m not sure why it is fair to compare their method (based on FAP) to NTP methods with the architectural setting “…with a depth 20 times the depth of the looped block” because such depth might be suboptimal for NTP-based methods.\n- Although the paper indirectly showcases that their adaptive decision of the number of steps works quite well via Figure 5, it would be better to display similar performance plots to Figure 4 (plots based on the “Oracle” inference) but using the adaptive confidence-based method instead, at least in their appendix.\n\n**W6. Minor writing issues**\n\n- Section 4.1, fourth bullet point: I guess $T(n) \\in \\\\{T(1), \\ldots, T(n_{\\rm max})\\\\}$ is correct ($T(1)$ instead of $1$).\n- Equations (1) and (2) have several weird-looking brackets (too many open brackets, etc.).\n- Line 510: Use *fewer* abbreviations like “w.r.t.”\n\n---\n\n**References**\n\n[1] Sanford, Clayton, et al. \"Transformers, parallel computation, and logarithmic depth.\" ICML 2024.\n\n[2] Sanford, Clayton, et al. \"Understanding transformer reasoning capabilities via graph algorithms.\" NeurIPS 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1. In Figure 5, why do some tasks perform well even when exceeding the step count, while others degrade immediately? For instance, the performance of the parity task and the binary sum task immediately drops when executed with additional steps, whereas the addition, multiplication, and copy tasks retain accuracy to some extent.\n- Particularly for the copy task, the selected step count is significantly higher than the actual number of steps required, which seems unusual to me.\n\nQ2. Are there any tasks whose T(n) is nonlinear (e.g. sqrt(n), n^2) to the length of the input sequence? It would be interesting to see experimental results for such tasks.\n\nQ3. Why is the output reversed for binary multiplication (but not for binary addition)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-structured and clearly written.\n- The introduction of Looped Transformers is well-motivated and effectively argued.\n- The results are strong and solid. They do not require the use of a scratchpad. Also, the prediction is conducted using an end-to-end, full-answer prediction setup, which is a more general way than the conventional next-token prediction setup.\n- The paper clearly illustrates that the model can determine the number of steps to take on its own and does not require T(n) in the test time." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the length generalization problem of Transformer models, which refers to the inability of the model to deal with longer samples than encountered during the training phase. While recent literature has focused on modifying the positional embeddings and the input formats, this paper proposes to use Looped Transformers, which can dynamically adjust their computation steps according to the problem length. The authors define n-RASP-L problems to figure out which problems can be solved by Looped Transformers. Then, they train the models on these tasks (parity, copy, binary addition, binary sum, binary multiplication, unique set) under a full-answer prediction setup. Empirically, the trained models could successfully length-generalize to longer lengths by appropriately adapting the number of loops at inference time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weakness 1: Applicability Limited to n-RASP-L Tasks\n\n- The approach is limited to tasks that belong to n-RASP-L categories, as it requires the ground-truth number of steps in the training data.\n\nWeakness 2: Insufficient Experimentation.\n\n- ***Effect of Curriculum Learning.*** How does the model perform without curriculum learning? Is the use of curriculum learning necessary?\n\n- ***Tolerance to Step Counts.*** I am curious whether this method will still perform well with different choices of T(n). For example, for tasks like parity, would the model maintain its performance if T(n) were set to n+1 rather than n? What about 2n instead of n? 
This question stems from the possibility that there might be more efficient solutions to n-RASP-L problems than human-designed ones, which could work with fewer steps. Testing whether the model is robust under overestimated T(n) values could help verify the robustness of this approach.\n\n- Overall, the paper requires more ablation studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I listed my questions in the weaknesses section." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, I really liked the paper; I think that using a looped transformer to achieve length generalization is an interesting idea that was not studied in the past to my knowledge. This paper complements all the other techniques (universal transformers, different types of positional embedding, etc.) that were used in the past for length generalization. The paper is well-written and well-explained. This is why I advocate for acceptance of this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines how looped transformers perform in terms of length generalization. The focus is on n-RASP-L problems, which are problems that can be tackled using a loop of a single RASP-L program. The concept is that Transformers can learn steps that are independent of length, employing a flexible number of iterations in the looped transformer to achieve length generalization. The authors first demonstrate that n-digit addition, n-bit parity, and copying n symbols can be addressed with n-RASP-L solutions. They then reveal that when utilizing the looped transformer with adaptive stopping time, the results exhibit significantly stronger length generalization compared to next token prediction (NTP) and other methods like using pause tokens or NTP-loop with a fixed stopping time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I would like to raise the following weaknesses/questions regarding this paper: \n\n- **Lack of other baselines**: What would happen if you have a very deep universal transformer? Universal transformers also have shared parameters and look equivalent to the looped transformer. The depth may play the role of the number of loops. Would this be equivalent to the fixed loop NTP? It would be interesting to run the same experiments with a universal transformer.\n\n- **Comparison with other methods**: Where would you position the looped transformers in the list of all the tricks for length generalization? Are the effects similar or complementary to changes of the input (index hinting, reverse order of operands, etc.)? Changes of positional encoding? Chain of Thought? 
It would be interesting to understand this by combining looped transformers with these other tricks and analyzing the performance differences.\n\n- What is the depth of the encoder block in the looped transformer? I think this information is important to put in the main paper. \n\n- **Adaptive inference time**: I think one weak point of the method is actually coming up with an adaptive inference time. The methods that are proposed are nice but may look a bit hacky. Do you think one could learn this adaptive inference time?\n\n- In Figure 2, which adaptive inference time method is used for FAP-Loop-Adaptive?\n\n- Lastly, this is a wild question: have you tried your method on problems where there are no n-RASP-L solutions? Would it still work better than just doing NTP?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Are the quantities reported in Figure 5 indeed for a single training example? When using the maximum confidence criterion, how do the results compare to the ones reported in Figure 4 with access to the ground truth number of steps?\n\n2. In Bansal et al. 2022, they avoid the need for knowing the exact number of steps during training and inference. Have you tried using similar heuristics?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is mostly well-written and easy to follow.\n\n2. Demonstrates that, given knowledge of the number of steps required to perform a given task, a certain looped Transformer, which jointly predicts the full output sequence, tends to learn a length-generalizing solution. The length-generalizing capabilities of this looped Transformer are shown to surpass baselines that use next-token prediction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Empirically explores the ability of looped Transformers, i.e. Transformers that repeatedly apply the same block of layers, to length-generalize on several algorithmic tasks, including copy, parity, and addition. First, the authors manually derive length-generalizing solutions to the considered tasks through a variant of the RASP language, which they term n-RASP-L. Then, based on these ground truth solutions, they show that looped Transformers length-generalize well when trained with access to the true number of steps required to compute the output for a given input." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main weakness of the current paper is that the significance of the results is somewhat limited. In particular, I find that it falls somewhere in between works that are practically relevant and those that may not be practically relevant, but improve our understanding of certain phenomena. 
On the one hand, this work shows empirically that, in some algorithmic tasks for which we already know how to explicitly write a length-generalizing solution (in terms of looped Transformer weights), looped Transformers generalize well to longer lengths, if they have access during training to the number of steps required for solving the task for a given input. Consequently, the practical relevance is limited since the proposed method requires that we already know how to manually write a length-generalizing solution, in which case there is arguably no point in learning. On the other hand, this work does not provide much in terms of understanding why or how looped Transformers are able to length-generalize.\n\n Note that one may consider the demonstration that such length generalization is possible to be a main contribution. Yet, the ability to extrapolate through recurrence of layers has been demonstrated in the past, albeit for other architectures (see Bansal et al. 2022 [1], which notably do not require knowing the ground truth number of steps in training).\n\n2. A related issue is the usage of ground truth stopping time during inference. The quantities reported in Figure 5 seem to be for a single training example, yet it is not entirely clear. If so, then how does the maximum confidence stopping criterion fare across the dataset? It would be useful to report results similar to those of Figure 4 but when using the proposed stopping criterion as opposed to the ground truth stopping time, which should be unknown.\n\nOverall, my assessment of the paper tends towards the positive side, yet it is not a clear accept due to the substantial limitations mentioned above. Specifically, the significance of the contributions can be greatly improved if it would be possible to remove the dependence on knowing the ground truth number of steps required to solve the task for a given input during training (and, by how it seems from the current results, during test time as well).\n\n\nAdditional (more minor) comments:\n- In Definition 3.1, it seems that the intention is for $P’$ to be some RASP-L program, as opposed to just a program. Otherwise, trivially any program $P$ is an n-RASP-L program by choosing $P’ = P$ and $T(n) = 1$.\n- In Equation (2), I believe that the criterion should be an argmin over the cross entropy loss instead of an argmax.\n\n\n[1] Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., & Goldstein, T. (2022). End-to-end algorithm synthesis with recurrent networks: Extrapolation without overthinking. Advances in Neural Information Processing Systems, 35, 20232-20242." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024looped,\ntitle={Looped Transformers for Length Generalization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2edigk8yoU},\nnote={under review}\n}" }, "abstract": { "value": "Recent work has shown that Transformers trained from scratch can successfully solve various arithmetic and algorithmic tasks, such as adding numbers and computing parity. While these Transformers generalize well on unseen inputs of the same length, they struggle with length generalization, i.e., handling inputs of unseen lengths. In this work, we demonstrate that looped Transformers with an adaptive number of steps significantly improve length generalization. 
We focus on tasks with a known iterative solution, involving multiple iterations of a RASP-L operation—a length-generalizable operation that can be expressed by a finite-sized Transformer. We train looped Transformers using our proposed learning algorithm and observe that they learn highly length-generalizable solutions for various tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Transformers" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/27c2a50c25cc416d848ca4867ff07cb9236e8597.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Looped Transformers for Length Generalization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2efNHgYRvM
On the Identification of Temporal Causal Representation with Instantaneous Dependence
main
Active
Causal Representation Learning;Instantaneous Dependency;Identification
unsupervised, self-supervised, semi-supervised, and supervised representation learning
6;6;8
3;4;2
3;3;3
3;2;3
3;3;3
6.666667
3
3
2.666667
3
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- line 41: do you mean “mixing function” instead of “mixture function”?\n- lines 332, 386 and 387: Notation. You are using $\\mathcal{L}$ and $L$ interchangeably. Could you revise this?\n- If I am not mistaken, your identifiability theory does not obtain the causal graph, but a markov equivalence of it (please correct if mistaken). Yet apparently, the synthetic experiments suggest that you estimate the instantaneous causal graph with 100% accuracy (Figure 4, bottom left). Could you provide some explanation for this? For example, is it possible that your assumptions allow for stronger identifiability results that are overlooked in the presented theory?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The manuscript is clear in terms of motivating the problem and introducing the theoretical framework.\n- Incorporation of instantaneous effects into sequential latent variable models is a very significant contribution.\n- The paper discusses limitations of the assumptions in comparison to recent works.\n- The experiments with real-world data motivate the incorporation of instantaneous effects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes IDOL, a framework for achieving identifiability in sequential latent variable models with instantaneous dependencies. The authors establish identifiability up to permutation of the latent variables and demonstrate that the underlying causal graph can be identified up to its Markov equivalence class (if this interpretation is correct). They thoroughly discuss the limitations of their assumptions in comparison to recent works, which helps underscore the significance of the proposed framework.\n\nAn estimation method is also introduced, with experiments on synthetic data verifying the theoretical results, while real-world experiments highlight the importance of incorporating instantaneous dependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Minor Concerns**\n- **Computational Complexity:** The sparsity constraint introduced in Eq. (11) seems to introduce significant computational complexity to the algorithm. The paper would benefit from a more detailed analysis regarding this. For example, would it be possible to compute wall-clock times (in training) for IDOL in comparison the proposed baselines?\n- **Scalability to High-Dimensional Data:** The authors acknowledge limitations with respect to high-dimensional data, which can restrict the application to real-world scenarios. 
An experiment examining how high-dimensional the data can be before IDOL’s performance degrades would be ideal to support this point.\n\n**Major Concern: Theory Section Clarity and Limitations**\n\nThe paper’s theoretical claims, particularly around identifiability, would benefit from clarification to avoid potential misunderstandings regarding the nature of identifiability achieved. It appears that IDOL identifies the latent Markov Network rather than the true causal graph for the instantaneous component of the latent dynamics. This is an important distinction, as conditional independence relations allow only for the identification of the Markov equivalence class, not the directed causal structure itself. However, the presentation throughout the paper, especially in the introduction, experiments (such as Figure 4), and conclusions, may lead readers to infer that IDOL identifies the causal graph rather than the Markov network.\n\nTo address this issue, the authors could consider the following changes:\n\n- **Introduction (around line 89):** Indicate that the identifiability of the instantaneous structure in IDOL is only up to a Markov equivalence class, clarifying that IDOL does not identify the directions of edges in the instantaneous part.\n- **Figure 1c Modification:** Consider modifying Figure 1c to remove the arrow pointers from edges, signaling that the result is a Markov network rather than a causal graph when discussing identifiability (this might make sense in terms of theory, but not from a data generation perspective).\n- **Conclusion:** Mention the Markov equivalence class limitation explicitly. This would open a path for further research to extend the identifiability result from Markov equivalence to the full causal structure, especially given the promising empirical results observed in Figure 4.\n\nThe following specific statements in the theory section could be revised to improve clarity and accuracy:\n\n- lines 130-132: “the latent causal relations are also immediately identifiable because conditional independence relations fully characterize instantaneous causal relations in an instantaneous causally sufficient system”. I don’t think this line is correct without any additional assumptions. Conditional independence relations only provide the Markov equivalence class, not the exact causal graph, without further assumptions. Rephrasing this to accurately reflect the distinction between the Markov equivalence class and the causal graph would strengthen the theoretical foundation.\n- lines 171-172: Could you indicate whether $p_{c_t}$ refers to the marginal distribution $p(c_t)$ or the conditional distribution $p(c_t|z_{t-2})$?\n- lines 165-188: For better readability, could you indicate $c_t \\in \\mathbb{R}^{2n}$ in your example? Otherwise, at first glance it reads as $\\{z_{t,i}, z_{t-1,i} \\}$ for $c_{t,i}$ in Theorem 1.\n- line 217: Would it be better to use $\\emptyset$ to refer to $\\Phi$ as an empty set?\n- line 230: Could you define “isomorphic” for Markov networks? A footnote or reference to the Appendix suffices." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does IDOL handle cases where the latent process sparsity assumption is only partially met? \n\nCould the authors clarify the computational complexity of IDOL compared to baselines, especially for high-dimensional data? \n\nAre there specific real-world scenarios where IDOL might struggle due to non-invertible mixing processes?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper introduces a novel approach to identifying temporally causal representations in time series data with instantaneous dependencies. This approach addresses a gap by proposing a sparse latent process assumption that is more practical for real-world applications than previous assumptions. \n\nExtensive evaluations are performance to demonstrate the effectiveness of the proposed approach.\n\nThe paper is well-organized. The use of illustrative figures helps clarify the complex concepts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a framework, IDOL (identification of instantaneous Latent Dynamics), for identifying temporally causal representation with instantaneous dependencies. IDOL employs a sparse latent process assumption, which is more adaptable to real-world data. The framework is validation through extensive experiments on both synthetic and rea-world human motion datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Providing further discussions on the possibility of extending IDOL to handle high-dimensional data can be beneficial. \n\nGiven the limitation due to the dependency on invertible mixing processes, providing guidelines for real-world applicability would add value." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you please comment on the performance of your method in noisy environments and low-sample regimes?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed IDOL framework moves beyond traditional methods that often rely on grouping of variables or direct interventions, by introducing a sparse influence assumption to capture the natural sparsity in many real-world datasets. This approach is novel in handling instantaneous dependencies without requiring interventions or grouping. 
Furthermore, the paper demonstrates rigorous theoretical and empirical quality, supported by a well-founded identifiability proof and a solid mathematical framework. Experimental validation on both synthetic and real-world human motion datasets further underscores the robustness and reliability of the model, showcasing its ability to accurately identify causal relationships and achieve high predictive accuracy on synthetic and real-world datasets. The paper is clearly written and easy to follow. Overall, this work is significant for the field, since causal discovery for time series with instantaneous effects is an important open problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework called IDOL (Identification framework for Instantaneous Latent dynamics) to enhance temporally causal representation learning for time series data with instantaneous dependencies. Traditional approaches for identifying latent causal processes in time series data often assume that the latent causal variables lack instantaneous interactions, limiting real-world applicability. IDOL addresses this limitation by applying a sparsity constraint on causal influences, allowing for both time-delayed and instantaneous dependencies in latent variables. The IDOL framework assumes a sparse influence within latent causal processes, allowing both time-delayed and instantaneous relations. Unlike prior methods that require data interventions or predefined groupings to achieve identifiability, IDOL relies on this sparse latent structure alone, making it highly applicable to scenarios where interventions are impractical. The framework’s theoretical foundation is built on leveraging sufficient variability and temporal contextual information, establishing identifiability through a combination of variational inference and sparsity regularization. This enables the model to accurately reconstruct latent variables and the underlying causal relationships without complex external assumptions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The model assumes an invertible mixing process to reconstruct latent causal structures, which may not always be feasible in real-world data. In some scenarios, particularly in non-linear and noisy environments, this assumption could lead to inaccurate or incomplete latent representations, potentially undermining the model’s performance and causal interpretability. Furthermore, IDOL’s effectiveness heavily depends on the assumption of a sparse latent process. In cases where this sparsity assumption does not hold (i.e., when the causal structure is dense or complex), IDOL’s performance degrades, as demonstrated in the experiments. This sensitivity suggests that the framework may be less robust in scenarios where latent processes are highly interconnected." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024on,\ntitle={On the Identification of Temporal Causal Representation with Instantaneous Dependence},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2efNHgYRvM},\nnote={under review}\n}" }, "abstract": { "value": "Temporally causal representation learning aims to identify the latent causal process from time series observations, but most methods require the assumption that the latent causal processes do not have instantaneous relations. Although some recent methods achieve identifiability in the instantaneous causality case, they require either interventions on the latent variables or grouping of the observations, which are in general difficult to obtain in real-world scenarios. To fill this gap, we propose an \\textbf{ID}entification framework for instantane\\textbf{O}us \\textbf{L}atent dynamics (\\textbf{IDOL}) by imposing a sparse influence constraint that the latent causal processes have sparse time-delayed and instantaneous relations. Specifically, we establish identifiability results of the latent causal process based on sufficient variability and the sparse influence constraint by employing contextual information of time series data. Based on these theories, we incorporate a temporally variational inference architecture to estimate the latent variables and a gradient-based sparsity regularization to identify the latent causal process. Experimental results on simulation datasets illustrate that our method can identify the latent causal process. Furthermore, evaluations on multiple human motion forecasting benchmarks with instantaneous dependencies indicate the effectiveness of our method in real-world settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Causal Representation Learning", "Instantaneous Dependency", "Identification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/badc2e85fc7d35b46f803953bddc9d0e527d1f56.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a03bf45ce19c829873c3700af4ef80ba4c6a20be.zip" }, "title": { "value": "On the Identification of Temporal Causal Representation with Instantaneous Dependence" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ev44Srmt9
Revisiting Convergence: A Study on Shuffling-Type Gradient Methods
main
Active
shuffling-type gradient methods;convergence analysis;relaxed smoothness assumptions
optimization
1;3;5;8
4;4;3;2
2;2;3;3
1;2;2;3
1;2;3;4
4.25
3.25
2.5
2
2.5
-0.961891
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is extremely well written and enjoyable to read. Moreover, as far I checked the math seems correct and sound. Without being an expert myself on the respective literature, I find the respective very interesting and challenging from a theoretical standpoint." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper revisits the case of random shuffling-type stochastic gradient methods for finite sum minimization problems. More precisely, the paper considers objectives without the traditional structural assumption of Lipschitz smoothness. In doing so, the authors focus on non-convex, strongly convex and convex objectives which satisfy as a smoothness \"surrogate\" the notion of $\\mathcal{l}-$ smoothness.\nTo that end, the authors provide respective convergence rates for each respective case." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concerns consider two main factors:\n\nFirst, the notion of $\\mathcal{l}-$ smoothness introduces additional parameters to be tuned as it becomes apparent from the definitions of the step-sizes in the main theorems. It would be could to include some discussion of how difficult are these to be evaluated both in real life scenarios and in theory. More precisely, do the authors believe that the respective toolbox from the adaptive algorithm like in [1] can be incorporated?\n\nSecondly, the proposed step-size policies seem to rely on a prior knowledge on the iteration horizon $T$. Do the authors believe that an any time convergence rate guarantee can be achieved? \n\n\n[1] Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization, Neurips 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See **Weaknesses**." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I appreciate the paper tries to tackle more realistic problems (i.e., functions with non-uniform smoothness) and studies the shuffling algorithm, an arguably more common scheme in practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the shuffling method under the generalized smooth assumption, which was proposed recently to fit many modern machine learning tasks. The authors proved that, under properly picked parameters, the shuffling method provably converges under the weak smoothness condition for both nonconvex/strongly convex/convex objectives. Numerical experiments are also conducted to support the theory." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major points**.\n\n1. All convergence results are proved in a high-probability manner. However, the dependence on the margin $\\delta$ is in the order of $\\mathrm{poly}(1/\\delta)$, which makes the results weak. I also suggest the authors explicitly give the dependence on $\\delta$ in the final sample complexity.\n\n2. Some descriptions of the existing works contain wrong facts.\n\n - Lines 90-91, the bounded variance assumption is **not** required to improve the rate $\\mathcal{O}(1/T^2)$ to $\\mathcal{O}(1/(nT^2))$ in Nguyen et al. (2021). Instead, Nguyen et al. (2021) can handle **unbounded** variance.\n\n - Lines 92-93, results in both Mishchenko et al. (2020) and Nguyen et al. (2021) hold under **unbounded** variance condition. The current description is not correct.\n\n3. The conditions on noises, i.e., Assumptions 4.3, 4.4, and 4.7, are strong compared with the existing literature, which significantly reduces the impact of the work. I will elaborate more on this point in the following.\n\n\n2. Nonconvex part. \n\n - Random reshuffling scheme.\n\n - In this case, previous works only consider Assumption 4.7, or even a weaker version, i.e., the non-uniformly bounded variance, to obtain the $\\mathcal{O}(\\sqrt{n}/\\epsilon^{3})$ sample complexity.\n\n - However, to recover the same rate, this work requires much stronger Assumptions 4.3 and 4.4 in Theorems 4.5 and 4.6, respectively. Hence, the results in this paper are not directly comparable to prior literature.\n\n - When only Assumption 4.7 holds, Corollary 4.8 has extra dependence on $n$ as indicated by the authors.\n\n - In addition, I am not sure why Corollary 4.8 is a corollary and cannot find its proof. Did I miss anything?\n\n - Arbitrary shuffling scheme.\n\n - Again, the authors require stronger Assumption 4.3 to make their sample complexity as good as the previous results. However, the latter can hold under non-uniformly smoothness, e.g., see Nguyen et al. (2021). As such, the claim in Lines 63-64 is misleading.\n\n - Moreover, imposing three assumptions on noises may confuse the reader. Especially, the proofs under Assumptions 4.3 and 4.4 are similar as claimed in Lines 860-863. I didn't see a necessary reason for the authors to do so.\n\n4. Strongly convex and convex parts. \n\n - Assumption 4.3 is strong and not assumed in previous best-known results, making the contributions weak.\n\n - As far as I know, the previous best results don't need any assumption on the noises for convex problems (Assumption 4.14). 
Hence, whichever condition among Assumptions 4.3, 4.4, and 4.7 is used, the result is not an improvement in my opinion.\n\n6. The writing can be improved. Some theorems give a threshold on the stepsize (e.g., Theorem 4.5) but others give an exact choice (e.g., Theorem 4.12). Can the authors present a unified statement?\n\n**Minor points**.\n\n1. Line 290, the second $T=\\mathcal{O}(\\frac{1}{\\epsilon^3})$ should be $\\mathcal{O}(\\frac{n}{\\epsilon^3})$.\n\n2. Line 310, $Delta_1$ should be $\\Delta_1$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Definition 2.2 seems to be missing a subscript for $x$ on the right-hand side. It is unclear what the authors mean. It could be either symmetric ($x$ is any of $x_1$ or $x_2$) or non-symmetric $L_0, L_1$-smoothness (maximization over the $x_1$, $x_2$ interval), see [1]. Or something different? The symmetric case is much easier to analyze. I tried to trace it through the proofs, and it was very strange to see that everywhere (e.g., Lemma A.2, Lemma A.4) the authors used $\\|\\nabla F(\\omega)\\| \\le G$, and then said (line 672) that \"From definition 2.2, we can use Lipschitz smoothness\". The standard smoothness, AFAIU. So the question: can $G$ be huge? Is it just hidden in the $\\mathcal{O}$? And is the analysis mainly like that for standard smoothness?\nIt seems to me that it actually is, because in lines 615, 704, 890, 1000, etc. the authors use standard smoothness inequalities. \nI can simply bound $\\|\\nabla f(x)\\| = \\|\\nabla f(x) - \\nabla f(x^*)\\| \\le (L_0 + L_1\\|\\nabla f(x^*)\\|)R = RL_0$, which could be a $G$,\nand then my effective smoothness is $L = L_0 + RL_0L_1$.\nIn the non-symmetric case we have extra exponents as multipliers, according to [1].\nIs this what the authors effectively did? Of course one can recover the same rate as for standard smoothness. The problem is that the constant will be huge.\n\n- What is $r := G/L$ in Theorem 4.5, Theorem 4.6, Lemma A.1, Lemma A.2? I couldn't find where the authors refer to $r$. What is it for? The results of the mentioned theorems and lemmas do not depend on $r$.\nThen $r$ appears in Lemma A.4 and in the bound on the norm of the difference of two subsequent trajectory points, which mainly coincides with my above bound on the gradient (if we plug in the step).\n\nI briefly checked the proofs and it seems they are adapting my above bound on the gradient norm to the stochastic case (which is where the weaker variance assumption is used -- no expectation, and the difference between the full gradient and its stochastic counterpart is estimated).\n\nHowever, the correct approach is to allow the step size to increase as the gradient norm approaches zero. E.g., [1] suggests clipping with the gradient norm in the denominator -- when the norm is large, the stepsize is small, and vice versa.\n\nIf I was mistaken, I would be glad to investigate the proofs more carefully if the authors argued that I was wrong. 
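For concreteness, the kind of gradient-norm-dependent stepsize I have in mind (following the clipping literature around [1]; this is my own sketch, not the paper's algorithm) is:\n\n```python\nimport numpy as np\n\ndef clipped_step(x, grad, eta=0.1, gamma=1.0):\n    # stepsize min(eta, gamma/||grad||): small when the gradient is large,\n    # up to eta near stationarity\n    stepsize = min(eta, gamma / (np.linalg.norm(grad) + 1e-12))\n    return x - stepsize * grad\n```\n\n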
My understanding is that the gradient norm is just trivially bounded and the contribution is poor.\n\nReferences:\n[1] Z. Chen et al., 2023. Generalized-Smooth Nonconvex Optimization is As Efficient As Smooth Nonconvex Optimization." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Results match the standard Lipschitz smoothness rates" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers shuffling-type gradient descent under a more general smoothness assumption -- $L_0, L_1$-smoothness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The variance assumptions are stronger than the standard bounded variance assumption" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the Weaknesses above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The theoretical analysis is rigorous and considers multiple cases for different function properties.\n2. The authors discuss the limitations of their work and suggest directions for future research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the convergence rate of shuffling-type gradient methods without assuming Lipschitz smoothness, achieving results that match the current best-known convergence rates. The theoretical analysis covers non-convex, strongly convex, and non-strongly convex cases under both random reshuffling and arbitrary shuffling schemes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is the experimental section, which feels too limited and simple to fully support the theoretical findings.\n1. The data used in the experiments is relatively simple. It would be valuable to see if this method remains effective on more complex datasets, such as image datasets or applications with large language models.\n2. Additionally, since the theoretical analysis includes both random reshuffling and arbitrary shuffling, it would strengthen the paper to show results for both methods compared to the baseline SGD.\n3. Similarly, since the analysis considers three different cases (non-convex, strongly convex, and non-strongly convex), conducting experiments separately under each case would add depth to the findings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting Convergence: A Study on Shuffling-Type Gradient Methods},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ev44Srmt9},\nnote={under review}\n}" }, "abstract": { "value": "Shuffling-type gradient methods are favored in practice for their simplicity and rapid empirical performance. Despite extensive development of convergence guarantees under various assumptions in recent years, most require the Lipschitz smoothness condition, which is often not met in common machine learning models. We highlight this issue with specific counterexamples. To address this gap, we revisit the convergence rates of shuffling-type gradient methods without assuming Lipschitz smoothness. Using our stepsize strategy, the shuffling-type gradient algorithm not only converges under weaker assumptions but also match the current best-known convergence rates, thereby broadening its applicability. We prove the convergence rates for nonconvex, strongly convex, and non-strongly convex cases, each under both random reshuffling and arbitrary shuffling schemes, and under bounded or sub-Gaussian gradient noise. Numerical experiments further validate the performance of our shuffling-type gradient algorithm, underscoring its practical efficacy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "shuffling-type gradient methods", "convergence analysis", "relaxed smoothness assumptions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/975ec4dc36967471622efcbb0d684c17bb6d19e6.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/75a100c42cddfbbf70e03451f8b7371c107e3cd1.zip" }, "title": { "value": "Revisiting Convergence: A Study on Shuffling-Type Gradient Methods" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ezRxhlAxJ
Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile
main
Active
Efficient inference;video generation;diffusion;Transformer
infrastructure, software libraries, hardware, systems, etc.
5;5;6;6
4;3;4;3
3;2;3;3
2;2;3;2
3;3;3;3
5.5
3.5
2.75
2.25
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The changes in Dynamic Degree seem to exhibit a certain trend; are there any related experimental analyses available?\n- There is a discrepancy between the acceleration results in Table 1 and Table 7. Could you please provide the specific experimental parameter differences (as they seem to be absent in the paper)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper effectively optimizes DiT using sparse attention and MCD, and the proposed framework demonstrates commendable speed results alongside assured generation quality. Specific strengths include:\n\n- The identification of the Attention Tile phenomenon, accompanied by a detailed analysis, provides a background for the design of sparse attention masks and proposes an algorithm for searching optimal mask sets. Comprehensive evaluation experiments validate the effectiveness of this method.\n- The integration of the consistency distillation method leads to a complete acceleration framework, with rigorous ablation studies confirming the framework's soundness and ensuring generation quality. The FVD metric significantly outperforms the use of the MLCD method alone." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the acceleration of 3D full attention video generation models, focusing on sparsifying 3D attention and reducing sampling steps. The authors propose an algorithm for searching optimal sparse attention masks based on the observed Attention Tile phenomenon and combine this with a consistency distillation method to reduce the number of steps, resulting in an accelerated version of DiT while striving to maintain generation quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper is rich in content, there are still potential issues to consider: According to Table 7, the acceleration benefits from sparse attention masks are not substantial, with noticeable quality degradation occurring beyond a 1.45× acceleration. Although there is some improvement when combined with MLCD (compared to a 5× acceleration), the effectiveness of the design based on the Attention Tile, which is a core contribution of the paper, appears insufficient here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper makes a significant contribution by discovering the \"Attention Tile\" phenomenon in 3D full attention Diffusion Transformers (DiTs) for video data. This insight into the redundancy and repetitive patterns within attention maps is a valuable addition to the understanding of how attention mechanisms function in video generation models.\n2. Building on the Attention Tile observation, the authors propose a new family of sparse 3D attention mechanisms that reduce computational complexity from quadratic to linear concerning the number of video frames. This is a substantial improvement that directly addresses the inefficiency issues in existing models.\n3. The introduction of the EFFICIENT-VDIT framework is a well-thought-out approach that combines multi-step consistency distillation, layer-wise sparse attention mask searching, and knowledge distillation. This pipeline effectively accelerates inference while maintaining high metrics.\n4. Achieving these results using only 0.1% of the pretraining data is notable. It indicates that the method is not only computationally efficient but also data-efficient, which is advantageous when large datasets are not readily available." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the inefficiency of DiTs used in video diffusion model. The speedup of the presented method comes from two sources: 1) pruning the large full 3D attention of VDM DiTs and 2) distilling the model into a multi-step consistency model.\nThe authors identify a repetitive tile-like pattern, termed \"Attention Tile,\" in the 3D attention maps of video data. Leveraging this pattern, they propose a new family of sparse 3D attention mechanisms that reduce the computational complexity from quadratic to linear with respect to the number of video frames.\nTo further accelerate the inference process, the paper introduces a multi-step consistency distillation (MCD) technique. By dividing the sampling trajectory into segments and performing consistency distillation within each, the number of sampling steps required for video generation is significantly reduced.\nResults show that the method achieves good speedup without suffer much performance, using limited training data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper could benefit from a more in-depth discussion of the trade-offs involved, such as the balance between sparsity level and video quality or the impact on different types of video content (e.g., fast-moving vs. static scenes). For instance, why don't you directly use the demo videos on OpenSORA's websites and compare the qualitative results? They provided both static scenes with only relative camera poses and more dynamic scenes, e.g. filming of an explosion scene.\n2. The method relies on the observation that the Attention Tile pattern is data-independent. 
If this assumption does not hold for certain types of video data (e.g., highly dynamic scenes), the efficiency gains might not translate, potentially limiting the method's applicability.\n3. The use of only 0.1% of the pretraining data raises concerns about the generalization capabilities of the accelerated model. While performance loss is minimal on tested datasets, the model may underperform on unseen data or less common video scenarios.\n4. While the paper uses VBench and FVD for evaluation, these metrics may not capture all aspects of video quality, such as temporal coherence in more complex scenes or perceptual quality under different conditions. Including additional metrics or user studies could provide a more comprehensive assessment. This is especially concerning combined with weakness #2, since FVD is commonly known as a weak metric that focuses strongly on independent frames rather than overall video coherence. Overall, the evaluation seems to favor more static videos rather than highly dynamic videos, and I suspect the attention pruning would encourage such results too. A metric that takes motion into account is Content-Debiased FVD [1], but ideally, this is more suitable via a user study (even though I do not think this is necessary for the rebuttal stage, but better prepare it for another iteration of the paper).\n5. Inheriting my points in #2 and #4, the paper does not provide any video data, making it challenging to assess the actual quality of the generated contents. From my point of view, a VDM paper should always be accompanied by as many videos as possible within the supplemental material size limit. Again, a good set would be the demo videos on OpenSORA's website. They provided a wide range of descriptions and all the corresponding text prompts --- supposedly those prompts would work well on OpenSORA.\n\n[1] Ge et al., On the Content Bias in Fréchet Video Distance, in CVPR 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "please refer to the weakness section" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The finding (attention tile) is quite interesting and could be useful for future research in the community in this area.\n - The proposed layer-wise optimal search for sparse attention masks is somewhat novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an efficient method for DiT-based text-to-video generation. The authors identify a unique pattern in the attention maps of DiT-based video generation diffusion models and propose a method that exploits this pattern to skip the computation of attention between many query/key pairs and hence speed up the generation."
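As a side note for readers, a minimal sketch of what skipping masked query/key pairs amounts to (my own illustration with a made-up banded mask; the paper's actual tile-based mask layout and fused kernels are more involved):

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    # mask[i, j] = True means query i may attend to key j; with a sparse
    # tile-style mask most query/key scores never need to be computed
    # (here we simply neutralize them with -inf for clarity).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
L, d = 16, 8                      # toy sequence length and head dimension
Q, K, V = rng.standard_normal((3, L, d))
idx = np.arange(L)
mask = np.abs(idx[:, None] - idx[None, :]) <= 2   # made-up banded mask
out = masked_attention(Q, K, V, mask)
print(out.shape)                  # (16, 8)
```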
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This work over-claimed the contribution of the proposed method. Actually, the efficiency improvement is mostly coming from MLCD, which is proposed by another work. The real improvement from the main finding or the proposed 'new' method in this paper is much less than MLCD.\n - Experiment is not thorough. This paper only experimented their method on one DiT based text-to-video generation model.\n - Comparison with other methods is missing. This paper only compared the results from different hyper parameters of the proposed method. Many existing methods that accelerate diffusion models are missing in the paper.\n - The larger diagonal attention is not something new or surprising as the each query token is computing the `correlation` with itself." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What happens if the stages are switched, i.e., first obtain T_{sparse}, then T_{MCM}​ from T_{sparse}, and finally apply the knowledge distillation step?\n\nTable 4 needs additional quantitative metrics like aesthetic quality, subject consistency, imaging quality, and FVD to provide a complete understanding of the effect of parallelization.\n\nWhen comparing speed-up performance for parallelization, are the baseline models also trained with parallelization (Table 4)?\n\nHow does the proposed model achieve a lower FVD (Table 5) than the main base model, given that the proposed model is ultimately a distilled version of the main model?\n\nHow is the claim (lines 424 to 430) that model performance is within 1% of the base model accurate? It is evident that the numbers for imaging quality and subject class are significantly lower than those of the base model.\n\nAblation studies in Table 6 show that only MLCD can speed up the process by 5 to 8 times compared to the base model without significantly compromising quality. What is the justification, then, for the need for sparse attention maps on top of that?\n\nIt seems the main contribution is the sparse attention part. However, some doubts remain. Therefore, I can increase my rating if my questions and concerns in the weakness section and questions section are answered satisfactorily." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written.\n\nThe computational complexity of video diffusion models presents a significant challenge, and the authors effectively highlight this issue and provide a good motivation for addressing it.\n\nTo tackle this, the solution provided by the authors of using a sparse attention map is interesting. 
Although thinking in this direction is not new, the way the authors motivate the solution and compute the attention maps is scientifically sound and has some novelty.\n\nThe computational speed-up achieved by the method looks impressive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a framework to speed up video generation using Video Diffusion Transformers by optimizing attention computation and reducing sampling steps. A repetitive attention tile pattern in 3D attention maps is identified which allows for sparse attention that lowers complexity. The framework uses a three-stage training pipeline: multi-step consistency distillation to reduce sampling steps, a layer-wise search for optimal sparse attention masks, and knowledge distillation to retain performance. This approach claims to achieve up to a 7.8× speedup in video generation with minimal quality loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In the video generation literature, there are models that generate frames sequentially or follow an auto-regressive approach [1,2]. These models may be less computationally expensive than those using full 3D attention heads, yet there is no empirical or theoretical comparison with such models in the paper.\n\nThere should be an ablation study with the separate effects of sparse attention (without the MLCD) to understand each component in more detail.\n\nThe sampling distillation stage (Stage 1) is not really new, either technically or conceptually. There has been a line of work that provides a similar methodology [3,4], etc. It is not clear how different the proposed distillation is from the existing literature. The same can be said for the knowledge distillation in the final stage (Stage 3).\n\nThe paper has only two qualitative video generation results (or at least what I have found), of which only four frames are shown. There should be a lot more generated videos shown side by side to compare the method qualitatively.\n\n[1] Diffusion forcing: Next-token prediction meets full-sequence diffusion. Chen et al. 2024.\n\n[2] Diffusion models are real-time game engines. Valevski et al. 2024.\n\n[3] MLCM: Multistep Consistency Distillation of Latent Diffusion Model. Xie et al. 2024.\n\n[4] SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation. Liu et al. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024efficientvdit,\ntitle={Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ezRxhlAxJ},\nnote={under review}\n}" }, "abstract": { "value": "Despite the promise of synthesizing high-fidelity videos, Diffusion Transformers (DiTs) with 3D full attention suffer from expensive inference due to the complexity of attention computation and numerous sampling steps. For example, the popular Open-Sora-Plan model consumes more than 9 minutes for generating a single video of 29 frames. This paper addresses the inefficiency issue from two aspects: 1) Prune the 3D full attention based on the redundancy within video data; We identify a prevalent tile-style repetitive pattern in the 3D attention maps for video data, and advocate a new family of sparse 3D attention that holds a linear complexity w.r.t. 
the number of video frames. 2) Shorten the sampling process based on multi-step consistency distillation; We split the entire sampling trajectory into several segments and perform consistency distillation within each one to activate few-step generation capacities. We further devise a three-stage training pipeline to conjoin the low-complexity attention and few-step generation capacities. Notably, with 0.1% pretraining data, we turn the Open-Sora-Plan-1.2 model into an efficient one that is 7.4x −7.8x faster for 29 and 93 frames 720p video generation with less than 1% performance loss in VBench. In addition, we demonstrate that our approach is amenable to distributed inference, achieving an additional 3.91x speedup when running on 4 GPUs with sequence parallelism." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Efficient inference", "video generation", "diffusion", "Transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1e84b60ce5ccba47c5a2638140b9a9eedd50673a.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2fZ9iOVzpR
A Study of Posterior Stability for Time-Series Latent Diffusion
main
Active
Latent Diffusion;Time Series;Diffusion Models;Posterior Collapse;Impact Analysis
generative models
3;5;5;5;5;8
4;3;2;3;5;4
2;2;3;2;2;3
2;2;2;2;3;3
2;2;3;3;2;3
5.166667
3.5
2.333333
2.333333
2.5
0.059514
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tCould the authors clarify how the dependency measure scales with longer time-series datasets? Does the framework handle large datasets efficiently?\n2.\tHave the authors considered extending this approach to other data types beyond time series? If so, how might the framework need to be adapted?\n3.\tIs there a specific reason for not including additional baselines, such as non-latent diffusion models, for comparison in the empirical section?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThe introduction of dependency measures to diagnose and address posterior collapse is both novel and insightful, providing a fresh perspective on an important issue within latent diffusion models.\n2.\tThe paper offers a solid theoretical foundation for the analysis of posterior collapse, and the proposed framework is well-motivated by both theoretical insights and empirical observations.\n3.\tThe proposed framework demonstrates significant improvements in the performance of time-series generation models, effectively addressing a key limitation in existing approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of posterior collapse in latent diffusion models, specifically when applied to time series data. The authors provide a systematic analysis of this issue, showing that posterior collapse can reduce the expressiveness of latent diffusion to that of a variational autoencoder (VAE). They introduce a novel dependency measure to quantify the impact of latent variables on the generation process and identify a phenomenon called dependency illusion when time series data are shuffled. Building on these insights, the authors propose a new framework that eliminates the KL-divergence regularization, permits an expressive prior distribution, and ensures the decoder remains sensitive to the latent variable. Extensive experiments demonstrate that this framework avoids posterior collapse and significantly improves time series generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile the paper presents strong results for time-series data, it lacks a detailed discussion on the generalizability of the approach to other data modalities, such as images or text. Including a brief exploration or discussion of potential extensions could further enhance the contribution.\n2.\tThe experimental details, including specific configurations for baselines and the selection of hyperparameters, are not fully elaborated in the main text. 
Providing more comprehensive explanations in these areas would improve the paper’s clarity and reproducibility.\n3.\tAlthough the results are promising, some of the visualizations could be made more intuitive, particularly for readers unfamiliar with latent diffusion models. Additionally, converting the figures to vector graphics would significantly improve their quality, as several of the current images appear blurry and lack sharpness, which makes interpretation more difficult. Enhancing the clarity of the figures would improve the overall presentation of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors include comparisons with recent state-of-the-art time series models, such as ARIMA, LSTMs, transformers, and TCNs, which are naturally robust against posterior collapse? This would contextualize the proposed method’s advantages relative to stable baselines.\n\n- Could the authors provide clearer definitions or examples for terms like dependency illusion and posterior collapse in the context of latent diffusion models? A simplified explanation would improve accessibility.\n\n- Are there specific real-world applications, such as anomaly detection or real-time forecasting, where this framework would be particularly useful? A discussion of practical use cases would strengthen the framework’s relevance." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper addresses a previously underexplored issue in time series diffusion models—posterior collapse—which has primarily been studied in variational autoencoders (VAEs) but not in the context of diffusion models for time series.\n- The dependency measure provides an insightful tool for quantifying the decoder’s reliance on the latent variable. This measure enables detection of both posterior collapse and dependency illusion, offering valuable diagnostic capabilities for latent-variable models.\n- The approach aligns with the paper’s theoretical objectives, yielding meaningful performance improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the issue of posterior collapse in latent diffusion models for time series data, where the latent variable becomes ineffective in influencing the model’s output. The authors propose a dependency measure to quantify how much the decoder relies on the latent variable, highlighting not only posterior collapse but also a related phenomenon termed dependency illusion. Then the paper introduces a new framework to address these issues by removing KL-divergence regularization and enhancing the decoder’s sensitivity to the latent variable, improving posterior stability. 
Experiments demonstrate that the proposed method achieves better performance than standard latent diffusion models with posterior collapse mitigation techniques across various time series datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The empirical evaluation lacks comparisons with stable time series models that naturally avoid posterior collapse, such as ARIMA, RNNs, LSTMs, transformers, and temporal convolutional networks. Including these baselines would provide context on whether the proposed framework offers advantages beyond mitigating posterior collapse. The author also did not compare with recent baselines for time series, which are diffusion-based. Please check papers published in NeurIPS/ICLR/ICML in the past two years. \n\n- The paper references Bowman et al. (2016) to support claims about posterior collapse in latent-variable models for time series, which may be outdated. This raises questions about whether latent diffusion models represent the current state of the art in time series modeling. Comparing the approach with recent state-of-the-art time series methods would strengthen the justification for the proposed framework.\n\n- Although the datasets used are realistic, the paper does not discuss broader real-world applications or scenarios where posterior stability is crucial, such as in anomaly detection or real-time forecasting. Adding context on practical use cases would clarify the framework’s relevance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It’s unclear to me why negative local dependency is bad. The authors claimed that it’s because the previous timestamp’s data may be from a low density region and therefore an outlier. But in case that the actual next value to be decoded should indeed be an extreme value, why is that problematic?\n\nCan you discuss the stability of the training? In cases where we 1) train the diffusion model and the decoder together 2) we require the decoder to decode the time series regardless of which timestamp’s noised version of the latent variable is selected." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "--The paper tries to focus on the specific issues of time-dependency collapse in the case of time series data and diffusion models.\n\n--The shuffling experiments help illustrate how a latent variable is not being used strongly throughout all time steps" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of posterior collapse in time-series latent diffusion models, a phenomenon that limits model expressivity by reducing it to a simpler VAE model. 
The authors introduce a dependency-measure regularization aimed at alleviating this collapse specifically within time-series data. Experimental results on three datasets (WARDS, MIMIC, and Earthquakes) demonstrate initial improvements in preventing posterior collapse over shorter timeframes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "--The problem is not sufficiently well motivated. In particular, the two types of mode collapse in time series (time-dependent and time-independent) are not discussed. The reduction to a VAE is only about the elimination of the time-dependent influence. The impact of this simplification is not sufficiently discussed.\n\n--Moreover, the reduced expressivity is not explicitly shown to be a bad thing in the context of time series in general. There are potentially time series which are driven by a static latent process.\n\n--Although the dependency measure is well-defined, there is little theoretical analysis exploring its properties and its relationship to the reduction to a VAE model.\n\n--There is no analysis of the results showing specifically how the introduced technique solves the problem of mode collapse. Results with good Wasserstein distance do not directly imply that the issue of mode collapse was resolved.\n\n- This paper claims that they are the first to address the posterior collapse problem in latent diffusion for time series, but it really boils down to the old autoencoder problem. And the observation that the diffusion model becomes a redundant module when its input is a standard Gaussian distribution is a simple extension of the autoencoder problem." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The study of the latent diffusion model when applied in the context of time series is a trending topic and very interesting. The authors approach this by defining a proper dependency measure to quantify the problem of posterior collapse of the latent variable, and propose a new framework inspired by re-thinking the design of VAE and autoencoders. The new framework is equipped with new loss functions and regularizations and is free from posterior collapse. The discussion comes together with empirical support. Overall, the paper's content is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new approach to establish a stable posterior for time series within the latent diffusion framework. The new approach circumvents the problematic KL-divergence regularization, prevents the posterior collapse, and maintains the influence of the latent variable in the decoder.
The authors provide both theoretical and experimental support for their newly proposed framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experimental results mainly focus on real-world data to demonstrate the sampling benefits of the proposed method. Can the authors conduct synthetic data experiments to interpret and validate the effectiveness of the newly proposed components in the framework (e.g., the introduced loss function or regularization)?\n\n2. The prediction ability of a time series model is critical. Can the authors evaluate the proposed framework in terms of other metrics, such as the predictive MAE, to demonstrate its prediction ability?\n\n3. In addition to point 2, can the authors compare with other advanced time-series models? Comparing only with the latent diffusion family would not be convincing enough for advertising the model in the time series community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. You claim that “when applied to time series data, this framework might suffer from posterior collapse” in the first paragraph. Do you have any evidence to support this claim? Is this phenomenon due to the diminishing influence of the latent variable on the decoder over time steps? How do you justify that the decreased dependency corresponds to the posterior collapse of latent diffusion for time series data?\n\n2. In Section 4.2, you mention that the variational inference in your framework leads the latent variable to have a smooth effect on the decoder. Is this the reason why your framework can increase the sensitivity of the decoder to the latent variable? Can your framework be applied to non-time series data? It seems the proposed method is not specific to time series data." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors focus on an important issue of latent diffusion, that is posterior collapse, and propose a potential method to quantify the posterior collapse of latent diffusion for time series data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to address the posterior collapse problem of latent diffusion for time series data. The authors propose a dependency measure method to quantify how posterior collapse happens. And they propose a KL-divergence regularization based method to improve the sensitivity of the decoder to the latent variable for time-series latent diffusion." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Regarding the dependency illusion, you give an example (upper right subfigure of figure 1) to explain.
But it’s unclear from figure 1 how you arrive at the conclusion that \"Even when the time series is randomly shuffled and thus lacks structural dependencies, the decoder of latent diffusion still heavily relies on input observations (instead of the latent variable) for prediction.\" Could you clarify how you determine that the decoder \"heavily\" depends on input observations? Providing a more detailed explanation or additional quantitative evidence would help support this observation.\n\n2. In Section 3.1, the definition of posterior collapse seems to be a general term for all data, not only time series data. In Section 3.2, you introduce a dependency measure to demonstrate the occurrence of posterior collapse in time series data. How does this measure specifically address time series data? Would this measure yield the same conclusion if applied to non-time series data?\n\n3. As shown in Figure 4, the dependency of the decoder on the latent variable decreases across both datasets. Although this trend appears improved compared to Figure 2, it would strengthen your findings to compare your method against additional baseline models, rather than only basic latent diffusion.\n\n4. There is a lack of experimental evidence supporting that the proposed dependency measure can accurately assess the impact of the latent variable on the decoder. You should compare your method with other measurement approaches and demonstrate how it outperforms them, providing a more comprehensive validation of its effectiveness.\n\n5. The baselines are not sufficient: there are only three, all from before 2019. Please compare with more state-of-the-art works.\n\n6. The references in this paper seem too old, and some of them are repeated. For example, the papers in line 573 and line 576 are the same one." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I agree on the importance of the posterior collapse issue found by the authors. I am wondering what the generation performance of a time series diffusion model would be if the latent variables had the same dimension as the observations." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Novelty: It is the first work to discuss the posterior collapse issue for time series latent diffusion. In particular, the paper introduces the novel dependency measure to quantify how the impact of the latent variables on generating the predicted observations decreases along the time steps. The authors develop a novel framework, which effectively avoids the dependency illusion issue and outperforms the related time series latent diffusion models in terms of generation quality.\n\nClarity: This work clearly illustrates the posterior collapse and dependency illusion issues by plotting the dependency measures over time steps.
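For concreteness, one way such a per-step dependency measure could be computed is sketched below (my own illustrative construction with a stand-in decoder, not the paper's exact definition):

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(x_past, z):
    # Stand-in recurrent decoder: predicts the next observation from the
    # observed prefix and the latent variable (weights are made up).
    return np.tanh(0.5 * x_past[-1] + x_past.mean() + 0.1 * z.sum())

def dependency(x, z, t, n_resample=256):
    # Average change of the step-t prediction when the latent variable is
    # resampled; a value near 0 means the decoder effectively ignores z.
    base = decoder(x[:t], z)
    alt = [decoder(x[:t], rng.standard_normal(z.shape))
           for _ in range(n_resample)]
    return float(np.mean(np.abs(np.array(alt) - base)))

x = rng.standard_normal(50)   # a toy time series
z = rng.standard_normal(8)    # a toy latent variable
print([round(dependency(x, z, t), 3) for t in (1, 10, 49)])
```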
Most parts of the analysis are clearly presented and easy to follow.\n\nSignificance: The work demonstrates a significant issue for latent diffusion applied to capturing time series data. The introduced dependency measure might be used to quantify the posterior stability of other related methods, and thus appears to be crucial." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work starts from an analysis on the posterior collapse issue of latent diffusion for capturing time series. In particular, the KL-divergence from the standard Gaussian prior to the latent variable distribution approximated using latent diffusion may reduce the time series latent diffusion to a VAE model. The authors define a dependency measure, which shows that the influences of the latent variables on decoding the observations will decrease to zero along the diffusion time steps. In particular, as analyzed in the paper, the decoder built upon recurrent neural nets may decode the current observations only using the past observations, thus leading to the dependency illusion issue. To address the problems, the paper develops a novel framework, in which the authors remove the KL-divergence regularization that causes the posterior collapse, decode the predicted observations using the latent variable sampled at intermediate diffusion steps, and introduce a novel penalty to avoid dependency illusion issues. The final experiments demonstrate the new framework can effectively avoid posterior collapse, and thus achieves superior generation quality in comparison to some SOTA time series latent diffusion methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The final experiments demonstrate the compared models on only three datasets, using the Wasserstein distance between the true and generated time series. Perhaps the experiments could be enhanced by considering more evaluation metrics?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Conducted a solid impact analysis of posterior collapse for time-series latent diffusion and proposed a new framework that is free from the problem." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Study of Posterior Stability for Time-Series Latent Diffusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2fZ9iOVzpR},\nnote={under review}\n}" }, "abstract": { "value": "Latent diffusion has demonstrated promising results in image generation and permits efficient sampling. However, this framework might suffer from the problem of posterior collapse when applied to time series. In this paper, we first show that posterior collapse will reduce latent diffusion to a variational autoencoder (VAE), making it less expressive. This highlights the importance of addressing this issue. We then introduce a principled method: dependency measure, that quantifies the sensitivity of a recurrent decoder to input variables. Using this tool, we confirm that posterior collapse significantly affects time-series latent diffusion on real datasets, and a phenomenon termed dependency illusion is also discovered in the case of shuffled time series. Finally, building on our theoretical and empirical studies, we introduce a new framework that extends latent diffusion and has a stable posterior.
Extensive experiments on multiple real time-series datasets show that our new framework is free from posterior collapse and significantly outperforms previous baselines in time series synthesis." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Latent Diffusion", "Time Series", "Diffusion Models", "Posterior Collapse", "Impact Analysis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a907270f26c6262416b21d2cf096b7f589d61e7a.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A Study of Posterior Stability for Time-Series Latent Diffusion" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2fgzf8u5fP
Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding
main
Active
Diffusion models;Reinforcement learning;AI for science
generative models
3;3;3;5;5
4;5;4;3;2
3;2;1;3;3
2;1;2;3;2
2;2;2;3;3
3.8
3.6
2.4
2
2.4
-0.880705
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is generally well-written, and the motivation is also clear. It starts with an important problem and proposes a well-motivated solution that requires no finetuning or differential proxy model. \n\n- The paper is clear about how two critical challenges (the soft-value function is both unknown and unnormalized) are addressed by the proposed algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for diffusion models to sample data that is both within target distribution and maximizing some downstream reward function. The problem the paper studies is of great importance, and the method shows empirical effectiveness in some downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The soft value function seems to difficult to approximate in general. Is there any anlysis or justification to quantify the quality of the approximation? How does one know a good approximation is indeed attained? Moreover, how does the approximation quality matter for the generation? More ablation study can improve the paper further.\n\n- Is there any additional computational overhead for the proposed method? Is the approximation to the soft value function costly?\n\n- The performance gain does not seem to be very significant compared to simple baselines, say Best-of-N. From Table 2, Best-of-N baseline is only incrementally worse than the proposed method in molecule-related experiments.\n\n- A minor question: Does the size of the diffusion model affect the performance of SVDD? I will be interested to see how this method works for diffusion models of different size." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. I did not see it mentioned in the manuscript -- how many seeds were used for experiments?\n2. Could the authors provide a discussion of the relation between the consistency of nested SMC and the consequence of using more or less particles in their method?\n3. 
How expensive is training the soft-value function estimate in SVDD-MC? If it takes reasonably long, would it be worth adding a fine-tuning-based method (e.g., relative trajectory balance, https://arxiv.org/abs/2405.20971)? On the other hand, if training the soft-value estimate is especially cheap, it would be worthwhile to emphasize this more in the manuscript as a benefit of this method compared to direct fine-tuning methods.\n4. Since the goal of SVDD is to sample from the product distribution of reward and pretrained model, could the authors add some metrics evaluating the diversity of their samples or their naturalness (e.g., likelihood under the pre-trained model when available)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper presents a number of strengths, such as\n\n- Presenting a novel application of nested Sequential Monte Carlo to the difficult problem of sampling from the product distribution $p^*(x_0)$ given a pre-trained diffusion model.\n- The method, especially SVDD-PM, provides a particularly efficient way to sample from the product distribution when no differentiable reward is available and the reward function is cheap. The manuscript shows that SVDD can indeed increase the reward over the pre-trained model, offering a compelling option to sample from the target product distribution with little overhead.\n- The problem of cheaply sampling from the product distribution in the presence of non-differentiable rewards is especially significant, as existing methods typically require availability of gradients or expensive (typically simulation-based) fine-tuning algorithms. Non-differentiable rewards are often seen in scientific discovery, a target area aptly pointed out by the authors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SVDD, a method which aims to sample from the product distribution $p^*(x_0) \\propto p^{pre}_0(x_0) \\exp(R(x_0)/\\alpha)$ for some non-negative reward function $R(x)$, constant $\\alpha \\geq 0$ and pre-trained diffusion model $p^{pre}_t(x_t | x_{t + 1})$.\n\nThe method employs an SMC-inspired procedure for iteratively sampling and reweighting samples according to a soft value function and its corresponding optimal policy. Providing two options to obtain the soft-value function (which is required for the method's importance sampling step), the authors show that SVDD can be used with a cheap approximation based on the diffusion model's denoiser or an amortized version based on regression to a Monte Carlo estimate. The authors evaluate the performance of SVDD on a series of tasks -- images, molecule design, and DNA/RNA design." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, I have some issues regarding the clarity of the paper, concerns about sources of bias that are not discussed in the manuscript, and worries that the experimental section does not paint a fair picture of SVDD's performance relative to baselines. I will discuss each of these in turn.\n\n\n\n### Unclear focus of probabilistic inference vs reward maximization\n\nSection 3.2 states that the objective of this paper is to perform probabilistic inference and sample from the target distribution $p^*(x_0) \\propto p^{pre}(x_0)\\exp(R(x_0) / \\alpha)$.
However, towards the beginning of the experiment section and throughout the appendix, the manuscript begins to say that SVDD is actually meant for reward maximization, not the problem of sampling from $p^*(x_0)$. In particular, the manuscript states that in practice they set $\\alpha = 0$ for all experiments, which corresponds to a constrained reward maximization where $p^*(x_0)$ is a Dirac centered at $x_0^* = \\underset{x_0 \\in Support(p_0^{pre}(x_0))}{\\arg\\max}R(x_0)$. This is quite different from sampling from $p^*(x_0)$ for any $\\alpha > 0$, and if this is the goal of SVDD, it should be compared to baselines which try to do reward maximization.\n\n\n\n### Missing discussion and investigation on bias of soft value function estimates\n\nThe manuscript defines the soft value function as $v_t(x_t) = \\alpha \\log \\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[\\exp(R(x_0) / \\alpha)]$. Next, due to issues with numerical stability for small values of $\\alpha$, the authors make use of an approximation\n\n$v_t(x_t) = \\alpha \\log \\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[\\exp(R(x_0) / \\alpha)]$\n\n$ \\approx \\alpha \\log \\exp(\\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[R(x_0)] / \\alpha)$\n\n$ = \\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[R(x_0)]$\n\nThe second step takes the $\\exp$ function outside of the expectation and as such requires an application of Jensen's inequality, implying that $v_t(x_t) \\geq \\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[R(x_0)]$. This means that the Monte Carlo regression used for SVDD-MC is in fact biased (although consistent), a fact which is not mentioned in the paper.\n\nThe situation is more complicated for SVDD-PM, which first applies Jensen's inequality and then another approximation:\n\n$v_t(x_t) \\geq \\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[R(x_0)]$\n\n$ \\approx R(\\mathbb{E}_{x_0 \\sim p^{pre}(x_0 | x_t)}[x_0])$\n\nIt is unclear to me whether the error of the posterior mean estimate can be shown to be bounded, as the reward function is potentially non-convex, but I would be happy if the authors have some insight into this.\n\nGiven that SVDD requires accurate estimates of the soft-value functions to sample from the target distribution $p^*(x_0)$, I would be more convinced of SVDD's abilities were there a more detailed (potentially including empirical results) analysis of the bias of the Monte Carlo regression and posterior mean estimates.\n\n\n\n### Issues with inconsistent setting of $\\alpha$ for baselines\n\nThe stated goal of SVDD is to sample from the target distribution $p^*(x_0;\\alpha) \\propto p^{pre}(x_0)\\exp(R(x_0) / \\alpha)$, where the temperature parameter $\\alpha$ controls how peaky $p^*(x_0;\\alpha)$ becomes. As discussed above, as $\\alpha \\rightarrow 0$ the target distribution becomes focused on the maximizer of the reward which is in the support of the pretrained model, such that\n\n$\\mathbb{E}_{x \\sim p^* (x;0^+)}[R(x)] = \\underset{x_0 \\in Support(p^{pre}(x_0))}{\\max}R(x_0)$.\n\nIn general, as the value of $\\alpha$ is decreased, the expected reward under the target distribution should increase. As such, comparing the distribution of rewards of generated samples for methods using different values of $\\alpha$ does not paint an accurate picture of each method's performance, as one method having a higher quantile reward may simply be a consequence of the setting of $\\alpha$.
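To make the $\alpha$-dependence concrete, here is a toy numerical check (entirely synthetic numbers of my own choosing) of how the expected reward under $p^*(x;\alpha) \propto p^{pre}(x)\exp(R(x)/\alpha)$ moves with $\alpha$:

```python
import numpy as np

p_pre = np.array([0.5, 0.3, 0.15, 0.05])  # made-up pretrained probabilities
R     = np.array([0.1, 0.4, 0.7, 1.0])    # made-up rewards

for alpha in (10.0, 1.0, 0.1, 0.01):
    w = p_pre * np.exp(R / alpha)          # unnormalized p*(x; alpha)
    p_star = w / w.sum()
    print(f"alpha={alpha:>5}: E[R] = {p_star @ R:.3f}")
# As alpha -> 0, all mass collapses onto the in-support reward maximizer,
# so E[R] -> max(R) = 1.0; reward quantiles are therefore not comparable
# across methods run at different alphas.
```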
\n\nUnfortunately, the manuscript's experiments use significantly different values of $\\alpha$ for its method and baselines, while using the reward at different quantiles as the main performance metric. This is more problematic as the value of $\\alpha$ for their method is set to $0$ (where the true expected reward is the maximum reward value), while a larger value of $\\alpha$ is used for the baselines. Because the value of $\\alpha$ is not set to be equal for SVDD and all baselines, I do not believe that the experimental results in Table 2 paint a fair picture of SVDD's performance.\n\n\n\n### Overall comments\n\nI generally have concerns that either the number of particles $M$ is too small or the bias of the soft-value function estimates is too large. As far as I understand (and perhaps I am missing something!), by setting $\\alpha=0$ for SVDD in the experiment section, the method _should_ be suffering from mode collapse and generating very high reward samples, as the target distribution $p^*(x_0)$ is a Dirac centered at $x_0^* = \\underset{x \\in Support(p_0^{pre}(x))}{\\arg\\max}R(x)$. However, samples from SVDD do not exhibit this expected mode collapse, which seems to indicate that either many more particles $M$ need to be used or the bias from the value function estimation is preventing the algorithm from properly sampling from the target distribution.\n\nI note that the main reason for my score is mostly the issue with the inconsistent setting of $\\alpha$ for SVDD and baselines in the experiments section. The missing discussion/analysis of the bias of the value function estimates and their impact on SVDD's ability to sample from the target distribution also contributes significantly to my score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is there a typo under the first equation in Section 4.1, where the expectation is induced by $p_t^{pre}(\\cdot | x_{t-1})$? Note the negative instead of the positive sign in the subscript." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors provide a widely applicable method that can be applied both to discrete and continuous diffusion settings.\n- The proposed method, unlike previous guidance algorithms, does not rely on an explicitly trained conditional diffusion model (e.g., for classifier-free guidance), or on differentiable reward terms (e.g., classifier-based guidance).\n- Results on both image and scientific domains highlight the benefits of the approach towards controlled generation of objects with high downstream reward, as intended.\n- The work also conducts experiments in a wide variety of domains, spanning images, molecules, and DNA."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work provides a unified framework of guidance in diffusion models, both discrete and continuous, with minimal additional training and applicability in domains where a downstream reward might not even be differentiable. The proposed method SVDD (MC and PM) is applicable in discrete diffusion where a continuous gradient of energy cannot be directly added to the discrete state space, as well as cases where the reward is non differentiable which is the case in a lot of scientific domains. The work tackles an important problem in the scientific domain and leads to controllable generation without having to fine-tune large scaled models. Their results show that generations from SVDD lead to higher downstream rewards than the baselines considered." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors consider setting $\\alpha=0$ in their experiments. However, prior work highlights that setting $\\alpha=0$ leads to over-optimization and increasingly reduces the diversity and realistic nature of the samples. Could the authors provide clarity on why this is not a problem in their setup?\n- The work relies on two major assumptions (one for SVDD-MC and the other for SVDD-PM), which are neither well motivated theoretically nor are there any details provided about it. \n- **Assumption about SVDD-MC**: The authors replace the logarithm of expectation with the expectation of logarithm in their quantity of interest, which in reality is only a bound on the actual quantity. Could the authors consider experimenting on some synthetic domain to describe the bias and variance caused by this approximation? When is this approximation reasonable and when would it be extremely incorrect?\n- **Assumption about SVDD-PM**: This algorithm combines the above approximation with pushing the expectation inside the reward function $r(\\cdot)$. As with above, could the authors conduct experiments on synthetic domains and highlight when and where such an assumption is reasonable, and when is it violated?\n- While the approach leads to generation of samples with high reward, the authors do not provide any kind of metrics that test for diversity of the samples generated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I would appreciate it if the authors could address my theoretical concerns regarding 1.) what is the optimal distribution hit by SVDD 2.) what is the bias introduced in the MC estimate and 3.) the actual computational cost of SVDD-MC.\n\nIn addition, I would appreciate it if the authors could carry out the additional experiments I have suggested with added diversity quantification." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper tackles a timely problem in considering fine-tuning diffusion models. Moreover, the suggested approach of SVDD-PM enjoys being computationally cheap to use as it does not require any further training while both SVDD-MC and SVDD-PM are applicable in settings where the reward function is non-differentiable. This is impactful because this unlocks a lot of potential application domains that have black-box rewards where learning a surrogate reward model is non-trivial. Finally, the paper considers a diverse set of experimental settings to showcase the universality of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SVDD which is a new method to fine-tune diffusion models in both continuous and discrete spaces. SVDD-MC learns a soft value function by regression to $x_t$ while SVDD-PM exploits the posterior mean parametrization of masked diffusion models to estimate the soft value function directly. Given a soft value function, SVDD can be applied to a general class of reward functions, including non-differentiable ones, at inference time without further fine-tuning. This can be seen as a variation of the famous Sequential Monte-Carlo algorithm but applied and modified for diffusion models. Experiments are done on images, molecules, and docking and show improvements in fine-tuning performance under a specified reward metric." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper has some notable strengths there are a few lingering questions that point to potential weaknesses. I will try to list them below.\n\n**Theoretical weaknesses**\n\nTwo main questions arise when looking at the setup. The first one is the actual target distribution and whether SVDD hits it. In the continuous setting, I have severe doubts about whether the correct terminal distribution is reached due to the initial value function bias problem as introduced in Adjoint Matching (Domingo-Enrich et. al 2024). Certainly, nothing in the current theory suggests the process during fine-tuning is memoryless. Moreover, it is unclear what ramifications and bias we introduce in the MC setup when the regression is done using $r(x_0) \\to x_t$ as opposed to the more numerically unstable soft value function. For example, I believe when you remove the $\\exp$ from your regression target, this is a based value function but there is no discussion on this point outside of the fact that it is less numerically stable. As a result, I am dubious about the claims made about hitting the correct target distribution.\n\n\nAnother question is the connection to Sequential Monte Carlo. There is a discussion on this in the paper but I think it's not accurate enough. I disagree with the statement made in the paper. The algorithm you propose is quite literally SMC but adapted to reward maximization, there is even a resampling step which is exactly what is done in SMC. The arguments that SMC is done over a batch are lukewarm. There is nothing wrong with demonstrating that SMC can be effectively applied to sampling from discrete diffusion---like analogously done for an autoregressive model by Zhao et. al 2024---and this is a valuable contribution. 
I suggest the authors be a bit more forthright with their claims, as I would buy them a lot more. In fact, with the right framing, you achieve novelty by showing how SMC applies to a newer, more interesting problem domain.\n\n**Additional Technical weaknesses**\n\nOne of the main selling points of SVDD is the fact that it is supposed to be a cheap inference-time algorithm. This I believe is not quite true because of the need to estimate the soft value function in SVDD-MC. Indeed, one must estimate the soft value function using rollouts, which I believe adds a heavy pre-processing step. I also did not see SVDD-MC in the ablation studies about computational cost---likely because it's significantly more expensive than SVDD-PM. Thus, I believe the main claim of SVDD-MC being a lightweight method is a bit misleading. Of course, if you had the perfect estimated value function then inference scales as indicated in the plots for 3c,d, but this is not the full picture.\n\n**Experimental weaknesses**\n\nA glaring missing baseline is Relative Trajectory Balance (Venkatraman et al. 2024), which does fine-tuning exactly like this paper considers for both discrete and continuous diffusion models. I kindly request the authors to consider adding this important baseline. Moreover, it is a bit surprising that there is no text experiment given the heavy emphasis on using Masked Diffusion Models, which have primarily been introduced for text. I would be encouraged to see a text experiment---perhaps of a similar scale to Zhao et al. 2024---to highlight that SVDD can be applied in the text setting.\n\nThe current experimental findings in Table 2 are not complete, as they do not show other important aspects of the generated samples. They simply show that reward is maximized, but this could also happen through gaming of the reward function. For instance, I would appreciate the authors providing sample-based diversity metrics to quantify how bad the drop in diversity is among the baselines. At the very minimum, FID scores for images should be provided, and I'll let the authors determine appropriate diversity metrics for the other domains to complement the findings in Table 2.\n\n**Closing remarks**\n\nHaving said all of these weaknesses, I will note that I am open to significantly raising my score if **all of my concerns** are adequately addressed to my level of satisfaction. I will also state that I did not read the appendix, so if I have missed something I would appreciate a pointer to the result there.\n\nI encourage the authors in their rebuttal endeavors and I hope they can strengthen the paper, which I would like to eventually recommend for acceptance, but not in its current state.\n\n\n**References**\n\nVenkatraman, Siddarth, et al. \"Amortizing intractable inference in diffusion models for vision, language, and control.\" arXiv preprint arXiv:2405.20971 (2024).\n\nZhao, Stephen, et al. \"Probabilistic inference in language models via twisted sequential Monte Carlo.\" arXiv preprint arXiv:2404.17546 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I do not have further questions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "There are several advantages of SVDD over previous methods:\n1. No need for differentiable proxy models\n2. No fine-tuning required\n3. Works with both continuous and discrete spaces\n5. Maintains better sample diversity compared to other approaches\n\nThe writing of this paper is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method called Soft Value-based Decoding in Diffusion models (SVDD) for optimizing diffusion models to generate samples with desired properties while maintaining naturalness. The contributions include:\n- SVDD is an inference-time technique that doesn't require fine-tuning the original diffusion model\n- Can work with non-differentiable reward functions, unlike previous methods that required differentiable proxy models\n- Applicable to both continuous and discrete diffusion models in a unified way\n\nThe algorithm works in the following way:\n1. Uses \"soft value functions\" that predict future rewards from intermediate noisy states\n2. At each denoising step:\n - Generates multiple samples using the pre-trained model\n - Selects samples based on their predicted value\nThere are two variants:\n - SVDD-MC: Uses Monte Carlo regression to learn value functions\n - SVDD-PM: Directly uses reward feedback without additional training\n\nExperimental Results span across multiple domains: image generation, molecule generation, DNA/RNA sequence generation. The proposed method consistently outperformed baseline methods while maintaining sample validity. \nThe paper demonstrates that SVDD provides an effective way to guide diffusion models toward desired properties while preserving the natural characteristics learned during pre-training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness is about novelty. To be more specific, I can not see significant difference with twisted SMC methods (e.g., the papers mentioned in Sec 6 and App B). In the writing I see two differences claimed by the authors:\n1. In previous works such as Wu et al., the reward is a classifier; while here it is \"reward maximization\" setting. \n\nFirst, I think the setting in this work should not be called \"reward maximization\" but be called \"alignment\" or \"reward sampling\" or similar names, due to the reasons in Sec 3.2 \"HIGH REWARDS WHILE PRESERVING NATURALNESS\". Second, whether the reward function is a classifier is not critical, as even for an usual reward r(x), we can understand it as an unnormalized probability prob(optimality | x).\n\n2. \"SMC methods involve resampling across the “entire” batch, which complicates parallelization. Additionally, when batch sizes are small, as is often the case with recent large diffusion model\"\n\nI do not quite understand this part. I may miss the difference between SVDD and twisted SMC methods. Does the batch size mean the number of particles in SMC? It will be good if there could be a clarification." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Derivative-free, training-free, fine-tuning-free reward optimization algorithm in diffusion models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024derivativefree,\ntitle={Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2fgzf8u5fP},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences. However, rather than merely generating designs that are natural, we often aim to optimize downstream reward functions while preserving the naturalness of these design spaces. Existing methods for achieving this goal often require differentiable proxy models (e.g., classifier guidance or DPS) or involve computationally expensive fine-tuning of diffusion models (e.g., classifier-free guidance, RL-based fine-tuning). In our work, we propose a new method to address these challenges. Our algorithm is an iterative sampling method that integrates soft value functions, which looks ahead to how intermediate noisy states lead to high rewards in the future, into the standard inference procedure of pre-trained diffusion models. Notably, our approach avoids fine-tuning generative models and eliminates the need to construct differentiable models. This enables us to (1) directly utilize non-differentiable features/reward feedback, commonly used in many scientific domains, and (2) apply our method to recent discrete diffusion models in a principled way. Finally, we demonstrate the effectiveness of our algorithm across several domains, including image generation, molecule generation, and DNA/RNA sequence generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models", "Reinforcement learning", "AI for science" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c64d8504b9afdf5f8263f4ac994b47c05d99e298.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9945e26857e3d3afdb7b77aa6d8ab7287d8283fb.pdf" }, "title": { "value": "Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2fojNANZSv
Mixture of In-Context Prompters for Tabular PFNs
main
Active
Prior-Fitted Networks;Tabular Learning;Sparse Mixture of Experts.
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;6
2;3;2
3;3;3
1;3;3
3;3;3
4.666667
2.333333
3
2.333333
3
0.188982
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I could not find an ablation study on the number of clusters K vs model performance, have you done these experiments?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written and proposes a justified solution to address the context length issue for in-context learning models such as TabPFN. Authors conduct extensive experiments on many real world dataset to demonstrate the effectiveness of the proposed approach and compare with leading tree-based and deep learning tabular methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a mixture of experts approach for in-context learning on tabular data. Each expert in the mixture is a K-means cluster and the model routes the input instance to the closest cluster. This addresses the problem of context size in large datasets and provides a better selection of prompt instance than random sampling. To adapt the model to this type of routing authors also propose fine tuning by selecting a cluster of each training instance and maximizing the likelihood." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There is a very related previous work \"Retrieval & Fine-Tuning for In-Context Tabular Models\" by Thomas et al, which proposes both nearest neighbor retrieval to improve the prompt and fine tuning with this approach to adapt the model to the target distribution. I think the authors have to compare with this work and highlight what is novel in MixturePFN." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses.\n\nCan categorical features simply be encoded as ordinal features? Is that not implying false relationships between unordered elements?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The MICP strategy effectively reduces memory usage, allowing the model to handle larger datasets compared to existing TabPFN\n- CAPFN bootstrapping and finetuning approach appears to be an effective way to mitigate distribution shift ICL for tabular data\n- Extensive benchmarks against 19 strong baselines show good performance in both mean rank and Condorcet ranking across diverse datasets" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the MixturePFN framework, which extends TabPFN for large tabular datasets by addressing the performance and scalability limitations of the number of table rows. The authors propose:\n1. Mixture of In-Context Prompters (MICP), which optimizes inference by using a sparse mixture of experts to route test samples to specific \"prompters\" that create context-specific prompts to separate large training datasets into manageable clusters. \n2. Context-Aware Finetuning (CAPFN), which addresses distributional shift issues by specializing each prompter on its assigned\ncontext via parameter efficient finetuning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While MIXTUREPFN improves dataset scalability, it still struggles with feature-rich datasets, potentially limiting its applicability in domains with high-dimensional data, such as patient healthcare data. I realize the authors leave this to future work, but this is an area where simple XGBoost performs quite well, and I would be curious about their thoughts on tackling this issue.\n\n- MICP's reliance on K-Means clustering to segment data into meaningful clusters as the quality of clusters can vary significantly based on dataset properties / distance metric chosen. Poor clustering could lead to suboptimal routing and ineffective prompts for certain test samples. I'd be curious to see some ablations in this area.\n\n- The CAPFN bootstrapping method might introduce biases or overfitting if the sampled subsets are not representative of the entire dataset. Bootstrapping from small clusters may fail to capture enough diversity, especially in cases with imbalanced classes or rare features. I'd be also curious to see how this method works with highly imbalanced labels e.g. 1\\% positive." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you provide a comparison with LoCalPFN [1]? If not possible, I think the comparison should be done using k neighbor samples rather than random sampling, at least for TabPFN*.\n\n2. 
I see that the authors say in the limitations section that they didn't evaluate on a dataset with a million samples, but I'm somewhat curious about the effectiveness of MixturePFN on a dataset with a million samples, since the paper is aimed at the scale-up aspect.\n\n3. I'm also curious about the effectiveness of MixturePFN on datasets with hundreds or thousands of features, which is very practical in the real world." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of blending Mixture of Experts into TabPFN seems novel.\n\n2. The effectiveness of MixturePFN is well evaluated on well-established benchmarks against a variety of baseline methods.\n\n3. The writing is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose MixturePFN, an extension of Sparse Mixture of Experts to TabPFN to alleviate the context size limitations of the existing TabPFN. On the TabZilla benchmark, MixturePFN outperforms state-of-the-art tabular prediction models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest weakness, I think, is that the paper is missing a comparison with LoCalPFN [1]. Since LoCalPFN also tries to make TabPFN effective even on datasets with many shots, I think it should be mentioned in the paper.\n\n----\n[1] Thomas et al., Retrieval & Fine-Tuning for In-Context Tabular Models, NeurIPS 2024" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a mixture of prompters technique for tabular in-context learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mixture,\ntitle={Mixture of In-Context Prompters for Tabular {PFN}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2fojNANZSv},\nnote={under review}\n}" }, "abstract": { "value": "Recent benchmarks find In-Context Learning (ICL) outperforms both deep learning and tree-based algorithms on small tabular datasets. However, on larger datasets, ICL for tabular learning suffers in both efficiency and effectiveness. In terms of efficiency, transformers incur linear space and quadratic time complexity w.r.t. context size. In terms of effectiveness, contexts at inference encounter distribution shift compared to contexts from pretraining. We propose MixturePFN, which extends Sparse Mixture of Experts to the state-of-the-art ICL model for tabular learning. Specifically, MixturePFN finetunes a specialized ICL expert on each cluster of tabular data and routes new test samples to appropriate experts at inference. MixturePFN supports constant-size contexts by splitting large training datasets into more manageable clusters. MixturePFN addresses distribution shift by finetuning an expert on each training dataset cluster via bootstrapping. Extensive experimental results show MixturePFN outperforms 19 baselines both in mean rank and as the Condorcet winner across 36 diverse tabular datasets under both accuracy and F1 score with statistical significance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Prior-Fitted Networks", "Tabular Learning", "Sparse Mixture of Experts." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b8bc18fbc85f957a3ad23209a3f6725d57fff7ae.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Mixture of In-Context Prompters for Tabular PFNs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2gTEW29qsM
Masked Generative Priors Improve World Models Sequence Modelling Capabilities
main
Active
World Modeling;Model based RL
reinforcement learning
3;5;5;5
5;4;4;4
2;2;2;2
2;2;2;2
2;3;2;3
4.5
4.25
2
2
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors elaborate on why GIT-STORM occasionally does not surpass STORM and the conditions where improvements are only minor? Understanding this would clarify the contextual efficacy of the MaskGIT prior.\n- Regarding the reported state-of-the-art claim, Table 6 suggests that DrQ-v2 outperforms GIT-STORM in some highlighted environments. Could the authors comment why they claim GIT-STORM provides SOTA results on these? It is not the case, right?\n- What is the rationale for improving STORM over directly utilizing DreamerV3, which appears to perform better in many scenarios? Or put differently: why would one care to improve STORM with the proposed modifications when there is DreamerV3 and I could just use it or improve over DreamerV3? \n\nI am open to increase my score once there is clarity on these questions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The empirical evaluation spans discrete and continuous action benchmarks, providing a robust assessment of GIT-STORM’s performance. The reported results demonstrate that GIT-STORM not only improves sample efficiency in RL tasks but also enhances video prediction quality, particularly in the Atari 100k benchmark, aligning well with the study's objectives. Moreover, the paper is well-written with a clear structure, providing a good experience as a reader. Extending the transformer-based world models to continuous action tasks also poses a sufficient novelty and broadens the utility of these models in RL and video prediction applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, Masked Generative Priors Improve World Models Sequence Modelling Capabilities, introduces GIT-STORM, an extension of the STORM architecture, incorporating MaskGIT as a dynamics prior to enhance sequence modeling in world models. The authors address two main gaps in previous research: the limitation of transformer-based world models in continuous action environments and the inadequacies of prior methods, like STORM, in capturing effective state representations. Through experiments on Atari 100k (discrete) and DeepMind Control Suite (continuous), GIT-STORM demonstrates improvements in RL and video prediction, suggesting that Masked Generative Priors could be a powerful inductive bias for world models, supporting broader applicability across diverse RL tasks and environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It remains unclear why GIT-STORM does not consistently outperform STORM across all benchmarks or why it fails to close the performance gap with DreamerV3 in environments beyond Atari 100k. 
The paper does not fully explain the conditions under which GIT-STORM's improvements are more marginal, suggesting a need for clearer insights into the impact of individual architectural components.\n\nThe paper claims state-of-the-art results for GIT-STORM on select environments, yet Table 6 seems to indicate that DrQ-v2 outperforms GIT-STORM on two environments (where the authors claim they are better?). Clarifying the conditions under which GIT-STORM achieves these results or adjusting the claim would help ensure consistency and accuracy in presenting the model's achievements.\n\nThe proposed approach for handling continuous action spaces is promising, yet lacks a comprehensive empirical analysis. Additional studies on more diverse continuous control tasks could provide stronger validation of the state mixer function's effectiveness and the broader applicability of the model in continuous settings. Most importantly, the modifications from STORM to GIT-STORM are extensive, involving MaskGIT, the state mixer, policy adjustments from DreamerV3, and an observation module from STORM. The compounded modifications make it difficult to discern the exact contribution of each component to the reported performance improvements. A more focused ablation study is required to isolate the impact of each modification." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Results on the Freeway task have very high variance according to Figure 10. In how many of the five runs does GIT-STORM actually achieve non-zero performance? \n- The most challenging aspect of learning the Freeway task is obtaining the first successful trajectory, which I believe is more related to the exploration strategy than to state predictions, given the sparse rewards. How does GIT-STORM assist the agent in exploring more efficiently? Is this strategy stable, or are the successful trajectories obtained by random seeds?\n- Why would the pendulum swingup task fail for both STORM and GIT-STORM? DreamerV2, DreamerV3 and TransDreamer can learn this task fairly easily.\n- The experiment results in Table 5 and Figure 10 appear inconsistent. For instance, the Gopher score reported in Table 5 is 8562, but the last point in Figure 10 shows a performance of around 2500. Do these two results use different metrics?\n- Could you add the learning curves of STORM or DreamerV3 to Figure 10 for a better comparison, considering that you have reproduced these results?"
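Since several of the comments on this submission turn on what the MaskGIT prior actually does when producing $\\hat{z}_{t+1}$, a minimal sketch of iterative mask-predict decoding may be useful context (my own simplification; `predict_probs` and all other names are hypothetical, not the authors' code):

```python
import numpy as np

def mask_predict_decode(predict_probs, seq_len, vocab_size, steps=8):
    """Start fully masked; at each step commit the highest-confidence
    token predictions and keep the rest masked for re-prediction."""
    MASK = vocab_size                       # extra id for the mask token
    tokens = np.full(seq_len, MASK)
    for step in range(1, steps + 1):
        probs = predict_probs(tokens)       # (seq_len, vocab_size) softmax outputs
        pred = probs.argmax(-1)
        conf = probs.max(-1)
        conf[tokens != MASK] = np.inf       # committed tokens stay fixed
        n_keep = int(np.ceil(seq_len * step / steps))
        keep = np.argsort(-conf)[:n_keep]   # most confident positions
        new_tokens = np.full(seq_len, MASK)
        new_tokens[keep] = np.where(tokens[keep] != MASK,
                                    tokens[keep], pred[keep])
        tokens = new_tokens
    return tokens
```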
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation of incorporating a MaskGIT prior into the STORM architecture is clear.\n- The proposed method is straightforward and easy to reproduce.\n- MaskGIT can effectively improve the video prediction quality of STORM, indicating applicability of GIT-STORM to more complicated tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces GIT-STORM, which incorporates three modifications to the base algorithm STORM: a MaskGIT prior that replaces the MLP dynamics head, a draft-and-revise decoding scheme for enhanced consistency, and a state mixer for continuous action environments. Experimental results demonstrate that GIT-STORM surpasses STORM on both Atari 100K and DMC benchmarks. Video prediction results indicate that this improvement is attributed to more accurate representations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper contains a misstatement in its contributions. The authors claim that they \"apply transformer-based world models to continuous action environments for the first time\". This claim is inaccurate, as TransDreamer[1] can also be applied to continuous action environments. The authors are evidently aware of this paper, given that they have cited it in this work.\n- The state-mixer design is not properly addressed. If the authors claim this part of their contribution, they should either elaborate on the design, or provide empirical results to show the superiority of this method. Based on the overlapping tasks, TransDreamer appears to have better performance than GIT-STORM+state-mixer on the continuous control benchmark DMC.\n- The experimental results in Atari 100K only demonstrate marginal improvement. The gain over STORM seems to primarily originate from the gopher task alone, which contains inconsistent results, as detailed in the questions section.\n\n[1] Chen et al. TransDreamer: Reinforcement Learning with Transformer World Models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. On lines 307-309, I think STORM uses KV caching in both the conditioning phase and the imagination phase, see [here](https://github.com/weipu-zhang/STORM/blob/e0b3fd44320d7e213ec905c673ad3f35b61b89f4/sub_models/world_models.py#L363). The `predict_next()` uses `forward_with_kv_cache()` for decoding.\n\n2. Missing comma on line 214?\n\n3. What's new in the proposed state mixer compared to the STORM's action mixer?\n\n4. `Freeway` is a hard exploration environment, as the agent has to repeat the `up` operation many times to get a first reward, which is a rare event for a random policy. 
Without the first reward, the value space is all zero and the policy would be further optimized toward a uniform random policy. STORM, IRIS, and DIAMOND have different tricks that can mitigate such an issue. But what is the underlying reason for GIT-STORM to reach a non-zero result? I think this is not related to the improved decoding or world modelling quality, since DreamerV3 and STORM (w/o traj) could also produce a nearly perfect reconstruction and prediction on `Freeway`.\n\n5. For the `Quadruped Run` in Figure 6, I wonder if it's too small (compared to Figure 4 in [DreamerV3](https://arxiv.org/pdf/2301.04104)).\n\n6. Lines 529-530, \"Replacing...\", the order is reversed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper clearly distinguishes itself from previous work, with good comparison and illustration.\n\n2. The one-hot categorical latent is widely used in recent model-based RL, yet the research on it is insufficient. This paper provides a novel view of it.\n\n3. This paper bridges the gap left by the lack of evaluation of transformer-based world models on continuous control tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to replace the MLP head with a MaskGIT prior in STORM to achieve higher-quality latent generation, and therefore better performance on the Atari100k benchmark.\nThis paper also bridges the gap left by the lack of evaluation of transformer-based world models on continuous control tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation and effect of using a MaskGIT head in world models are unclear.\nIs there any evidence that the world models would have hallucinations, and how could a MaskGIT head mitigate such issues?\nHow can one distinguish whether the improved performance (in both RL and FVD) comes from more parameters or from the MaskGIT prior?\n\n There should be some further investigation into the mechanism of the MaskGIT head. For example:\n\n (a) What's the difference between the latent variables (or distributions) generated with the MLP head and the MaskGIT head?\n\n (b) This MaskGIT head looks like a skip/residual connection from $z_{t}$ to $\\hat{z}_{t+1}$; would this reduce the KL divergence in training or imagination?\n\n (c) These are sample questions. An investigation like this would improve the soundness and contribution of this paper.\n\n2. Section 2.1 could be more concise, as its contents are not quite related to the key contributions and are frequently repeated in each of the model-based RL papers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "There are also some minor questions:\n\n1. In Line 309, why can the KV cache improve sample efficiency?
Do you mean computational efficiency?\n2. To my knowledge, perplexity is a metric for which lower values are better. However, in Table 3, higher perplexity is marked as better.\n3. In Figure 6, the quadruped agents are too small in the images. This work seems to have used an unusual camera setting for these tasks.\n\nIf the authors address my concerns well, I am willing to raise my rating." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "To my knowledge, MaskGIT models, with their strong expressiveness, have not yet been utilized for world models in the MBRL community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes GIT-STORM, which utilizes a MaskGIT model instead of an MLP for the prior head in world models (based on STORM). It also makes minor modifications (a state mixer) to support continuous actions. Experiments are done on the Atari100k and DMC benchmarks, considering both policy learning and video prediction performance. GIT-STORM outperforms its base method STORM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The illustration and descriptions of the model are confusing. Can the authors provide more insight into their specific designs?\n - In Figure 1 (left), it seems that GIT-STORM uses masked $z_t$ as inputs for reconstructing $z_{t+1}$. This is strange since, in the original MaskGIT, we mask and reconstruct the masked target tokens. Similarly, I think it is more reasonable to mask $z_{t+1}$ as inputs. \n - In Figure 1 (left), there is no $\\xi_t$ but only $\\eta_t$. \n - Also, the dot product seems to be a commonly used trick that ties the weights of the embedding and the linear layer before the Softmax. If so, relevant literature should be cited.\n - The Draft-and-Revise decoding scheme, if not proposed by this work, should be moved into a preliminary section. \n2. The contribution of supporting continuous actions is overclaimed (as 'for the first time'). In fact, concatenating or summating continuous inputs with hidden states is a straightforward approach already used in current VLA models (e.g., OpenVLA for inputting continuous visual representations) and action-conditioned video prediction models (e.g., iVideoGPT for inputting continuous actions).\n3. On DMC, GIT-STORM is outperformed by its base method, DreamerV3." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024masked,\ntitle={Masked Generative Priors Improve World Models Sequence Modelling Capabilities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2gTEW29qsM},\nnote={under review}\n}" }, "abstract": { "value": "Deep Reinforcement Learning (RL) has become the leading approach for creating artificial agents in complex environments. Model-based approaches, which are RL methods with world models that predict environment dynamics, are among the most promising directions for improving data efficiency, forming a critical step toward bridging the gap between research and real-world deployment.
In particular, world models enhance sample efficiency by learning in imagination, which involves training a generative sequence model of the environment in a self-supervised manner.\nRecently, Masked Generative Modelling has emerged as a more efficient and superior inductive bias for modelling and generating token sequences. Building on the Efficient Stochastic Transformer-based World Models (STORM) architecture, we replace the traditional MLP prior with a Masked Generative Prior (e.g., MaskGIT Prior) and introduce GIT-STORM.\nWe evaluate our model on two downstream tasks: reinforcement learning and video prediction. GIT-STORM demonstrates substantial performance gains in RL tasks on the Atari 100k benchmark.\nMoreover, we apply Transformer-based World Models to continuous action environments for the first time, addressing a significant gap in prior research. To achieve this, we employ a state mixer function that integrates latent state representations with actions, enabling our model to handle continuous control tasks. We validate this approach through qualitative and quantitative analyses on the DeepMind Control Suite, showcasing the effectiveness of Transformer-based World Models in this new domain.\nOur results highlight the versatility and efficacy of the MaskGIT dynamics prior, paving the way for more accurate world models and effective RL policies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "World Modeling", "Model based RL" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/eebe05e57972961f46f1ee28caf39e904ad203f4.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Masked Generative Priors Improve World Models Sequence Modelling Capabilities" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2gW8lTRh9m
Continual Memorization of Factoids in Large Language Models
main
Active
Continual Learning;Large Language Model;Memorization
foundation or frontier models, including LLMs
3;5;5;8
4;4;4;4
3;2;3;4
2;2;3;3
3;3;3;4
5.25
4
3
2.5
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Table 1, the degradation is more severe for Stage 2 is also a factoid dataset. Do you have any explanation? Also, there is big drop when using GSK8k. It will be very insightful to understand the interplays of the datasets.\n\nFor the Replay approach, what if we use a ratio = 1.0?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The papers present problems and solution pretty clear and easy to follows.\nThe authors proposed a simple yet effect way to reduce the interference among the different fine-tuning datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studied the forgetting issues when finetuning a LLM in multi-stage datasets. They focus the setting of continual memorizing of factoid facts - Stage 1 is factoid fact datasets and Stage 2 finetune with fact/non-fact datasets. The authors find non-fact datasets will cause smaller drop. Based on this intuition, the authors proposed a data mixing strategy (introducing some unrelated datasets) in multi-stage fine-tuning to reduce the forgetting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are some other method to reduce the interference among the datasets of stage 1 and stage 2. For example, the method needs to compare with another baseline, i.e. \"mixing of Data A and Data B\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- The authors show that their method causes the model to store factoids in more layers of the model, which presumably means that the factoids overwrite previous data in these shallower layers. It would have been interesting to investigate whether this results in any significant degradation of other model capabilities (e.g. fluency) compared to the basic two-stage training process. I understand that this paper specifically focuses on factoid memorization and contains many experiments already, but this could be mentioned as future work.\n\n- Another interesting experiment would be to vary either the model or the dataset's size, to evaluate the link between model capacity and the efficacy of REMIX/replay techniques. 
Do the authors have any insight or early intuition regarding this?\n\n- Have the authors considered/tried combining REMIX with classic replay techniques? This seems like a natural next step to know whether the use of both methods leads to even better results." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is very well-written and clear. The section order feels natural, and the reasoning and intuition for each idea are given clearly. The tables and charts summarize the data well, and while informative, the text flows well and is not overly complicated. As a result, this is a very pleasant paper to read. In addition, the references are recent and seem relevant.\n\n- The paper touches upon the issue of catastrophic forgetting of factoids in LLMs, which is a relevant and unsolved issue, especially in the current context where many pre-trained LLMs showcase good reasoning capabilities but cannot easily be updated afterward to store novel world knowledge.\n\n- The paper contains a large number of experiments that give clear motivation for introducing REMIX, and then show its efficacy over many settings.\n\n- The ideas found in this work are not revolutionary per se (which is not to say that they lack originality; see my next point), but the execution is straightforward and good. The authors carefully checked for important details such as dataset overlap.\n\n- The idea of mixing generic/random data with the training dataset is quite creative and original. Despite being counterintuitive, the authors justify this idea mathematically.\n\nAs a result, I recommend this paper for publication with no major point of criticism." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work focuses on the continual memorization setting in LLMs, a subsetting of continual learning in which a model is first fine-tuned on a factoid dataset, then successively fine-tuned on other datasets (multiple training stages) and must retain knowledge learned during the first stage. The authors first demonstrate that catastrophic forgetting occurs in a 2-stage training process, especially if the dataset from the second stage is a factoid one, and that usual replay methods used in continual learning do not satisfactorily mitigate the issue.\n\nThe authors then introduce REMIX, a strategy for preventing forgetting in the multi-stage learning process. In this strategy, additional training data is added to one or both of the training stages. This data takes the form of either generic or random data. The authors show that this new method produces significantly better results than the basic training process on LLaMa-3 and Mistral-7B, which they show to be linked to a more diversified and larger set of layers in which factoids are stored.\n\nFinally, the authors perform a large number of validation experiments, proving that this method is effective with different mixing datasets, and investigate the effect of several other hyperparameters such as the mixing sequence length, the mixing ratio and the number of training stages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Many of the points of criticism I had while reading this paper were answered later on, or in the appendices.
The other points that I have mainly consist of questions (see section below).\n\n- In section 4.2, the word \"Figure\" is used several times instead of \"Table\".\n- Section 3.2 (on replay) is lacking detail in comparison to other sections, especially as it justifies the use of REMIX compared to other replay methods. In particular, I could not find which of the two LLMs was used to measure the effect of replay methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Suggestions\n- I would like to suggest the authors the strengthen the motivation of needing to memorize long-tail knowledge through the form of factoids, but showing that it transfers the knowledge itself to downstream NLP tasks that require integrating those long-tail information. Simply getting a high score in the factoid task itself is insufficient to motivate the problem formulation.\n- I would suggest that the authors include more baselines from the continual learning literature that can mitigate the forgetting of previously learned knowledge." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The paper draws attention to the issue of continual memorization of long-tail information through factoid memorization.\nQuality: The experiments are conducted rigorously, covering a range of datasets and demonstrating REMIX’s impact across several configurations.\nClarity: Explanations are mostly clear, and the figures help illustrate key points. \nSignificance: The method has some practical relevance for fact retention." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the issue of continual memorization of factoids in large language models (LLMs), focusing on retaining specific, rare knowledge (factoids) as the model undergoes further training on unrelated datasets. Typical replay techniques fail to prevent forgetting of such factoids in LLMs, leading the authors to propose REMIX, a data-mixing approach that interleaves random or generic data during training stages to reduce forgetting. The paper demonstrates that REMIX helps preserve factoid knowledge across various datasets and training scenarios, with results analyzed using tools like Logit Lens." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unclear Problem Motivation: The paper does not convincingly explain why memorizing long-tail knowledge in the form of factoids is important in practical applications. Without a clear motivation, the relevance of the problem formulation is uncertain, which diminishes the contribution’s significance. If we are only concerned about factoids, why use LLMs in the first place? Why not just use traditional knowledge-based systems? 
The authors should show how memorizing factoids leads to downstream applications, such as utilizing the information from the factoids on tasks that specifically require LLMs.\nLack of Novelty: REMIX lacks sufficient originality; the idea of mixing generic data into training is not groundbreaking and does not specifically address the unique challenges of factoid memorization. \nLack of Baselines: The authors only explore experience replay as a baseline approach, whereas there exist other methods in the literature that can mitigate forgetting during continued pretraining (parameter expansion-based methods, regularization methods, etc.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1 Comparison with Other Forgetting Mitigation Techniques:\nHow does REMIX compare with other established forgetting mitigation methods across different tasks? A systematic comparison would strengthen the case for REMIX’s advantages.\n2 Exploration of Bidirectional Relational Memory with Atomic Fact Datasets:\nThe datasets used appear to consist mainly of isolated factoids or \"atomic\" facts, without directly exploring bidirectional or inverse relational memory. For example, if the model learns that \"Bob’s dad is Sam,\" it would be valuable to evaluate whether the model can infer the inverse relationship, such as \"Who is Sam's son?\" This type of associative memory is essential for comprehensive fact retention, as it reflects a more integrated understanding of relationships. Could the authors clarify whether such tests were conducted, or suggest if REMIX could potentially extend to this type of bidirectional memorization?\n3 Why Forgetting is More Pronounced with Factoid Datasets:\nThe paper reports that models experience significant forgetting when fine-tuned on factoid datasets in the second stage, but not on non-factoid datasets. Could the authors elaborate on why forgetting is more pronounced with factoids compared to non-factoids, as well as any observed differences in how REMIX performs on these types? This could provide further insight into the underlying mechanisms of forgetting and the strengths of REMIX.\n4 Rationale Behind Data Mixing Types:\nThe paper employs various data sources (e.g., Knowledge Pile, random word sequences) as mixed data in REMIX. However, the choice of these sources appears empirical, lacking theoretical justification or detailed explanation. It remains unclear why certain data sources yield better performance on specific tasks, and this potential variation across tasks is not fully explored. There is no clear guideline for selecting mixed data types, nor an analysis of how different types of mixed data impact task performance. 
A more thorough theoretical or empirical examination of these differences could enhance understanding of REMIX’s applicability and effectiveness across various contexts.\n5 Impact of REMIX on New Task Performance:\nThe paper focuses on preventing forgetting in prior tasks, but it does not discuss the potential impact of REMIX on performance for new tasks introduced in later stages. While REMIX seems effective at preserving knowledge from earlier stages, it remains unclear whether this approach might inadvertently reduce performance on new tasks due to constraints placed on the model’s capacity or flexibility. An analysis of how REMIX affects the model's performance on new tasks would provide a more balanced understanding of its effectiveness in continual learning contexts.\n6 Effectiveness of Random vs. Generic Text Mixing:\nThe paper explores both random word sequence mixing and generic pretraining text mixing in REMIX. However, it is not entirely clear whether these two approaches yield similar or differing effects on knowledge retention. Could the authors provide more details on any observed differences in effectiveness between random and generic data mixing? Understanding how each type impacts forgetting could offer valuable insights into the dynamics of memory retention in large language models.\n7 Combined Mixing Effectiveness:\nThe results indicate that combining random word sequence mixing with generic data mixing produces the best outcomes, but it is not fully explained why this combination is most effective. Is there a theoretical or empirical rationale for why mixing both types of data provides better retention compared to using either one alone? Additional explanation of this combined effect would enhance understanding of REMIX’s underlying mechanisms and may help guide future applications.\n8 100% Accuracy in Table 1:\nIn Table 1, it is stated that all Stage 1 datasets are trained to 100% accuracy before Stage 2 training. Could the authors clarify how this 100% accuracy is achieved and guaranteed across different datasets? Specifically, were there particular training techniques or criteria used to ensure full memorization of Stage 1 data? Additional details on this process would help in understanding the baseline setup for evaluating forgetting.\n9 Suitability Across Task Types:\nHas REMIX been tested on other types of tasks, such as generative or dialogue-based tasks? Additional testing on these tasks would clarify REMIX’s versatility and applicability beyond factoid retention." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Novel Approach to Memory Retention: REMIX introduces a unique approach to mitigate forgetting by mixing random and generic data during training, achieving substantial performance improvement compared to replay-based methods.\nThorough Experimental Analysis: The authors conduct extensive experiments across multiple datasets, providing empirical evidence of REMIX’s effectiveness. They also analyze layer-specific behavior, offering insights into how REMIX modifies the model’s memory dynamics.\nGeneralizable Insight for Continual Learning: By demonstrating the limitations of replay techniques and proposing alternative strategies, this paper offers valuable insights for both continual memory retention and general continual learning in LLMs." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the problem of forgetting in large language models (LLMs) during continual learning, particularly when training on a small set of long-tail factoids (subject-relation-object triples). The authors identify two primary challenges in retaining these long-tail facts over successive training stages: the limitations of standard replay techniques and the interference from training on unrelated datasets. To address these challenges, the authors propose REMIX (Random and Generic Data Mixing), which combines unrelated, generic data with the factoid data to prevent forgetting. Through comprehensive experiments, REMIX is shown to outperform replay-based methods and recover performance from severe forgetting. The authors further analyze how REMIX influences the learning process, noting that it shifts the storage of factoids to earlier layers and diversifies the layers used for storing these facts, thus reducing interference from later training stages." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 Lack of Comparison with Other Forgetting Mitigation Techniques:\nAlthough the authors discuss the limitations of replay-based methods, the paper lacks a systematic comparison with other common forgetting mitigation techniques, such as Elastic Weight Consolidation (EWC) or Knowledge Distillation. For instance, EWC is frequently used in continual learning to reduce interference by regularizing key weights, while Knowledge Distillation selectively retains critical information. Comparing REMIX with these methods would help clarify REMIX’s unique advantages and performance under similar conditions.\n2 Synthetic and Specific Dataset Selection:\nThe datasets used in this paper, such as Key-Value Recall and PopQA, are primarily synthetic and consist of isolated factoids, which may not fully reflect the complexity of real-world data. For example, in practical scenarios, knowledge is often presented in overlapping or nested forms (e.g., “The author of Hamlet is Shakespeare” and “Shakespeare wrote Hamlet”) rather than as isolated facts. Testing REMIX on more commonly used datasets, such as Wikipedia or open-domain QA datasets (e.g., Natural Questions), could provide a more realistic evaluation of its effectiveness and generalizability.\n3 Unclear Justification for Types of Data Mixing:\nThe paper employs both random word sequences and knowledge-rich text (e.g., Knowledge Pile) as mixed data to prevent forgetting, but it does not provide a clear explanation of why these two disparate types would produce similar effects. For example, random word sequences contain no factual content, while Knowledge Pile includes a substantial amount of knowledge and contextual information. The authors could further analyze why both random and knowledge-rich data help prevent forgetting or test the specific impacts of each type in different scenarios.\n4 Impact on Performance in New Tasks:\nWhile REMIX performs well in retaining early-stage knowledge, the paper does not explore its impact on subsequent new tasks. For instance, it would be useful to know whether REMIX might limit the model's ability to learn these new tasks when introduced for fine-tuning. 
Evaluating REMIX’s impact on new tasks could provide insights into potential trade-offs between memory retention and generalization to new tasks.\n5 Limited Evaluation on Extended Stages:\nThe experiments primarily focus on two-stage continual learning, with limited testing of multi-stage scenarios. In real-world applications, models may undergo multiple updates, such as continual fine-tuning in legal or medical domains. Testing REMIX in a three-stage or four-stage setting could provide better insight into its stability and effectiveness over longer training cycles.\n6 Resource and Scalability Concerns:\nREMIX relies on incorporating additional mixed data during training, which may increase computational costs, especially for large models such as Llama-3-8B. Expanding this method to resource-intensive domains like finance or healthcare could present challenges. If the authors could discuss the trade-offs between added data usage and computational demands or provide a rough estimate of the resources required to implement REMIX in a real-world setting, it would help assess its feasibility and scalability in high-resource environments." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In continual memorization, the LLM needs to memorize a small set of facts and not forget after further training. Modern LLMs still struggle. We found an effective mitigation strategy by mixing generic data or random word sequences during training." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024continual,\ntitle={Continual Memorization of Factoids in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2gW8lTRh9m},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) can absorb a massive amount of knowledge through pretraining, but pretraining is inefficient for acquiring long-tailed or specialized facts. Therefore, fine-tuning on specialized or new knowledge that reflects changes in the world has become popular, though it risks disrupting the model’s original capabilities. We study this fragility in the context of continual memorization, where the model is trained on a small set of long-tail factoids (subject-relation-object associations) and must retain these factoids after multiple stages of subsequent training on other datasets. Continual memorization focuses on the specific challenge of retaining long-tail factoids, whereas general continual learning aims to maintain the LLM’s capabilities across a wide range of generic tasks (e.g., reasoning, commonsense knowledge). Through extensive experiments, we show that LLMs suffer from forgetting across a wide range of subsequent tasks, and simple replay techniques do not fully prevent forgetting, especially when the factoid datasets are trained in the later stages. We posit that there are two ways to alleviate forgetting: 1) protect the memorization process as the model learns the factoids, or 2) reduce interference from training in later stages. With this insight, we develop an effective mitigation strategy: REMIX (Random and Generic Data Mixing). REMIX prevents forgetting by mixing generic data sampled from pretraining corpora or even randomly generated word sequences during each stage, despite being unrelated to the memorized factoids in the first stage. 
REMIX can recover performance from severe forgetting, often outperforming replay-based methods that have access to the factoids from the first stage. We then analyze how REMIX alters the learning process and find that successful forgetting prevention is associated with a pattern: the model stores factoids in earlier layers than usual and diversifies the set of layers that store these factoids. The efficacy of REMIX invites further investigation into the underlying dynamics of memorization and forgetting, opening exciting possibilities for future research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Continual Learning", "Large Language Model", "Memorization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0cdd08e60bf62b2cae91666aae4587bc11f196f5.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Continual Memorization of Factoids in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
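Several of the reviews above ask how REMIX's mixing ratio and choice of mixing data behave in practice. As a concrete reference, the sketch below shows one way example-level REMIX-style mixing could be implemented; the function names, the default ratio, the random/generic split, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of REMIX-style data mixing: interleave factoid training
# examples with random word sequences and generic corpus text at a fixed
# ratio. All knobs below are placeholders, not values from the paper.
import random

def random_word_sequence(vocab, length=32):
    """Build one random-word 'document' as a single string."""
    return " ".join(random.choices(vocab, k=length))

def remix_training_set(factoids, generic_docs, vocab,
                       mix_ratio=1.0, random_fraction=0.5, seq_len=32):
    """Return factoid examples interleaved with mixed-in data.

    mix_ratio:       mixed examples added per factoid example.
    random_fraction: share of mixed examples that are random word
                     sequences (the rest are sampled generic text).
    """
    n_mix = int(len(factoids) * mix_ratio)
    n_random = int(n_mix * random_fraction)
    mixed = [random_word_sequence(vocab, seq_len) for _ in range(n_random)]
    mixed += random.sample(generic_docs,
                           min(n_mix - n_random, len(generic_docs)))
    dataset = list(factoids) + mixed
    random.shuffle(dataset)  # interleave factoids with mixed-in data
    return dataset

if __name__ == "__main__":
    facts = ["Q: capital of France? A: Paris",
             "Q: author of Hamlet? A: Shakespeare"]
    generic = ["Gradient descent iteratively minimizes a loss function.",
               "The Nile flows northward toward the Mediterranean."]
    vocab = "the a of and to in is was for on".split()
    for example in remix_training_set(facts, generic, vocab):
        print(example)
```

A replay baseline of the kind the reviews request would differ only in *what* is mixed in (stage-1 factoids rather than unrelated text), so a sketch like this also makes the replay-versus-REMIX comparison concrete.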
2h1siDrSMl
RoRA-VLM: Robust Retrieval-Augmented Vision Language Models
main
Active
retrieval-augmented generation;vision language model
applications to computer vision, audio, language, and other modalities
3;5;6
4;3;3
3;3;3
1;2;3
2;2;3
4.666667
3.333333
3
2
2.333333
-0.944911
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In line 257 - do you mean top-2 (instead of top-(k-1))?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written and easy to follow\n- The authors clearly state the motivation for the proposed method and its necessity.\n- RORA-VLM introduces a unique two-stage retrieval approach, effectively bridging the gap between visual and textual information for more accurate knowledge retrieval.\n- The paper tackles the common issue of irrelevant or noisy data in retrieval-based methods by implementing noise resilience strategy\n- The paper address a clearly practical application that might be useful for the community and the industry." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces RORA-VLM, a retrieval-augmented framework designed to enhance Vision-Language Models (VLMs) by addressing two main challenges: managing multimodal query discrepancies and filtering out irrelevant, noisy retrievals. RORA-VLM employs a two-stage retrieval process: 1) Image-Anchored Entity Retrieval: This stage retrieves visually similar images based on the query image, anchoring the retrieval with associated entity information 2) Query-Expanded Text Retrieval: Using entity names from the first stage, the method expands the query to retrieve additional textual knowledge from sources like Google Search." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Method:\n\n- Section 3.2: The authors describe in details two stages. For my understanding, stage-1 is just formulation of the K-NN of the query image in the WIT images (within CLIP latent space). This is a well-known concept, especially in this line of work. I think this is well-detailed stage but it should be on the appendix, while the main paper should contain a brief description of the stage.\n- Line 270: “Similarly, the image I is encoded into a sequence of visual embeddings…” - this is not clear. CLIP encodes an image/text intro a shared embeddings space of dimension d. How do you encode the image patches (n) to the same dimension? Do you feed-forward each patch, separately, to the CLIP model? Do you use the N internal CLIP features for each patch? If so, are you sure that their dimension is d, before the last projection layer? Do you project them with the last visual projection layer as the pooled [CLS] token projected? Please elaborate more on this procedure.\n\nSection 5 currently combines results, ablation study, and discussion, which affects the clarity and flow of these findings. 
Separating these into distinct sections—such as “Results,” “Ablation Study,” and “Discussion”—would make it easier for readers to follow each component and understand the contributions more clearly. Additionally, crucial details and experiments appear to be missing, and some existing experiments do not convincingly support the claims made. Below are specific areas where the section could be strengthened:\n\nEvaluation:\n\n- Main results: Lines 307-316 (Baselines): The authors list several MLLM backbones for the QA task, which is great. However, the baselines to compare to should be other RAG methods. If I understand correctly, only RORA-VLM and Wiki-LLaVA* are using Retrieval Augmentations. If so, how is it comparable to other baselines that use zero-shot?\n- Building on the previous point, I do not fully understand the entire training setup: was RORA-VLM the only model that was tuned (lines 317-345)? If so, again, how is it comparable to other baselines? Please clarify these points.\n\nThere are not enough details about the evaluation protocols and datasets in the paper, and some comparisons are missing. For example, what was the training set of each baseline in Table 1? Did the authors fine-tune each baseline on the same dataset? Which one of them uses the proposed RAG method? What about other RAG methods and baselines?\n\nAblation Study:\n\n- Lines 365-367 state: “we use the widely adopted average pooling (kernel size of 2, stride of 2) to obtain the same number of visual tokens as our refinement approach”. What does “widely adopted average pooling” mean on N CLIP vectors? How does it relate to a kernel of size 2 and stride 2? Did you manipulate the input image/kernel of CLIP to get the same amount of CLIP vectors? The authors should elaborate on the experiment that was done here; it is unclear.\n- Lines 405-419: I am not convinced that this experiment proves that the model ignores “noise” in the retrieval samples. I would be more convinced by the following experiments, for example: showing that providing the model 1 relevant sample with 2 other randomly-sampled ones does not change the model’s answer, regardless of which 2 noise samples were chosen, or just providing the 1 relevant sample with no other samples. \n- Lines 420-430 describe Figure 4, which is supposed to show how the model ignores “noise” samples. However, it seems like the model pays attention to specific words that correlate with the question (e.g., row 1, “how wide…” attends to “height” and “width”). These examples do not show any robustness to “noise” retrieval as intended." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the above section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation for this work is clearly presented and easy to follow.\n2. 
Experimental results demonstrate its effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a multimodal version of RAG targeting multimodal large language models, such as LLaVA-1.5, for information-seeking VQA. To solve two challenges, the authors propose a 2-stage retrieval process with image-anchored textual query expansion and noise-resilient retrieval augmented generation. Experimental results highlight its effectiveness on OVEN and InfoSeek benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the Introduction section, providing a figure showing the process of the 2-stage retrieval method would make it easier to understand.\n2. Related work discussion and method novelty. How to incorporate multi-modal knowledge into models is not a new problem [1][2]. Some related work is proposed in other multi-modal tasks, such as knowledge-based VQA. Besides, adversarial training is also adopted in existing vision and language training, such as [3]. The authors are encouraged to discuss the existing work and compare the related ones with the proposed method.\n3. The correspondence between the ablation model variants in Table 2 and the proposed module is somewhat unclear. What about the ablation of the two-stage retrieval?\n4. Figure 2 lacks some of the details of the methodology. The authors are encouraged to refine it.\n\n[1] Gui, Liangke, et al. \"KAT: A Knowledge Augmented Transformer for Vision-and-Language.\" Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2022.\n[2] Lin, Weizhe, et al. \"Fine-grained late-interaction multi-modal retrieval for retrieval augmented visual question answering.\" Advances in Neural Information Processing Systems 36 (2023): 22820-22840.\n[3] Gan, Zhe, et al. \"Large-scale adversarial training for vision-and-language representation learning.\" Advances in Neural Information Processing Systems 33 (2020): 6616-6628." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1 **Differences in Approach and Motivation**: The two-stage retrieval approach proposed in the paper seems to be driven by image entities. Does a retrieval approach that combines images and queries better address pattern differences? Furthermore, how do the authors reconcile the conflicting motivations behind query-oriented visual token refinement (introducing noise for adversarial learning) and adversarial noise injection (focusing on denoising)? Would it be more consistent with adversarial learning principles if the former were used only during inference and the latter only during training?\n2 **Fairness of experimental comparisons**: In Table 1, do the authors plan to conduct more experiments to ensure that all models are evaluated on a level playing field?\n3 
**Lack of ablation studies**: Can the authors provide insights into the impact of these parameters (k, l, m) on model performance?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Novelty**: The authors address two core challenges in multimodal retrieval-augmented generation (RAG): the retrieval inaccuracies caused by modality discrepancies and the noise often present in retrieved content. To tackle these issues, they propose an innovative solution using a two-stage retrieval process that mitigates modality inconsistency, allowing the system to capture multimodal background information more comprehensively. Combined with an anti-noise strategy, this approach effectively suppresses irrelevant information while enhancing retrieval accuracy and overall performance in multimodal tasks.\n\n2. **Significance**: RORA-VLM offers a valuable method for improving VLMs, especially in knowledge-intensive domains, where retrieval-augmented tasks are often challenged by noise. This framework effectively addresses this key issue, making it particularly suitable for such applications.\n\n3. **Clarity of Presentation**: The paper is well-structured with a clear research motivation, providing thorough explanations of the methodology and experimental results. This clarity aids readers in understanding both the approach and its effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces RORA-VLM, a framework aimed at improving Vision-Language Models (VLMs) on knowledge-intensive tasks. The method addresses two challenges: (1) effectively retrieving relevant multimodal information given the inherent discrepancy between vision and language modalities, and (2) managing the noisy and extraneous information in retrieved knowledge. The paper’s contributions include a two-stage retrieval process with image-anchored textual-query expansion and a robust retrieval augmentation method that employs adversarial noise and visual token refinement. Extensive experiments demonstrate that RORA-VLM outperforms current models on benchmarks such as OVEN, InfoSeek, and Enc-VQA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Inconsistency in Method and Motivation**: The two-stage retrieval in the paper looks like an image entity-driven retrieval approach; would modal differences be better handled with image+query composite retrieval? Additionally, the motivations behind the designs of Query-oriented Visual Token Refinement and Adversarial Noise Injection for Robust Augmentation seem to conflict. The former introduces noise for adversarial learning, while the latter focuses on denoising. It might align better with the concept of adversarial learning if the former were applied solely during the inference phase and the latter exclusively during training.\n2. **Fairness of Experimental Comparisons**: In the main experiments presented in Table 1, the authors' method has undergone pre-training and fine-tuning on knowledge-intensive datasets, whereas many baseline models may not have been trained on such datasets. This raises questions about the fairness of the experimental comparisons.\n3. **Lack of Ablation Studies**: The paper lacks ablation studies on key parameters such as k, l, and m. 
Including these analyses would provide valuable insights into the impact of these parameters on the model's performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024roravlm,\ntitle={Ro{RA}-{VLM}: Robust Retrieval-Augmented Vision Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2h1siDrSMl},\nnote={under review}\n}" }, "abstract": { "value": "Though vision-language models (VLMs) have demonstrated impressive capabilities as general-purpose visual assistants, they still exhibit inferior performance on knowledge-intensive tasks such as information-seeking visual question answering, primarily due to the challenge of accurately encoding all the associations between visual objects and scenes to their corresponding entities and background knowledge. While retrieval augmentation methods offer an efficient way to integrate external knowledge, extending them to vision-language domain presents unique challenges in (1) precisely retrieving relevant information from external sources due to the inherent discrepancy within the multimodal queries, and (2) being resilient to the irrelevant, extraneous and noisy information contained in the retrieved multimodal knowledge snippets. In this work, we introduce RORA-VLM, a novel and robust retrieval augmentation framework specifically tailored for VLMs, with two key innovations: (1) a 2-stage retrieval process with Image-anchored Textual-query Expansion to synergistically combine the visual and textual information in the query and retrieve the most relevant multimodal knowledge snippets; and (2) a robust retrieval augmentation method that strengthens the resilience of VLMs against irrelevant information in the retrieved multimodal knowledge by injecting adversarial noises into the retrieval-augmented training process, and filters out extraneous visual information, such as unrelated entities presented in images, via a query-oriented visual token refinement strategy. We conduct extensive experiments to validate the effectiveness and robustness of our proposed methods on three widely adopted benchmark datasets: OVEN, InfoSeek and Enc-VQA. Our results demonstrate that with a minimal amount of training instance, RORA-VLM enables the LLaVA-v1.5 model to achieve significant performance improvement and constantly outperform state-of-the-art retrieval-augmented VLMs on all benchmarks while also exhibiting a novel zero-shot domain transfer capability." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "retrieval-augmented generation", "vision language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/568d83a9236d15299490b7aa52dfee20c3bf546f.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "RoRA-VLM: Robust Retrieval-Augmented Vision Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2hI3o9GHMq
Constraining embedding learning with Self-Matrix Factorization
main
Active
representation learning;constrained matrix decomposition;link prediction
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5
5;2;4
2;2;2
1;1;2
2;1;2
3.666667
3.666667
2
1.333333
1.666667
0.188982
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does SMF handle sparse data matrices, and what is its performance compared to other methods in such scenarios?\n- Can the authors elaborate on any potential biases that might be introduced by the learned object similarities in SMF?\n- What are the computational requirements for training SMF, and how does it compare to other methods in terms of training time and resource usage?\n- How does SMF perform in dynamic environments where the association data changes over time, and is there any strategy to update the embeddings efficiently?\n- Could the authors provide more insights into the choice of hyperparameters and their impact on the model's performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Performance Evaluation: The paper uses a variety of metrics (RMSE, precision at top-K, AUROC, AUPRC) across different datasets to evaluate the model's performance, which provides an assessment of its capabilities.\n- Comparison with State-of-the-Art: SMF is compared against several established methods, which strengthens the paper's claims about the superiority of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method, Self-Matrix Factorization (SMF), for learning object representations from association data without prior knowledge of object attributes. The paper claims that SMF outperforms other methods like SLIM, HCCF, and NMF in predicting missing associations and encoding object attributes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of Theoretical Foundation: The paper could benefit from a deeper theoretical analysis of why SMF works better than existing methods. The underlying assumptions and mathematical properties of SMF need more exploration.\n- Complexity and Scalability: The paper does not discuss the computational complexity of SMF or how it scales with larger datasets, which is crucial for practical applications.\n- Limited Discussion on Hyperparameter Sensitivity: While the paper mentions hyperparameter tuning, there is limited discussion on how sensitive the model's performance is to these hyperparameters, which is important for reproducibility and practical use.\n- Overfitting Concerns: The paper does not address potential overfitting issues, especially given the use of regularization terms in the loss function.\n- Generalization to Other Domains: The paper primarily focuses on association data between two types of objects. It is unclear how well SMF generalizes to other types of data or more complex relationships." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see weaknesses above. My main questions are with respect to the differences of this approach to other MF works. Section 4 kind of wraps this up but doesnt discuss relations and what this method offers.\n\nQ1) What would you say are the contributions of this method compared to the closest ones?\nQ2) could you clarify what specific innovations, if any, have been made in deriving these update rules in Eq3-4 compared to previous work? Could you discuss how the incorporation of the alpha factor contributes to the overall novelty of their approach?\nQ3) could you include more state-of-the-art matrix factorization methods in the comparative analysis? This would help provide a more comprehensive evaluation of SMF's performance." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper explores an interesting topic, ie to generate embeddings that capture implicit object attributes by leveraging similarities inferred from associations \n\n- The addition of the term that exploits the fact that objects (amy) lie on multiple linear manifolds, is interesting and seems to provide some gains over NMF." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Self-Matrix Factorization (SMF), a matrix decomposition method that constrains the nonnegative matrix factorization optimization, among other with a \"Self-Expressivity\" term that aims to preserve the linear manifold information implicit in the original association matrix. \n\nTested on datasets like MovieLens and Drug-SE, SMF outperformed traditional methods in predicting associations and clustering objects based on latent features (e.g., genres or categories). This method shows promise for recommendation systems and unsupervised learning tasks where labeled data is limited." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1) There is no related works section, and the contribution and relationships to the closest matrix factorization methods is unclear. Although a popular topic, there is only a handful of matrix factorization works cited. What are the closest matrix factorization works and how does Eq 2 compares? The second term in Eq. 2 allows each row to be reconstructed from others. Is this the first use of this \"self-expressive\" constraint in MF and representation learning, or have similar constraints been applied in other methods?\nI think that authors should consider adding a dedicated related work section comparing SMF to other recent matrix factorization methods, particularly those using similar self-expressive constraints.\n\nW2) The update rule in Eqs 3-4 are derived from Lee & Seung, 2000 and applied to Eq 2. 
It is unclear whether there is any substantial contribution there; the same goes for the addition of the factor alpha, which is borrowed from related work (for reference, a standard form of the Lee & Seung updates is sketched after this record).\n\nW3) Figure 1 seems way too generic and fails to adequately illustrate the novel aspects of SMF. Figure 1(a) depicts a generic matrix factorization, which does not highlight SMF's unique contributions. Figure 1(b) shows linear subspaces, but it lacks clarity on how the method effectively utilizes only points within the same subspace to reconstruct an object. \nThe authors should consider adding a visual representation of how SMF utilizes points within the same subspace for reconstruction, or including a side-by-side comparison with traditional matrix factorization to highlight SMF's unique approach.\n\nW4) The datasets used for evaluating SMF are relatively small, which limits the generalizability of the results, and the comparative analysis is not extensive. The main competitor in Table 2 is NMF, with modest improvements in RMSE observed for SMF. Additionally, SLIM performs significantly worse than NMF, so it may be more insightful to reorder the rows in Table 2 to better highlight SMF's performance against the second-best model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper relies on the assumption that objects reside on multiple linear low-dimensional manifolds embedded within a high-dimensional space. However, this assumption appears to have already been utilized by numerous prior matrix factorization works, rendering it relatively uninnovative. \n\n2. The paper asserts that object similarities can be derived directly from the data matrix, yet it fails to elucidate the method of learning or the criteria for determining these similarities.\n\n3. The paper compares its proposed SMF to other methods such as SLIM, HCCF, and NMF, but it does not provide a comprehensive analysis of the strengths and weaknesses of each method. \n\n4. The experiments conducted in this paper are relatively simplistic, both in terms of the datasets and tasks employed, and the comparative methods utilized are outdated." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper focuses on the problem of learning object representations from solely association data, and proposes a Self-Matrix Factorization (SMF) method. \n2. The authors performed experiments on recovering missing values on the different association matrices and show that SMF obtains comparable or better predictions than its competitors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the problem of learning object representations from solely association data, and proposes a Self-Matrix Factorization (SMF) method. 
The innovation of this paper is relatively weak, and the core contributions have not been clearly elaborated.\n\nThere are several concerns that need to be addressed. \n\nFirstly, the paper relies on the assumption that objects reside on multiple linear low-dimensional manifolds embedded within a high-dimensional space. However, this assumption appears to have already been utilized by numerous prior matrix factorization works, rendering it relatively uninnovative. \n\nSecondly, the paper asserts that object similarities can be derived directly from the data matrix, yet it fails to elucidate the method of learning or the criteria for determining these similarities.\n\nThirdly, the paper compares its proposed SMF to other methods such as SLIM, HCCF, and NMF, but it does not provide a comprehensive analysis of the strengths and weaknesses of each method. \n\nFourthly, the experiments conducted in this paper are relatively simplistic, both in terms of the datasets and tasks employed, and the comparative methods utilized are outdated." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper relies on the assumption that objects reside on multiple linear low-dimensional manifolds embedded within a high-dimensional space. However, this assumption appears to have already been utilized by numerous prior matrix factorization works, rendering it relatively uninnovative. \n\n2. The paper asserts that object similarities can be derived directly from the data matrix, yet it fails to elucidate the method of learning or the criteria for determining these similarities.\n\n3. The paper compares its proposed SMF to other methods such as SLIM, HCCF, and NMF, but it does not provide a comprehensive analysis of the strengths and weaknesses of each method. \n\n4. The experiments conducted in this paper are relatively simplistic, both in terms of the datasets and tasks employed, and the comparative methods utilized are outdated." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Self-Matrix Factorization (SMF), a method that learns object representations by constraining them with object similarities that are learned together with the representations from solely association data" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024constraining,\ntitle={Constraining embedding learning with Self-Matrix Factorization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2hI3o9GHMq},\nnote={under review}\n}" }, "abstract": { "value": "We focus on the problem of learning object representations from solely association data, that is observed associations between objects of two different types, e.g. movies rated by users. We aim to obtain embeddings encoding object attributes that were not part of the learning process, e.g. movie genres. It has been shown that meaningful representations can be obtained by constraining the learning with manually curated object similarities. Here, we assume that objects lie in multiple linear manifolds embedded in high-dimensional space, and we argue that similarities between objects that correspond to sharing manifolds can be learned from the observed associations. We propose Self-Matrix Factorization (SMF), a method that learns object representations by constraining them with object similarities that are learned together with the representations. 
In our extensive evaluation across three real-world datasets, we compared SMF with SLIM, HCCF and NMF obtaining better performance at predicting missing associations as measured by RMSE and precision at top-K. We also show that SMF outperforms the competitors at encoding object attributes as measured by the embedding distances between objects divided into attribute-driven groups." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "representation learning", "constrained matrix decomposition", "link prediction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/91b9cdcbd2bf54425c20f058a4b86c7f2388df46.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/5ce77adab0903568b110d3abc140a6089cdc9bc0.zip" }, "title": { "value": "Constraining embedding learning with Self-Matrix Factorization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
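For context on W2's point about Eqs. 3-4, the classic multiplicative updates of Lee & Seung (2000) for plain NMF are reproduced below from the literature; SMF's Eq. 2 adds a self-expressive term on top of an objective of this family, so its updates would carry extra factors whose exact form is the paper's and is not reproduced here.

```latex
% Classic Lee & Seung (2000) multiplicative updates for
% min_{W,H >= 0} ||X - WH||_F^2 ; the multiplications and divisions
% are element-wise, which keeps W and H nonnegative throughout.
\begin{aligned}
H_{kj} &\leftarrow H_{kj}\,\frac{(W^{\top}X)_{kj}}{(W^{\top}WH)_{kj}},
&\qquad
W_{ik} &\leftarrow W_{ik}\,\frac{(XH^{\top})_{ik}}{(WHH^{\top})_{ik}}.
\end{aligned}
```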
2hKDQ20zDa
Language Reconstruction with Brain Predictive Coding from fMRI Data
main
Active
fMRI-to-text decoding;predictive coding theory
applications to neuroscience & cognitive science
3;5;5;5
5;4;5;4
2;3;3;3
2;2;3;2
2;2;3;3
4.5
4.5
2.75
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "> In the regions of interests selection experiment, the authors only consider 'random,' 'whole,' and 'BPC' as the ROIs...\n\nThe selected BPC area (superior temporal sulcus, angular gyrus, supramarginal gyrus, and opercular, triangular, orbital part of the inferior frontal gyrus) contributes the most to predictive coding, as indicated in some neuroscience studies [1][2].\n\nThe process for selecting \"BPC\": For the Narratives dataset, Destrieux atlas is applied and the above mentioned ROIs are extracted. For LeBel's dataset, since the fMRI signals are not projected to a standardized space, we use the “Auditory” region provided by the authors', containing parietal-temporal-occipital (PTO) area. The BPC area of both datasets cover highly similar area.\n\nThe process for selecting \"random\": For the Narratives dataset, G_and_S_cingul-Ant, G_and_S_subcentral, G_and_S_transv_frontopol, G_orbital, S_front_middle, S_subparieta are selected. For LeBel's dataset, we randomly choose 1000 voxels from brain surface data.\n\nThe process for selecting \"whole\": We use the whole brain surface data as ROIs for both datasets.\n\nWe believe selecting random and whole ROIs as controlled experiments is sufficient for demonstrating the effectiveness of using predictive coding to improve decoding performance: \n\n1. random vs. BPC demonstrates only ROIs related to predictive coding in human language comprehension can improve decoding.\n\n2. whole vs. BPC not only confirms conclusion in 1, but also shows whole brain surface which contains BPC area still can't contribute to better decoding, because some other brain regions contain too much noise. \n\n3. none (PREDFT without SideNet) vs. BPC. PREDFT without SideNet is equivalent to not using any ROIs for predictive coding. This comparison shows predictive coding improves decoding accuracy significantly.\n\n**All the above clarifications are included in sec 4.3 and Appendix A.4. We will add more key information in sec 4.3 in the updated version**\n\n[1]. Evidence of a predictive coding hierarchy in the human brain listening to speech. Nature Human Behavior\n[2]. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature\n\n> Could the authors provide pseudocode for the method...\n\nWe will provide pseudocode in the appendix for edited version of paper. The discussion of time complexity is already in Appendix A.4 (line 845) of original paper.\n\n> The results provided by the authors mostly only include the mean value...\n\nWe guess you indicate the experiment of analyzing the impact of prediction length and distance to model performance (sec 4.4), as we have presented results per subject for other experiments. **The per-subject results of analyzing the impact of prediction length and distance are already presented in Figure 16,17,18 in the appendix of original paper.**\n\n> In the methods section, some symbols are not defined...\n\n**A notation table for symbols is presented in Table 3 in the appendix of original paper.**\n\n### Questions\n\nPlease refer to clarification for weaknesses." 
}, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Rebuttal (part 2)" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "We sincerely appreciate your effort in reviewing our paper. We will address your concerns point by point.\n\n### Weaknesses\n\n> In Section 3.3, the authors state, 'During the inference stage, as illustrated in Figure 8, the decoder in the side network is abandoned.' However, they do not provide a detailed explanation of why the decoder is discarded or discuss the potential impact of this decision...\n\nWhat we really want is predictive coding representation, which is produced by side network encoder. The side network decoder is designed to help train the side network encoder (we can't find out how to directly obtain predictive coding representation). \nSpecifically, during the training process, the label for side network decoder is predicted words instead of complete sentences (as shown in Figure 3). The side network learns mapping between specific areas of brain (BPC area) and predicted words.\nHowever, the goal of our task is to decode complete sentences. So the side network decoder is useless after training.\n\n**We will add the motivation and reason for discarding the side network decoder in the methodology section in the updated version.**\n\n> As shown in Table 1, PREDFT does not achieve the best performance on ROUGE1-R...\n\nWe think the different lengths of generated content might contribute to this factor, since we don't apply a word rate model to control the number of generated words.\nAlthough PREDFT fails to outperform other models in ROUGE1-R, the gap is narrow.\nJust like recall and precision to f1 score, ROUGE-Recall measures the extent to which a machine-generated content captures the information contained in a reference content, which is a single perspective assessment. ROUGE-Recall and ROUGE-Precision are characterized by a trade-off. When PREDFT gets a relative low ROUGE-R, it gets a high ROUGE-P.\nInstead, ROUGE-F1 is a more comprehensive indicator, combining both ROUGE-P and ROUGE-R. Our model outperforms other models in this metric. \n\n> As shown in Table 1, PREDFT without SideNet performs similarly to other methods. However...\n\nThe SideNet is designed to obtain predictive coding representation.\nPREDFT without SideNet can be viewed as traditional deep learning approach which directly applies Transformer to decode text from brain recordings, while PREDFT with SideNet combines deep learning and neuroscience findings (predictive coding). 
The SideNet provides the predictive coding representation to the decoder in the main network, and the decoder incorporates both the current fMRI representation and the predictive coding representation for text decoding.\n\nThe idea of PREDFT is motivated by predictive coding theory in neuroscience, which indicates that humans naturally predict upcoming words. Since predictive coding has been verified to contribute to human language comprehension, we seek to investigate whether such predictive information can help language reconstruction.\nThe improvement from incorporating SideNet highlights (1) the effectiveness of our model design and (2) the potential of predictive coding to improve brain-to-text decoding. **We will provide an illustration of PREDFT without SideNet for better understanding in the updated version.**\n\n> Although the authors provide a detailed description of the hyperparameter selection...\n\nAll the hyperparameters are chosen to minimize the training and validation loss as much as possible.\nIt is unclear to us which hyperparameter the reviewer is referring to.\nThe influence of ROI selection, prediction length, prediction distance, and $ \lambda $ on model performance is discussed in detail in Sec. 4.3, Sec. 4.4, and Appendix E. The learning rate is set to stabilize training. We do not test the influence of the number of model layers (e.g., Transformer layers) due to limited computational resources." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Rebuttal (part 1)" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses. I would need to be convinced that the majority of the claimed improvements in the model are not merely from a bias towards outputting high-frequency words, and thereby overfitting the chosen test metrics of BLEU and ROUGE, in order to change my score. Right now, I am fairly convinced that this is the case." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The attempt to use hypothesized predictive coding representations to enable better text decoding is interesting."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper describes a decoding method \"PredFT\" that uses a main decoding network and a side network to perform decoding from fMRI recordings of subjects listening to stories to text. The side network is responsible for obtaining predictive coding representations from specific brain regions and integrating them into the main network, enhancing language decoding. The authors claim that this integration leverages brain regions known for predictive functions (like the parietal-temporal-occipital areas) to better align brain signal decoding with anticipated semantic content. This is supported by results that have claimed the brain performs predictive coding during language stimulation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concern is that the metric does not seem to produce even locally coherent text, which substantially damages the authors' claims that this method is an advancement over prior work, such as Tang et al., which uses an LM to guarantee local coherence. Consider the following example from the case study: \"He don’t know my girl you of the eyes but his girl sleep he and he said and he said and the to the and and which I not wrong. But the Guy\". Clearly, this has no meaning, and does not even obey basic local grammatical rules (e.g. \"and and\"). The problem seems to be that the model has merely learned repeat short, high-frequency words like \"the\", \"he\" and \"and\", which improves BLEU/ROGUE score but does not actually move forward towards the goal of better language decoding. I imagine if you just had the model repeatedly and randomly output words sampled from the top 100 most common English words that it would behave fairly similarly. My expectation is that a small percentage of the improvement in BLEU score is genuinely derived from brain signals, with most of the benefit deriving from this output bias. The unreasonably high 5.62 BLEU-3 score when compared to other methods is more of a red flag, because its pretty clear that the model is simply guessing every high frequency trigram in the English language.\n\n The paper is also quite difficult to read for no reason and pointlessly notational, for example when the self-attention equation is repeated three separate times in only slightly different ways." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Section 3.3, the authors state that the decoder in the side network is abandoned during the inference stage. Could the authors provide a detailed explanation of why the decoder is discarded and discuss the potential impact of this decision on the overall performance and functionality of the model?\n\n2. As shown in Table 1, PREDFT does not achieve the best performance on ROUGE1-R. 
Could the authors analyze the potential reasons for this and discuss any factors that may have contributed to the lower performance on this specific metric? For instance, how might the model's architecture, training process, or characteristics of the ROUGE1-R metric explain this discrepancy? Did the authors observe any patterns in the types of language constructs where PREDFT underperformed on ROUGE1-R?\n\n3. As shown in Table 1, PREDFT without SideNet performs similarly to other methods, while the inclusion of SideNet leads to a significant performance improvement. Could the authors provide a detailed analysis of this phenomenon to explain how SideNet contributes to the model's enhanced performance?\n\n4. Although the authors provide a detailed description of the hyperparameter selection, could they explain the rationale behind these choices? How do these choices relate to the model's performance or the underlying theory of predictive coding?\n\n5. In the regions of interest selection experiment, the authors only consider 'random,' 'whole,' and 'BPC' as the ROIs. Could the authors clarify whether there are other potential ROIs associated with predictive coding? If so, could they provide supporting neuroscience literature for the selection of BPC? Additionally, can the authors explain the process for selecting these particular ROIs and why they believe these are sufficient to demonstrate the effectiveness of their approach? Did the authors consider any other ROIs, and if so, why were those not included in the study?\n\n6. Could the authors provide pseudocode for the method and an analysis of its time complexity to enhance the reproducibility of the article?\n\n7. The results provided by the authors mostly only include the mean value. Could the authors include the variance and statistical test results in the experimental results?\n\n8. In the methods section, some symbols are not defined. Could the authors compile a list of symbols used in the paper in an appendix to help readers understand better?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Integrating predictive coding theory into the decoding process offers a fresh perspective on reconstructing language from brain signals.\n2. Experimental results demonstrate that PREDFT outperforms other methods across various evaluation metrics, showing significant improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents PREDFT (FMRI-to-Text Decoding with Predictive Coding), a novel framework that utilizes predictive coding to translate fMRI signals into continuous language. This approach combines a primary decoding network with an auxiliary network focused on capturing brain predictive coding, aiming to improve the accuracy of language reconstruction from brain signals. The authors conduct experiments on two established naturalistic language comprehension fMRI datasets, showing that PREDFT achieves state-of-the-art performance across multiple evaluation metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Section 3.3, the authors state, 'During the inference stage, as illustrated in Figure 8, the decoder in the side network is abandoned.'
However, they do not provide a detailed explanation of why the decoder is discarded or discuss the potential impact of this decision. It is recommended to elaborate on the rationale behind this choice and its implications for the overall performance and functionality of the model.\n\n2. As shown in Table 1, PREDFT does not achieve the best performance on ROUGE1-R. The authors should analyze the potential reasons for this and discuss any factors that may have contributed to the lower performance on this specific metric. For instance, the model's architecture, the training process, or characteristics of the ROUGE1-R metric might explain the discrepancy. The authors should also report whether they observed any patterns in the types of language constructs where PREDFT underperformed on ROUGE1-R.\n\n3. As shown in Table 1, PREDFT without SideNet performs similarly to other methods. However, the inclusion of SideNet leads to a significant performance improvement. The authors should provide a detailed analysis of this phenomenon to explain how SideNet contributes to the model's enhanced performance.\n\n4. Although the authors provide a detailed description of the hyperparameter selection, they do not explain the rationale behind these choices. It is unclear how these choices relate to the model's performance or the underlying theory of predictive coding.\n\n5. In the regions of interest selection experiment, the authors only consider 'random,' 'whole,' and 'BPC' as the ROIs, which appears somewhat limited. The paper does not clarify whether there are other potential ROIs associated with predictive coding, nor does it provide supporting neuroscience literature for the selection of BPC. It is recommended to either justify the choice of BPC with relevant references or explore additional ROIs to strengthen the study's validity. The authors should explain the process for selecting these particular ROIs and why they believe these are sufficient to demonstrate the effectiveness of their approach. Additionally, they should state whether any other ROIs were considered and, if so, why those were not included in the study.\n\n6. It is recommended that the authors provide pseudocode for the method and an analysis of its time complexity to enhance the reproducibility of the article.\n\n7. The results provided by the authors mostly only include the mean value. The experimental results should provide the mean, variance, and statistical test results.\n\n8. In the methods section, some symbols are not defined. It is recommended that the authors compile a list of symbols used in the paper in an appendix to help readers understand better." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) Why did the author only use BLEU and ROUGE in the experiment? Why doesn't the author use WER, METEOR, and BERTScore, which are used in Tang et al. and MapGuide? BLEU and ROUGE both evaluate the matching degree of n-grams, which can easily lead to surface matching but semantic mismatch.
METEOR and BERTScore can better reflect semantic similarity.\n(2) Many of the methods compared by the author incorporate an LLM, while the author's model is entirely trained with their own Transformer. Does this result in the author's method being inferior to the baseline methods in terms of semantic similarity?\n(3) The author's method was inspired by predictive coding, which the author validated on an LLM using a prediction score. But can the same phenomenon still be observed on the prediction score with the author's own model? I haven't seen such an experiment evaluating the author's own model.\n(4) In some parts of the paper, fMRI is spelled as FMRI." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors provided sufficient experiments to demonstrate the significance of their motivation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In submission 6263, the authors propose PREDFT (FMRI-to-Text decoding with Predictive coding), which was inspired by predictive coding theory. This theory suggests that when humans listen to speech, their brain subconsciously predicts the words they may hear next. The authors then validated this theory through a prediction score. The verification method is to first calculate the correlation coefficient between the features extracted by the LLM at the current location and the brain features, then add the features of an upcoming text segment to the current location features, calculate the correlation coefficient again, and observe the changes in the correlation coefficient. The experimental results show that incorporating upcoming text features can increase the correlation coefficient between LLM features and brain features. Based on the above experimental results, the authors designed their own model, which includes a side network to decode upcoming text. When decoding the current text, the features from the side network are used to incorporate predictive coding theory into the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the authors' explanation of the motivation is sufficient, I still have a few major questions about their method, listed in the questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What would be the chance-level performance when reconstructing continuous language? Is there a baseline available for comparison? Additionally, what is the percentage of overlap between random ROIs and whole-brain voxels? Did the authors repeat the selection of random ROIs multiple times to ensure robustness, or did they only select a single set of random ROIs?\n2.
What is the rationale for using 4D volume data from the Narratives dataset while using 2D brain data from the Moth Radio Hour dataset? Since the Narratives dataset includes both smoothed and unsmoothed versions, along with brain masks to select activated voxels from the 4D volume, why did the authors make these choices regarding data representation?\n3. There is no interpretation provided for the two encoders used in PREDFT. The authors could project these voxels onto brain maps to verify the quality of their encoders.\n4. Figures 3, 4, 6, and 8 appear redundant. The authors could combine these into a single figure with a comprehensive caption, instead of presenting multiple, repetitive figures.\n5. What does the y-axis represent in Figure 9?\n6. Several major questions are raised in the weaknesses section.\n\nTypos:\n\n1. Line 35: Bhattasali et al. (2019); Wang et al. (2020); Affolter et al. (2020); Zouet al. (2021) -> (Bhattasali et al. 2019; Wang et al. 2020; Affolter et al. 2020; Zouet al. 2021)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation for using predictive coding in continuous language reconstruction is clear and well-explained.\n2. The proposed approach aims to improve the reconstruction of narrative stories from fMRI brain data. This is a very interesting research area because reconstructing language is challenging due to the slowness of the hemodynamic response.\n3. The authors compared the reconstruction performance using evaluation metrics against recent studies. Additionally, ablation studies were conducted on the proposed approach, with and without the predictive coding component." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Recent brain decoding studies have demonstrated that speech perception can be decoded from fMRI recordings and subsequently reconstructed as continuous language. These studies reconstruct continuous language either from specific regions of interest (ROIs) or from the whole brain, using decoder-based language models like GPT-2. Additionally, recent predictive coding studies reveal that the human brain naturally engages in continuously predicting future words across multiple timescales. Building on recent linguistic brain decoding research and the predictive coding approach, this paper explores predictive coding theory in the context of continuous language reconstruction. To this end, the authors propose PREDFT (fMRI-to-Text decoding with Predictive Coding), which consists of a main decoding network and a side network (the predictive coding component). Experimental results on two naturalistic brain datasets (Moth Radio Hour and Narratives) indicate that PREDFT achieves superior decoding performance when comparing the actual story with the reconstructed story." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are several major weaknesses in this work, particularly concerning the evaluation of reconstruction results:\n\t- A major concern is that the current study (PREDFT) does not provide a clear evaluation of reconstruction results compared to the baseline paper by Tang et al. (2023).\n\t- For example, the authors did not evaluate the word rate in the generated narrative story.
Since the fMRI data was captured while participants were listening to stories, each word has an onset and offset. Similarly, during decoding, what is the word rate predicted by the proposed model, and does this word rate match the actual word rate of the original stimuli?\n\t- Therefore, comparing the reconstructed stimulus to the ground truth (i.e., the actual transcripts of the stimuli) would provide a good sense of whether the outputs are meaningful, as the dataset includes the ground truth of what words participants heard and when they heard them.\n\n2. Furthermore, the authors performed decoding using either random selections of ROIs, the whole brain, or BPC, which includes language-related ROIs. However, prior studies have focused on specific ROIs, such as the language, prefrontal, and auditory association cortices. Therefore, it is unclear how the proposed method compares with prior methods. Since the authors' main research question revolves around how semantic information is embedded in brain signals to improve decoding, they should consider these ROIs, as they maintain a hierarchy of language processing.\n\t- The random selection of ROIs generally leads to low decoding performance. What are these random ROIs? Do they have any overlap with BPC ROIs?\n\t- Previous studies have conducted both quantitative and qualitative analyses, reporting the stimulus decoded at each ROI, including language-related regions in both the left and right hemispheres, as well as using four evaluation metrics. However, this paper does not report any reconstructed stimulus in the main content, nor does it include analysis at the ROI level. Additionally, the authors only used two metrics, and throughout the paper, the focus is more on the scores than on the main reconstructed language results.\n\n3. Although the authors report some results on prediction length and distance from the current word in Figure 1, there are no qualitative reconstruction results for these different prediction lengths and distances. What type of information is the model forecasting based on brain data? Is it syntactic information, such as nouns and verbs, or semantic content? This analysis is clearly missing from the paper.\n\n4. All the figures lack detailed captions. The results presented in the figures are difficult to understand. For instance, what is the prediction score in each subplot of Figure 1? What does each line in the top plots represent? What does prediction distance \"d\" refer to? Without providing clear details in the figure captions or placing the figures appropriately in the text, it becomes challenging for readers to understand the content and what is being conveyed.\n\n5. Since the authors use two encoders and two decoders in the proposed PREDFT, it is unclear which component is primarily responsible for reconstructing the language and which component provides the theme and narrative structure. It would be interesting if the authors reported the generated stimulus from individual components and from PREDFT as a whole, along with the performance metrics. This would help identify the shared and individual contributions of each component during language reconstruction."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024language,\ntitle={Language Reconstruction with Brain Predictive Coding from f{MRI} Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2hKDQ20zDa},\nnote={under review}\n}" }, "abstract": { "value": "Many recent studies have shown that the perception of speech can be decoded from brain signals and subsequently reconstructed as continuous language. However, there is a lack of neurological basis for how the semantic information embedded within brain signals can be used more effectively to guide language reconstruction. Predictive coding theory suggests the human brain naturally engages in continuously predicting future words that span multiple timescales. This implies that the decoding of brain signals could potentially be associated with a predictable future. To explore the predictive coding theory within the context of language reconstruction, this paper proposes PredFT (FMRI-to-Text decoding with Predictive coding). PredFT consists of a main decoding network and a side network. The side network obtains brain predictive coding representation from related brain regions of interest (ROIs) with a self-attention module. This representation is then fused into the main decoding network for continuous language decoding. Experiments are conducted on two popular naturalistic language comprehension fMRI datasets. Results show that PredFT achieves current state-of-the-art decoding performance on several evaluation metrics. Additional observations on the selection of ROIs, along with the length and distance parameters in predictive coding further guide the adoption of predictive coding theory for language reconstruction." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "fMRI-to-text decoding", "predictive coding theory" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b4151d73d5768790c6f7af1d68357d03c85ba298.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Language Reconstruction with Brain Predictive Coding from fMRI Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2hbgKYuao1
Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality
main
Active
Graph Neural Networks;Subgraph GNNs;Subgraphs;Expressive power
learning on graphs and other geometries & topologies
5;5;5;6
3;3;3;3
2;3;3;3
2;3;2;3
3;2;2;3
5.25
3
2.75
2.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Have you investigated how the performance of you sampling strategy varies with graph density? Intuitively, in very sparse graphs, walk-based centrality might be less informative.\n2. In the perturbation analysis, could the bound be tightened by considering specific graph properties or structures? For instance, does the bound become tighter for trees or graphs with bounded degree?\n3. Is there any strategies to extend the perturbation analysis to handle edge features or different types of marking strategies?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. the theoretical analysis provide a solid foundation for their proposed method. The utilization of perturbation theory and expressive theory looks correct for me.\n2. This method is neat in design, which only requires minimal computation overhead compared with baselines while maintaining competitive performance.\n3. The experimental validation is comprehensive, besides comparing their method with SOTAs, they conduct synthetic experiments for counting substructures, detailed ablations and experiment time analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a model termed Hybrid Marketing Network (HyMN), which is designed based on two intuitions which may potentially improve the GNN performance. First, marking the nodes included in Subgraph GNNs using walk-based subgraph centrality measures as an efficient strategy. Second, augmenting the node features with the same centrality measures as Structureal Encodings (SEs). The key insight is that walk-based centrality measures serve both as effective indicators for subgraph importance and as infrmative structrual features. The authors theoretically analysed the node marking strategy with graph perturbation theory and demonstrate that their approach effectively balance expressiveness and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The sampling strategy does not consider interactions between sampled subrgaphs that are already sampled, which can lead to potential redundancy.\n2. The analysis focus primarily on single-node marking. While mentioned in the limitations, extending the analysis to multi-node marking could provide addiitonal insights. Imagine a social network representing employees in a company. In this network, there's a team of three senior managers (say Alice, Bob, and Carol) who work closely together and have very similar connections to other employees. They all interact with the same team members, attend the same meetings, and collaborate on similar projects. 
According to your approach, since all three managers have high centrality scores due to their positions, the algorithm might select both Alice and Bob for marking. However, because their network positions and connections are so similar, marking both of them provides largely redundant information. It would be more informative to mark Alice (representing the management team's perspective) and then perhaps mark someone from a different department or organizational level, like a developer or a project coordinator, to capture a different aspect of the company's structure.\n3. The approach can be less effective on graphs where walk-based centrality measures don't align with task objectives. Consider a drug discovery task where we're trying to predict whether molecules will bind to a specific protein receptor. The binding activity often depends on specific functional groups located at the periphery of the molecule. Take acetylsalicylic acid (aspirin) as an example. The molecule's binding properties are largely determined by its acetyl group at the edge of the molecule, but the walk-based centrality measure would give more importance to the central benzene ring because it participates in more walks through the molecular graph. In this case, marking nodes based on centrality would emphasize the structurally central but functionally less relevant parts of the molecule, while potentially overlooking the peripheral functional groups that actually determine the binding behavior. This mismatch between structural centrality and functional importance could lead to suboptimal performance on specific prediction tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the author provide more evidence to support the validity of the two claims in line 182?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper proposes a novel framework that balances the efficiency and expressiveness of subgraph GNNs.\n2. This paper provides a comprehensive theoretical analysis and experimental validation of why the simple subgraph centrality measure is needed for subgraph GNN.\n3. The experiment results demonstrate the effectiveness and efficiency of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To balance efficiency and expressiveness in subgraph GNNs, this paper proposes a novel framework that utilizes walk-based centrality measures for subgraph subsampling and structural encoding. The necessity of using centrality measures is demonstrated through theoretical analysis and experimental validation. Experimental results show the effectiveness of HyMN across various tasks and datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The two main components (subsample subgraph using centrality measure, and centrality-based structural encoding) are similar with previous work.[1,2]\n2. A more detailed introduction to the key background of this work would be helpful(i.e. the background and method of the node marking)\n3. More backbone(except GIN) and more baseline should be preferred to be considered by authors.\n\n[1] Sun, Qingyun, et al. \"Sugar: Subgraph neural network with reinforcement pooling and self-supervised mutual information mechanism.\" *Proceedings of the web conference 2021*. 2021.\n\n[2] Rampášek, Ladislav, et al. \"Recipe for a general, powerful, scalable graph transformer.\" *Advances in Neural Information Processing Systems* 35 (2022): 14501-14515." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Refer to Weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The walk centrality-based subgraph selection method achieves efficient subgraph sampling, \nsimplifying the model while ensuring performance, making it suitable for larger subgraphs.\n 2. Experiments on various tasks showcase HyMN's adaptability and performance.\n 3. By ranking nodes based on their centrality values, and mark the top-scoring ones, the model \nreduces computation time, making it applicable to a wider spectrum of downstream tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel method called Hybrid Marked Network (HyMN), aimed at balancing \ncomputational efficiency and model expressiveness. The approach combines subgraph GNNs \nwith structure encodings (SEs) based on walk centrality measures. The main innovation lies in \nutilizing subgraph centrality for subgraph sampling and structure encoding, allowing for the \nmaintenance of prediction accuracy while reducing the required number of subgraphs, thus \nlowering computational costs while preserving the model's expressive capacity. Experimental \nvalidation on synthetic and real world tasks demonstrates that HyMN outperforms other state-of\nthe-art GNN methods while reducing computational demands" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Abstract and Background Information:\nThe abstract is incomplete, as it lacks essential background information. This omission leaves me confused about the specific problem the paper aims to address. 
A well-defined problem statement is crucial for understanding the motivation behind the research and its significance.\n\nExperimental Validation of CSEs:\nI noticed that experiments on the effect of CSEs in the Peptides and Zinc datasets were not conducted. This absence raises questions about the impact of incorporating CSEs (Q4). Clarifying this point is essential for understanding how CSEs contribute to the overall findings of the study. Consider including experimental results or justifications for their omission.\n\nExploration of Selection Strategies:\nThe paper would benefit from exploring more complex selection strategies and different walk-based centrality measures. This exploration could provide deeper insights into the dynamics at play and strengthen the overall analysis.\n\nTypos and Formatting Issues:\nWhile typos and formatting are not the most critical issues, there are several areas that require attention. For instance, the indentation of the abstract does not align with the template requirements. Additionally, line 194 contains a grammatical error with \"is are\" used simultaneously, which should be corrected. Some equations lack punctuation at their end, where it is needed, leading to inconsistencies. Lines 206-210 stray from the scope of Definition 1 and should not be italicized. Furthermore, Algorithm 1 lacks a title, which is necessary for clarity and organization.\n\nSection 2 - Clarification of Content:\nAlthough Section 2 is titled “PRELIMINARIES AND RELATED WORK,” it primarily focuses on related work without adequately presenting the foundational definitions necessary for understanding the subsequent content. It would be beneficial to include a clearer explanation of the definitions and concepts that readers should be familiar with before delving into the related literature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you clarify what new insights are brought by the results in Table 1 compared to those in Figure 6 and Figure 3? The distinction between these results is not immediately clear to me.\n2. In line 247, you mention that the sampling strategy should \"(...) alter the graph representations in a way that is consistent with the target space.\" This aligns with the theoretical concerns in point 1. However, the experimental study performed pertains to counting substructures, a task where the information extracted by SC is expected to be relevant. How do you expect this sampling method to compare to other centralities in tasks where SC may not encode the most relevant information, such as network flow characterization?\n3. The results under OSAN represent 1-OSAN?\n4. Proposition 2 - Appendix B relates MPNNs with SE to DSS-GNN. The complete proposed method shows better results than 1-OSAN (assumed, see above) which is upper-bounded by the 2-WL, and better results than policy-learn which is expected to be more expressive than 1-WL but less than 4-WL. 
Where is the complete proposed method positioned in the WL framework for expressivity?\n5. In line 424, it is stated that Figure 3 compares random sampling with SC. However, earlier (line 51), you state that random sampling is suboptimal. Why were other sampling strategies not tested to fully validate SC?\n6. What was the method used to select the hyperparameter T? A brief explanation in the text would provide more clarity.\n7. It was not addressed why a higher T (i.e., more subgraphs, and hence more information) leads to worse results. Is the information not useful? Is it connected to the sampling procedure not taking into account already sampled subgraphs? This is unclear to me.\n8. For Proposition 1, Step 2, the block matrix $W^{(j+1)\_0}$ seems to select the first $k+j$ columns, not $k+j+1$? Consider $k=3$ and $j=1$. The block matrix will have dimensions $7 \times 7$, with only the block in the first spot of the diagonal performing a selection, since the second block $I_{j-1}$ is omitted.\n9. For the same proposition, I would like to seek clarification regarding the explanation provided for $AXW^{(j+1)_1}$. Specifically, the identity matrix is described as having dimensions $k - (j - 1)$, yet the reference appears to describe the selection of the last $j$ columns. Additionally, the process by which the iterations from $j = 1$ to $j = k$ contribute to the formation of the final vector is not entirely clear. I would greatly appreciate any further elaboration on these points to enhance my understanding.\n10. Considering choices of T > 1 in the experiments, for Theorem 4, what is the impact of k > 1 for a top-k Subgraph GNN compared to MPNN+CSE?\n\nMinor:\n1. The phrase starting in line 214, \"If instrumental (...)\", is confusing; consider rewriting it.\n2. \"Quartic\" is often misspelled as \"quatric\".\n3. Line 890: the expression \"1-hop neighborhood\" is imprecise; I recommend \"1-hop induced subgraph\".\n4. Line 836: \"the\" is misspelled as \"he\".\n5. Missing subscript on line 952?\n6. No identification of what $m$ denotes in Equation 15.\n\nSuggestions (Optional):\n1. Line 228, \"(...) untrained 3-layer GIN\". By untrained I assume the weights were initialized at random. If this is the case, for GIN, different random initializations should lead to some variance, even if small, in the results, leading to variance in the perturbations. It would be more robust to report the mean difference in perturbations across multiple random initializations, as this would account for variance in the results.\n2. There is no direct comparison with some relevant network architectures, like $I^2$-GNN, that are not captured by 1-OSAN. I understand that $I^2$-GNN was used as a comparison point in MAG-GNN, but the results were quite close in the referred work; hence, I believe it would be useful to add such a comparison. More importantly, works like ESC-GNN introduce a faster alternative to $I^2$-GNN. Since the presented work also focuses on efficiency, I believe a comparison with ESC-GNN would be interesting.\n3. Since much of the code used is based on GraphGPS, the configs could be described more carefully to make it easier to match the experiments in the paper.\n\nThe paper presents contributions that may be of limited novelty while having some inconsistencies. However, I am **very** open to revisiting and potentially increasing my evaluation score, provided the authors effectively address the identified weaknesses and respond to the questions posed.
I encourage the authors to consider these aspects to enhance the manuscript's impact and clarity." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Relevance and Contextualization**: The proposed work addresses an issue (the computational complexity of Subgraph GNNs) that is extremely relevant for the community. It clearly identifies gaps in existing methods and situates the research well within the current state of the art.\n2. **Centrality-based Sampling**: The decision to sample based on centrality measures has some theoretical backing. The analysis based on perturbations is well founded and creative.\n3. **Structural Encoding**: The decision to incorporate structural encodings with subsampling is well motivated and has sufficient theoretical backing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed work mainly addresses concerns regarding the temporal complexity of Subgraph GNNs. It proposes a mechanism based on subgraph centrality to sample the graphs that will be part of the bag. Furthermore, it demonstrates that adding a centrality encoding to nodes can enhance the discriminative power of sampled Subgraph GNNs while limiting the added computational overhead.\n\nMain Contributions:\n1. Adoption of centrality-based sampling for Subgraph GNNs.\n2. Clear statement and results regarding incorporating structural encoding as a means to improve the expressivity of sampled Subgraph GNNs without much computational overhead." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Theoretical Concerns**: The analysis based on perturbation is not sufficient to justify using the highest-centrality nodes to guide the sampling procedure. Even though they will tend to produce the highest perturbation in the graph representation, this alone does not fully justify their selection for the sampling procedure. These nodes may cause the highest perturbation in graph representations, but their practical value in enhancing graph learning remains uncertain; there is no guarantee of the usefulness and attainability of such modifications. Additionally, the assumption that high-centrality nodes capture the most relevant information is limited by the specific centrality metric used, which may not capture all essential features.\n2. **Comparison of Centrality Measures**: The comparison between the adopted subgraph centrality, SC, and other centralities is not sufficient to prove SC's superiority. Moreover, centrality measures often complement each other rather than subsuming one another. SC may be more effective for certain tasks but not necessarily for all graph learning tasks. Rather than claiming SC's superiority, it would be better to show in which tasks SC excels and acknowledge that other centrality measures may outperform it in different contexts. Otherwise, stronger results establishing the superiority of SC over every other centrality for graph learning would be necessary. (For concreteness, a toy sketch of the subgraph centrality computation is given after this set of reviews.)\n3. **RWSE vs CSE**: While the proposed method (CSE) shows some advantages over RWSE, particularly from a sampling perspective as seen in Figure 7, its benefits appear limited.
The experiments focus primarily on counting substructures, a relevant task but one that may not fully demonstrate CSE's broader applicability due to its inherent predisposition toward this type of task.\n4. **Inconsistencies in Experiments**: The experimental results lack consistency, as the models used for comparison vary across datasets. It is understandable that in many cases the results for all datasets are not available in the original works. However, this inconsistency can raise confusion and concerns of cherry-picking. It would strengthen the results to ensure uniformity in model comparisons.\n5. **Missing Runtime Comparisons**: The efficiency of the proposed method is emphasized throughout the paper, yet runtime comparisons are not provided for all datasets. Since computational efficiency is a key focus, these comparisons should be included in every experiment to give a more comprehensive view of the method’s benefits.\n6. **Confusing Proof**: The presentation of Theorem 4 lacks clarity. Specifically, in line 910, it is stated that Lemma 1 will be used to demonstrate that the multiset of values in the centrality encoding for each graph will be distinct. However, unless I am overlooking something, Lemma 1 does not seem to establish this. Rather, it appears to indicate that the global node is consistently selected for top-1 sampling, which does not sufficiently ensure distinguishability between the centrality multisets of the two graphs. This interpretation seems supported by the statement in line 963. Moreover, the topic of centrality encoding does not reappear until line 966.\n\nFurthermore, considering that $A$ represents the adjacency matrix and $A^k$ denotes its $k$th power, the assertion in Equation 11 raises concerns regarding its mathematical validity and intuitive clarity. For example, the expression $A^{k+1}\_{u_1,v} = A^{k+1}\_{u_1,:} \cdot A^{k+1}\_{:,v}$ appears to be problematic.\n\nFrom a path interpretation perspective, the original statement suggests that the number of paths of length $k+1$ between nodes $u_1$ and $v$ can be derived by aggregating information from $A^{k+1}\_{u_1,:}$, which accounts for all paths of length $k+1$ originating from $u_1$, and $A^{k+1}\_{:,v}$, which encompasses all paths of length $k+1$ terminating at $v$ from any other node. Would it not be more accurate to represent this relationship as $A^{k+1}\_{u_1,v} = A^{k}\_{u_1,:} \cdot A^{1}\_{:,v}$ (i.e., simply the row-times-column form of the matrix product $A^{k+1} = A^{k} \cdot A$)?\n\nAdditionally, I would like to point out the presence of redundant statements, such as $A^{k+1}\_{u_1, v} \geq A^{k+1}\_{u_1, v}$ found in line 961, which could benefit from clarification or removal." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel framework that uses walk-based centralities as a powerful Structural Encoding and to reduce the computational cost of Subgraph GNNs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024balancing,\ntitle={Balancing Efficiency and Expressiveness: Subgraph {GNN}s with Walk-Based Centrality},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2hbgKYuao1},\nnote={under review}\n}" }, "abstract": { "value": "We propose an expressive and efficient approach that combines the strengths of two prominent extensions of Graph Neural Networks (GNNs): Subgraph GNNs and Structural Encodings (SEs).
Our approach leverages walk-based centrality measures, both as a powerful form of SE and also as a subgraph selection strategy for Subgraph GNNs. By drawing a connection to perturbation analysis, we highlight the effectiveness of centrality-based sampling, and show it significantly reduces the computational burden associated with Subgraph GNNs. Further, we combine our efficient Subgraph GNN with SEs derived from the calculated centrality and demonstrate this hybrid approach, dubbed HyMN, gains in discriminative power. HyMN effectively addresses the expressiveness limitations of Message Passing Neural Networks (MPNNs) while mitigating the computational costs of Subgraph GNNs. Through a series of experiments on synthetic and real-world tasks, we show it outperforms other subgraph sampling approaches while being competitive with full-bag Subgraph GNNs and other state-of-the-art approaches with a notably reduced runtime." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Neural Networks", "Subgraph GNNs", "Subgraphs", "Expressive power" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7a65c73ac52f0e7707b0c232c03c1d760f621349.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c378c8a96651dece3e10c10e00731402737ae554.zip" }, "title": { "value": "Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based Centrality" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
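To make the walk-based centrality sampling debated in the reviews above concrete, here is a minimal sketch of centrality-based node marking. It assumes the measure is the classic subgraph centrality $SC(v) = [e^{A}]_{vv} = \sum_k (A^k)_{vv} / k!$; the paper may use a different walk-based variant or a truncated series, and the toy graph is invented.

```python
# Minimal sketch of centrality-based subgraph selection (assumptions noted above).
import numpy as np
from scipy.linalg import expm

def top_T_marked_nodes(A: np.ndarray, T: int) -> np.ndarray:
    sc = np.diag(expm(A))        # subgraph centrality: SC(v) = sum_k (A^k)_{vv} / k!
    return np.argsort(-sc)[:T]   # indices of the T most walk-central nodes

# Toy graph: a 4-cycle with a chord, so nodes 0 and 2 join more closed walks.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(top_T_marked_nodes(A, T=2))  # -> [0 2] (or [2 0]): the two most central nodes
```

Processing only the T marked subgraphs instead of one subgraph per node is what shrinks the bag, and the same per-node scores can be reused as the centrality-based structural encoding the reviews refer to as CSE.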
2hcfoCHKoB
DeepRTL: Bridging Verilog Understanding and Generation with a Unified Representation Model
main
Active
Large Language Model;Program Representation Learning;Verilog Understanding and Generation
foundation or frontier models, including LLMs
3;6;6;8
4;4;3;2
2;1;3;4
1;3;2;3
2;3;2;3
5.75
3.25
2.5
2.25
2.5
-0.802181
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- As a whole, the work seems extensive and relatively careful, from conceptualization to base data collection, human annotation, model training, and evaluation.\n- I am not an expert in EDA, but it seemed like the work was novel from the point of view of such a dataset and model not existing previously.\n- The experimentation is extensive, comparing a fairly large number of models with various evaluation metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a dataset and a model for verilog generation and understanding. It carefully describes the annotation process for the dataset and presents an extensive battery of experimental results. Overall, the paper seems valuable to me, although I should clarify that I am well-versed in code generation, but not in Verilog so I may be missing some context with related work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- As someone who is not well-versed in Verilog, I would have appreciated an explanation of the basics of the language, what is its basic syntax, characteristics, etc. But there was not very much explanation in the paper.\n- Conceptually, the work was rather straightforward and I did not get many clear research insights from the paper. For this paper I am not extremely concerned about this though, as the work seems valuable nonetheless, and could serve as a base for future research.\n- It was not clear how much of the work will be released for the research community to build on. It seems that some of the data may be released, but presumably the proprietary data will not be? And also it wasn't clear about the model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "--" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Verilog is not a particularly known language. Authors could have explained a bit more its nature, syntax and usage.\n\n* Figure 1, although it helps to understand the flow of data collection, it’s not particularly clear. 
The fact that the flow goes to the top-left, in opposition to the common flow for reading (top to bottom and left to right), makes it unclear. Also, which part is used for training? Only after distillation?\n\n* Lines 388-392: these lines and Figure 3 describe the progressive training. This explanation is not clear. Are the authors just feeding the model with more to less granular annotations? That could be an example of curriculum learning. Please clarify and add references if needed.\n\n* Why didn't the authors compare the performance of the new models with Zhang et al. (2024), Chang et al. (2024b), Liu et al. (2023b), or Thakur et al. (2024)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Original approach, focusing on both generation and understanding tasks for a low-resource code language such as Verilog, which is specifically designed for hardware description.\nThe approach seems reasonable. The field of application is important and serves the ultimate goal of improving electronic design automation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper deals with the task of code understanding and generation in the context of generation of hardware description language (HDL) code. In particular, this work focuses on the generation of Verilog code.\nThe model is based on an existing CodeLLM (the authors used CodeT5+), which was fine-tuned with a new augmented dataset created for this purpose. The dataset comprises both open and proprietary Verilog code, which was augmented (commented and summarised) by GPT-4. \nTwo models are trained using a progressive training strategy based on CodeT5+ models. For the understanding benchmark, models are evaluated in terms of BLEU and ROUGE, as well as embedding similarity and GPT score. Results show an improved performance over competitors and baseline models. For the generation part, the models are evaluated on a Verilog generation benchmark introduced by Chang et al. (2024), and compared with GPT-series models, showing competitive performance against the best (o1-preview) and surpassing GPT-3.5 and GPT-4." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The work lacks clarity. In particular, the dataset collection and the training regime are not completely clear, and the figures do not clarify the issue (see below).\nExperiments seem reasonable, but none of the baselines and competitors were trained specifically on Verilog. Since the current work cites other previous approaches, experiments could have compared to them as well (or explained why that was not possible)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper introduces a novel task for evaluating LLMs in hardware design, focusing on Verilog understanding—prior work mainly focuses on generation. It introduces new training datasets, evaluation benchmarks, and establishes baselines for this new task.\n\n2. DeepRTL, the model proposed in this paper, uniquely good at both the generation and understanding of Verilog, making it different from other models in the hardware design domain.\n\n3. The methodology for creating a natural language-code parallel corpus via prompt engineering with GPT-4 is innovative and shows promise for broader application in fields where parallel corpora are lacking.\n\n4. The diagrams in this paper describes the proposed methods clearly and intuitively." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper makes a contribution to the field of hardware design automation by addressing both the generation and understanding of Verilog code using large language models (LLMs). While previous studies primarily focused on the generation aspect, this work recognizes the importance of understanding Verilog code and proposes a unified representation model, DeepRTL, built on an enhanced CodeT5+ architecture. This model is trained on a specifically curated dataset that tightly aligns natural language descriptions with Verilog code, aiming to improve the semantic alignment between the two. Additionally, the paper introduces the first benchmark specifically for Verilog understanding and develops two novel metrics, embedding similarity and GPT score, to capture semantic similarities more effectively than traditional n-gram-based metrics like BLEU and ROUGE. In comparative assessments, DeepRTL surpasses GPT-4 in Verilog understanding tasks and matches the performance of OpenAI’s o1-preview model in code generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The reason for selecting T5-like models as the base for DeepRTL is not empirically validated. It remains unclear whether the observed performance gains in Verilog understanding are due to T5's encoder-decoder architecture or the synthesized dataset used for fine-tuning. Comparative analysis with a decoder-only model, such as LLaMa-3-1B or DeepSeekCoder-1.3B, using the same dataset for finetuning would provide clearer insights.\n\n2. The paper does not evaluate the impact of varying context window lengths, which is important given that CodeT5+ supports a limited token count (2,048 tokens), while actual Verilog code often exceeds this length. Dropping examples longer than 2,048 tokens will also bias the results in favor of DeepRTL, which is based on CodeT5+. A model accommodating longer context windows could potentially offer superior performance on the general task, but not for this tailored dataset.\n\n3. 
The evaluation metrics for code understanding (embedding similarity and GPT score) are solely based on GPT models, leading to potential bias, as evidenced by the inflated scores of the GPT-3.5, GPT-4, and o1-preview models shown in Table 2. This overlap may make the comparisons biased in favor of GPT-family models.\n\n4. The evaluation of code generation lacks a comprehensive set of baselines. Despite mentioning various Verilog generation models in the related work section, these models are absent from the comparative analysis in Table 3.\n\n5. The fine-tuning dataset includes proprietary code that cannot be released publicly, and the benchmarks used are also developed by the authors. The absence of shared code, data, or models in the publication hinders reproducibility and makes it impossible to assess potential data contamination and bias in evaluation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why throw away dataset items that are longer than 2,048 tokens? It is true that this is the maximum input length for CodeT5+; however, why make a choice about the dataset based on the (essentially arbitrary) choice of model used in the specific experiments here?\\\nModern LLMs, including open-source ones such as Llama, have context sizes way beyond 2,048 tokens." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "A novel dataset and benchmark for highly specialised programming code (Verilog); this might be interesting, as it provides a new resource for a programming language that does not have as much attention as others such as Python, Java, or C++." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel dataset and model training for Verilog understanding and generation, as well as a new high-quality benchmark for the understanding task.\n\nThe authors provide a large Verilog dataset based on a large quantity of crawled open-source code that is processed into code and natural language descriptions via GPT-4, as well as a smaller amount of hand-curated code-description items from proprietary sources.\\\nThey also introduce a new benchmark for Verilog understanding, consisting of 100 manually verified, high-quality code-description pairs.\n\nFor experiments, the authors train CodeT5+-based models of sizes 220M and 16B on their newly introduced dataset, using \"progressive training\", and evaluate model performance in terms of Verilog understanding and generation capabilities.\\\nExperiments show that models trained in this manner outperform strong baselines on various metrics." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Beyond the curation of an interesting new dataset, there is very limited novelty to this work; it seems like authors might be somewhat unfamiliar with the current state of the field of LLMs/Machine Learning, including ML for code:\n- Fine-tuning a CodeT5 model on domain-specific code has been done.\n- The \"progressive training\" is just curriculum learning, which is well-established in the field.\n- Similarity scores based on vector similarity are as old as Word2Vec, if not older.\n- Similarities/evaluations with LMs or LLMs (here \"GPT Score\") are well-established, e.g., see \"LLM as a judge\", BERT Score, etc.\n\nThis seems like it would be a very nice paper for a specialised Verilog/hardware spec conference, but may be of limited value for a venue like ICLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024deeprtl,\ntitle={Deep{RTL}: Bridging Verilog Understanding and Generation with a Unified Representation Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2hcfoCHKoB},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in large language models (LLMs) have demonstrated significant potential in automating the generation of hardware description language (HDL) code from high-level natural language instructions. While fine-tuning has improved these models' performance in hardware design tasks, prior efforts have largely focused on Verilog code generation, overlooking the equally critical task of Verilog understanding. Furthermore, existing models suffer from weak alignment between natural language descriptions and Verilog code, which hampers the generation of high-quality, synthesizable designs. To overcome these limitations, we present DeepRTL, a unified representation model that excels in both Verilog understanding and generation. Based on CodeT5+, DeepRTL is fine-tuned on a comprehensive dataset that aligns Verilog code with rich, multi-level natural language descriptions. We also introduce the first benchmark for Verilog understanding, alongside two novel metrics, embedding similarity and GPT score, that capture semantic similarity more accurately than traditional metrics like BLEU and ROUGE, which are limited to surface-level n-gram overlaps. DeepRTL's progressive training strategy enables it to significantly outperform GPT-4 in Verilog understanding tasks, while achieving performance on par with OpenAI's o1-preview model in Verilog generation tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "Program Representation Learning", "Verilog Understanding and Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1b0ad7ee7ee0ae6f8aee3d6c47d7523cb2cf714b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DeepRTL: Bridging Verilog Understanding and Generation with a Unified Representation Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2iCIHgE8KG
Discovering Temporally Compositional Neural Manifolds with Switching Infinite GPFA
main
Active
Computational neuroscience;neural data analysis;Bayesian nonparametrics;latent variable modelling;
unsupervised, self-supervised, semi-supervised, and supervised representation learning
6;6;8;8
4;3;5;3
4;3;4;3
4;2;3;3
3;3;3;4
7
3.75
3.5
3
3.25
0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- unclear sentence line 366: \"Moreover, in an SLDS, only the latent dynamics changes following context switching, hence requiring a non-negligible number of timesteps (depending on the spectral radius of transition operator) for the reflection of context changes in the observation space. in contrast, the compositional nature of factor loading process in the infinite GPFA model allows immediate differential expression of latent processes into neural activities.\"\n\nCan you clarify this a bit? infinite GPFA model seems to also have the factor loading process in latent space, why it allows immediate expression into neural activities than SLDS? \n\n- How is the number of features D selected for svGPFA in the experiments section for synthetic data and real data?\n\n- What's the future research direction for this paper?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is clearly written. The model formulation and related works are clearly introduced. \n- The authors have done extensive experiments on real neural data and synthetic data, and results seem good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the infinite GPFA model, which is Bayesian non-parametric extension of the classic GPFA by combining GPFA with an Indian Buffet Process Prior. This model can potentially infer infinite set of latent factors from data. A variational EM algorithm is proposed to perform the inference. The authors demonstrate the effectiveness of this model through analysis on simulated and real datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The idea of combining GPFA with IBP prior is not revolutionary. \n- I listed some questions in the section below." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The switching GPFA and switching infinite GPFA models effectively tackle a significant limitation commonly encountered in many latent variable models in neuroscience, particularly within GPFA: the a priori selection of latent dimensionality. Additionally, these models enhance the approach by allowing for unequal contributions of latent variables at different time points, addressing another critical shortcoming of traditional GPFA. This advancement represents a noteworthy contribution to latent variable modeling in neuroscience. The authors also incorporate inducing points for improved scalability, a practical and well-established extension from the existing GP literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present an extension to GPFA, a widely used latent variable model in neuroscience, that uses an Indian Buffet process as a nonparametric extension to automatically select latent dimensions at each time point. This avoids the need for a priori latent dimensionality choice in GPFA, a well-known limitation to the method, and allows for a sparse selection of latent activations at each time point, which can identify transitions in the latent representation, enhancing the models usefulness in the identification of behavioral states. The authors show strong validation on synthetic datasets as well as real spiking data. The theory is clear and model development and implementation is clear and sound." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weakest part of the manuscript is the lack of evaluations to any competing approach. The authors appear only compare to variants of their own model. In particular, because the authors emphasize the advantage of not needing to pre-select latent dimensionality, some evaluation against the ARD approach in Jensen et al would be appreciated. The authors claim the ARD is inferior due to requiring marginalizing over all of the data to determine latent dimensionality, and this is sound reasoning, however, I am curious as to how exactly different the models fits and latent posteriors would be. It might be possible, for example, for the ARD GPFA model to learn an appropriate number of latent dimensions and have periods of time where different groups of latents are minimally or highly variable. I think it would help a reader get a sense of how svGPFA compares Bayesian GPFA, as the latter is a model that was motivated in a very similar way.\n\nNote also that the manuscript \"Uncovering motifs of concurrent signaling across multiple neuronal populations\", Gokcen et al. also uses an ARD prior in a similar GPFA-style model - might be worth citing\n\nOne small point -- Figure 2 is difficult to render in the browser and this specific page lags. I suspect the figure size is too large, maybe due to panel d. Downsampling this figure before adding it to the latex might help." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How does the infinite GPFA handle cases where it identifies overlapping/slightly differing latent factors ?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Novelty**: The proposed model, infinite GPFA, has a robust mechanism that allows for estimation of both the number of latent factors and their time-varying activations without requiring manual tuning. In addition, the sparsity allows for learning of more interpretable latent factors, which is helpful for interpreting neural computations.\n- This framework opens up new avenues in neuroscience for exploratory investigations of experimental data.\n- Presentation is clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a novel model, an extension to GPFA, that incorporates stochastic activation of latent factors in the loading process via IBP prior. This results in dynamically switching expression of latent factors for each neuron across different timepoints, hence incorporating the dynamic shifts in internal states of the animal. They apply their model, infinite GPFA, to two datasets, one synthetic and one real world neuroscience dataset (hippocampal place cells during spatial navigation)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- More comparison to other methods could have strengthened the utility and performance of infinite GPFA, specifically, using some of the previously established methods like GPFA with ARD prior. Although GPFA with ARD prior is not designed to capture latent factors across time, it would be useful to show it quantitatively.\n\nMinor points\n- l060 ‘An as example,’ → ‘As an example,’\n- Figure2.a the axis labels are illegible .\n- In general figure 2 gets rendered very slowly, I am not sure the exact cause but it might be worth investigating because if it’s simple like rasterization or high resolution graphics, it can be easy to fix." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "for fig.2c what expected level of masking for Z was used? 
on that topic, it could also be interesting to show how the gap between gpfa and infinite gpfa inference performance varies with the expected value of alpha. additionally, an inline figure or addition to figure 1 helping to build intuition about the mapping between numerical values of alpha and the expected number of features could be useful.\n\nis the runtime comparison in Fig.2h for a single EM iteration, or only inference time?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "the paper is very well written. the background section is clear and, in my opinion, successfully takes the reader from the original gpfa to their new generative model that incorporates an indian buffet process prior over the binary masking matrix, Z. since approximate inference in this model is highly non-trivial, the authors developed an approximate variational EM procedure for inference/learning. i appreciated the extensive discussion covering what terms are and are not tractable in the variational bound and how the authors deal with the intractable terms in a practical manner; important details that would clutter the main text were referenced often and helped with further clarifications. their synthetic data example validates their inference approach and reassuringly shows the infinite gpfa model can match standard gpfa inference quality even when there is no encoding variability. in their last experiment, they apply their method to neurophysiological recordings taken from a rat performing a spatial navigation task; they demonstrate how their method can reveal the compositional structure of the latent space by identifying different latent factors being loaded onto the neural population at different locations or trial events." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "the authors introduce switching infinite gpfa, an extension of the classical gpfa model to account for the possibly time-varying dependence of observed neural activity on different latent factors. the authors make this possible by using an indian buffet process as the generative model for an (infinite) binary mask that selects the latent features read out to the observation space at each point in time. they outline how to perform tractable variational EM inference/learning for this model class. the authors then validate their model and inference/learning procedure on synthetically generated data, and then show how their method can be used to extract behaviorally meaningful latent features from rats performing a spatial navigation task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "more comparisons could be helpful. for example, it could have been interesting to see how bGPFA also compares to infinite gpfa with and without model mismatch, similar to the synthetic example presented. \n\nfrom fig 2b and fig 3b, it does appear that infinite gpfa takes substantially longer to reach convergence. do the authors expect this difference to get substantially worse with higher latent state dimensionality? it could be helpful to see convergence plots for a dataset that requires higher latent state dimensionalities." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a fully Bayesian nonparametric extension of GPFA that enables discovery of temporally compositional neural manifolds underlying high-dimensional population neuronal activities." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024discovering,\ntitle={Discovering Temporally Compositional Neural Manifolds with Switching Infinite {GPFA}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2iCIHgE8KG},\nnote={under review}\n}" }, "abstract": { "value": "Gaussian Process Factor Analysis (GPFA) is a powerful latent variable model for extracting low-dimensional manifolds underlying population neural activities. However, one limitation of standard GPFA models is that the number of latent factors needs to be pre-specified or selected through heuristic-based processes, and that all factors contribute at all times. We propose the infinite GPFA model, a fully Bayesian non-parametric extension of the classical GPFA by incorporating an Indian Buffet Process (IBP) prior over the factor loading process, such that it is possible to infer a potentially infinite set of latent factors, and the identity of those factors that contribute to neural firings in a compositional manner at each time point. Learning and inference in the infinite GPFA model is performed through variational expectation-maximisation, and we additionally propose scalable extensions based on sparse variational Gaussian Process methods. We empirically demonstrate that the infinite GPFA model correctly infers dynamically changing activations of latent factors on a synthetic dataset. By fitting the infinite GPFA model to population activities of hippocampal place cells during spatial navigation, we identify non-trivial and behaviourally meaningful dynamics in the neural encoding process." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Computational neuroscience", "neural data analysis", "Bayesian nonparametrics", "latent variable modelling;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f9aa50b2ef39d7d517f0d4c89b2bb4d2ccdeea0e.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/06972b22ef78ea30036818ac9922fea1fc71f935.zip" }, "title": { "value": "Discovering Temporally Compositional Neural Manifolds with Switching Infinite GPFA" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2iPvFbjVc3
Vision Language Model Based Caption Evaluation Method Leveraging Visual Context Extraction
main
Active
Image Captioning;Evaluation;Vision and Language;LLM as a judge
applications to computer vision, audio, language, and other modalities
3;3;3;3;5
4;4;4;4;2
3;2;3;2;2
2;2;2;2;2
3;3;3;3;3
3.4
3.6
2.4
2
3
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What are the main differences between the proposed method and SPICE/InfoMetIC? What unique innovations does this paper offer?\n- Why choose THumB instead of MSCOCO for evaluation?\n- The meanings of the numbers should be stated more clearly. For example, what do 5/5 and 80/100 mean?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper highlights the urgent need for developing new metrics, considering the fact that model generations have become so detailed that they often exceed the capability of the automatic evaluation metrics.\n\n- The paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents VisCE2, a vision-language model-based caption evaluation method designed to evaluate captions in a manner that aligns more closely with human preferences. VisCE2 focuses on visual context, which refers to the detailed content of images, including objects, attributes, and relationships. Experiments are conducted on several datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The literature review should be more accurate. For example, SPICE (Anderson et al., 2016) is mainly based on scene graphs rather than n-grams.\n\n- The novelty of this paper is limited. The proposed evaluation method consists two stages: visual context extraction and VLM-based caption evaluation. The first stage analyzes images based on scene graphs, similar to SPICE (Anderson et al., 2016). The second stage evaluates captions with VLMs, which is not new given existing works such as InfoMetIC (Hu et al., 2023) and CLIPScore (Hessel et al., 2021). While the combination of these two stages may be new, it may not meet the innovation standards expected for ICLR submissions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses.\n\nI will be happy to raise my score if authors address my concerns." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. A novel reference-free image caption evaluation method with VLMs.\n2. This paper is well-written and easy to follow.\n4. This paper proposes a visual context extraction module to describe the image as sentences, which also can be seen as a pseudo reference with abundant details.\n4. The authors conduct comprehensive experiments across multiple datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a reference-free image captioning evaluation metric, called VisCE$^2$. Specifically, VisCE$^2$ leverages pre-trained Vision-Language models (VLMs) to realize two-stage measurements for candidate captions. The first is Visual Context Extraction which uses VLM to obtain detailed descriptions including objects, object attributes and relationships. The second is Vision-LM Caption Evaluation which takes visual context, image and candidate captions as inputs to obtain an evaluation score. Experimental results demonstrate the superiority of this reference-image free method against other metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Figure 1 is not comprehensive. For the left part, RefCLIP-S[1] and RefPAC-S[2] can also accomplish the same measurement. On the other hand, better evaluation performances of VisCE$^2$ than BLEU-4, ROUGE, SPICE and CIDEr are not enough. While for the right part, authors should compare with PAC-S[2] to illustrate the superiority of this work.\n\n2. Line 49 - Line 51 describes the disadvantages about InfoMetIC, but evidence is lacked and can therefore be listed in Figure 1.\n\n3. It is suggested to evaluate the VisCE$^2$ and other reference-free metrics within different *image captioning methods* such as InstructBLIP, LLaVA and even GPT-4, as mentioned in Line 42-Line 44. This is a key step to comprehensively measure the effectiveness of VisCE$^2$. The authors can refer to Table 7 in PAC-S paper[2].\n\n4. Although this paper focuses on reference-free evaluation, it is also recommended to report the results of VisCE$^2$ when the reference captions are provided. \n\n5. An example of visual context given the image should be added into appendix. For instance, authors can list all the objects, object attributes and relationships about the image in Figure 2. \n\n6. In Table 2, it seems that authors only report the values of Kendall’s $\\tau_b$ on Flickr8k-Expert and Composite datasets. Kendall’s $\\tau_c$ should also be included. \n\n7. It is a little bit confusing to read Table 3 about ablation experiments. The first two settings are to prove the effectiveness of each component with the same backbone VLM (LLaVA-v1.5-13B). Then the current model (VisCE$^2$ ours) achieves the best scores across all datasets. But for the last two settings, authors aim to explore the influences of different backbone models or model sizes. From Table 3, GPT-4o can achieve **59.0** score on Composite dataset, higher than VisCE$^2$(**56.0**). THumB and Pascal-50S observe similar phenomenon. Hence, it would be better to split Table 3 into two small tables.\n\n[1] Hessel, Jack, et al. \"Clipscore: A reference-free evaluation metric for image captioning.\" arXiv preprint arXiv:2104.08718 (2021).\n\n[2] Sarto, Sara, et al. 
\"Positive-augmented contrastive learning for image and video captioning evaluation.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the questions in the weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method is easy to understand.\n- The proposed method shows favorable performance compared to existing evaluation methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Given the accelerating progress of vision and language modeling, accurate evaluation of machine-generated image captions remains critical. \nIn order to evaluate captions more closely to human preferences, metrics need to discriminate between captions of varying quality and content. \nHowever, conventional metrics fall short of comparing beyond superficial matches of words or embedding similarities; thus, they still need improvement. \nThis paper presents VisCE2, a vision language model-based caption evaluation method. \nThe authors’ method focuses on visual context, which refers to the detailed content of images, including objects, attributes, and relationships. \nBy extracting and organizing them into a structured format, the authors replace the human-written references with visual contexts and help VLMs better understand the image, enhancing evaluation performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty of the proposed method is weak. The only idea in this paper is to use a language model instead of CLIP to evaluate image captioning where the sentence generation performance of the VLMs is imperfect, unlike CLIP’s image-caption alignment performance. The authors suggest using an image captioning model to evaluate image captioning models. How can we evaluate the models that perform better than LLaVA? Using the proposed metric instead of CIDEr or CLIPS scores for future image captioning research is not convincing.\n\n- The discussion on design choice is also weak. In Table 3, the only discussions are on what VLM to use and what kind of visual context to use. However, there are other design choices to be considered. For example, when using language models, a proper prompt is essential. However, the authors didn’t analyze the choice of prompts for the language model. Moreover, whether the visual context extractors (object, attribute, relation) have the best design choice isn't justified. Therefore, it is not clear whether the proposed metric is the best possible method.\n\n- This paper lacks experimental analysis. 
When suggesting a new evaluation metric, it would be better to evaluate popular image captioning models, such as BLIP2, and analyze the performance trends to understand the unique characteristics of the proposed metric. Also, it would be better to evaluate the proposed metric in different settings, such as FOIL hallucination detection, as CLIPScore did." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the questions in Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes a new method to evaluate generated captions considering the objects, attributes, and relations within the images. The paper also makes great efforts to demonstrate the reliability of the evaluation method by comparing with human judgement. The results indicate this method is more consistent with human ratings than other metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method that uses the visual concepts extracted by MLLMs to help evaluate image captions, which makes the evaluation results more consistent with human ratings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The evaluation process heavily relies on the use of MLLMs in the following ways: 1. It utilizes MLLMs to extract visual concepts from images; 2. It employs MLLMs to generate evaluation scores for these image captions. If the candidate captions are generated by the same MLLM, the evaluation method may fail to provide a fair evaluation.\n\nIt seems that the evaluation time is significantly longer than the time required by other metrics, due to the use of MLLMs in two stages. How long does it take to evaluate one caption based on an image? Please provide concrete timing comparisons between the proposed method and existing metrics. Additionally, why is the image necessary in the Vision-LM Caption Evaluation stage? If the visual concepts are sufficient to represent an image, the evaluation could potentially be conducted without using the image, which might speed up the evaluation process. The paper should include an ablation study comparing performance with and without the image in the Vision-LM Caption Evaluation stage.\n\nAlso, the paper should add ablation studies on the prompts used, particularly regarding the maximum number of objects. According to the prompts shown in Table 4, the maximum number of objects extracted by the MLLM is set to 5. How could this choice affect the reliability of the evaluation method?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. When simply constructing the initial prompt (Vanilla) in a more refined way, e.g. by adding a chain-of-thought (CoT) prompt, would better assessment results also be achieved?\n\n2. Can the authors provide a comparison of runtime with existing evaluation methods?\n\n3. The visual context extracted in the first phase will contain some hallucinations, does this have an impact on the evaluation results?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper is well-organized and clearly written.\n2.\tThe proposed VisCE2 is intuitive. And evaluation experiments on multiple datasets demonstrate that the method outperforms existing evaluation metrics and meets human judgments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a VLM-based image caption evaluation method called VisCE2. The proposed method first obtains structured visual context by prompting the VLM, and then evaluates candidate captions based on the extracted visual context and input image. Extensive evaluation experiments show that VisCE2 outputs scores that have good agreement with human judgment and outperform existing evaluation metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is somewhat weak on innovation. The method is simply based on two rounds of prompts, which makes the VLM automatically evaluate image captions, and its core is based on the in-context learning ability of the VLM. Assuming that only one round of prompt is used and combined with the chain-of-thought (CoT) method to make the VLM automatically mine the visual context, while setting the last sentence generated by the VLM as the evaluation result, can this also lead to a good image caption evaluation performance?\n\n2. Since two rounds of prompts are required for the VLM to evaluating the image caption, resulting in a high time complexity of this evaluation method, which is not conducive to real-time evaluation. Can the authors provide a comparison of runtime with existing evaluation methods? \n\n3. Based on Table 3 of the ablation experiment, the enhancement brought by visual context does not seem to be particularly significant compared to the original prompt (Vanilla). Can the authors further analyze the reasons for this condition?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024vision,\ntitle={Vision Language Model Based Caption Evaluation Method Leveraging Visual Context Extraction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2iPvFbjVc3},\nnote={under review}\n}" }, "abstract": { "value": "Given the accelerating progress of vision and language modeling, accurate evaluation of machine-generated image captions remains critical. In order to evaluate captions more closely to human preferences, metrics need to discriminate between captions of varying quality and content. However, conventional metrics fall short of comparing beyond superficial matches of words or embedding similarities; thus, they still need improvement. This paper presents VisCE2, a vision language model-based caption evaluation method. Our method focuses on visual context, which refers to the detailed content of images, including objects, attributes, and relationships. By extracting and organizing them into a structured format, we replace the human-written references with visual contexts and help VLMs better understand the image, enhancing evaluation performance. Through meta-evaluation on multiple datasets, we validated that VisCE2 outperforms the conventional pre-trained metrics in capturing caption quality and demonstrates superior consistency with human judgment." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Image Captioning", "Evaluation", "Vision and Language", "LLM as a judge" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/44f0ab56bf9829e1a7c6bf21e339d2d2f4db6e77.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Vision Language Model Based Caption Evaluation Method Leveraging Visual Context Extraction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2iYVBqRHK4
DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback
main
Active
Restless Multi-Armed Bandits;Preference Feedback;Online Preference Learning
reinforcement learning
5;5;6;6
4;3;4;3
3;2;3;3
3;2;3;4
2;2;2;4
5.5
3.5
2.75
3
2.5
0
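The DOPL reviews below repeatedly invoke the record's central trick: converting pairwise preference probabilities into reward differences relative to a reference arm-state pair. As a reader's aid, here is a minimal sketch of that conversion under a Bradley-Terry assumption; the paper's exact link function and estimator may differ.

```python
import numpy as np

def reward_gaps_from_preferences(pref_vs_ref: np.ndarray) -> np.ndarray:
    """Recover reward differences from preference probabilities.

    Under a Bradley-Terry model, P(i preferred over reference) equals
    sigmoid(r_i - r_ref), so r_i - r_ref = logit(P). The reference entry
    itself has P = 0.5 and hence a gap of exactly 0.
    """
    p = np.clip(pref_vs_ref, 1e-6, 1 - 1e-6)  # guard the logit at the edges
    return np.log(p / (1.0 - p))

# Toy example: three (arm, state) pairs dueled against a reference pair.
print(reward_gaps_from_preferences(np.array([0.5, 0.73, 0.88])))
```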
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see the first box" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "see the first box" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work studies a new problem set-up, PREF-RMAB.\nFor me, the problem set-up is very incremental. It is quite similar to duelling bandits. The proposed set-up is more like duelling bandits with state transitions. \n\nThe writing needs to be improved. \n\nThe writing for the intro is too wordy. I wish to see more literature work discussions.\n\nI suggest putting the main theorem (Theorem 1) earlier. I can only see the theoretical result at Page 8. So, the structures for sections 4 and 5 are suggested to re-arrange. \n\nA minor thing: usually, RL or bandit theory works use $\\delta$ to be the failure probability. The greek letter $\\epsilon$ is for something else, like the error rate.\n\nI went through the proofs and some steps are novel, not that typical in bandit/RL literature. But I did not check the correctness of the proofs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "see the first box" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1.\tEstimating the whole preference matrix $F$ in DOPL algorithm requires large computational cost. Moreover, it would be beneficial to involve a thorough discussion on computational complexity of DOPL.\n2.\tIn experiments, the existing algorithms like MLE_WIBQL, MLE_LP fail to achieve sublinear regret. A detailed discussion on why these algorithms underperform in achieving sublinear regret would provide valuable insights." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper successfully integrates preference feedback within the RMAB framework, a novel approach that shifts away from traditional scalar reward dependency. 
Moreover, the presented algorithm DOPL achieves $\\tilde{O}(\\sqrt{T \\ln T})$ regret, supported by theoretical analysis.\n2.\tA relaxed LP-based direct index policy for DOPL is also provided to address computational intractability.\n3.\tThe writing is clean and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies restless multi-armed bandits with preference feedback, named PREF-RMAB. The authors propose the Direct Online Preference Learning (DOPL) algorithm achieving an $\\tilde{O}(\\sqrt{T \\ln T})$ regret, the first theoretical regret upper bound for PREF-RMAB. Moreover, the paper presents numerical experiments which further validate the efficacy of DOPL against various baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tEstimating the whole preference matrix $F$ in the DOPL algorithm incurs a large computational cost. Moreover, it would be beneficial to include a thorough discussion of the computational complexity of DOPL.\n2.\tIn experiments, existing algorithms like MLE_WIBQL and MLE_LP fail to achieve sublinear regret. A detailed discussion of why these algorithms underperform in achieving sublinear regret would provide valuable insights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "•\t1. In the 11th step of Algorithm 3 in Section C.3, when inferring \\(\\hat{\\mathbf{F}}_n^{*,k+1,\\text{inf}}(\\sigma_{s_n}, \\sigma_{s_*})\\), only one intermediate column \\( j \\) in \\(\\mathbf{F}\\) is selected to compute \\(\\hat{\\mathbf{F}}_n^{*,k+1,\\text{inf}}(\\sigma_{s_n}, \\sigma_{s_*})\\) based on \\(\\hat{\\mathbf{F}}_n^{*,k}(\\sigma_{s_n}, j)\\) and \\(\\hat{\\mathbf{F}}_n^{*,k}(j, ((*-1)|S|+\\sigma_{s_*}))\\). However, after selecting \\( B \\) arms at each step, the duels are conducted randomly, resulting in many comparison data points beyond \\( j \\) that are not used in the inference process. Could this lead to substantial unutilized information? Could more comparison data, beyond just \\( j \\), be leveraged to infer the preference between \\( s_n \\) and \\( s_* \\)? Alternatively, rather than performing duels randomly, might strategically choosing duel pairs improve performance?\n\n•\t2. In Eq. (3), the authors define \\(\\pi^{opt}\\) as the solution of (1) with scalar rewards, but the footnote states that the regret is with respect to preference feedback, which seems contradictory. This part is unclear to me.\n\n•\t3. In Eq. (46), the suboptimality is accumulated over \\( K \\) episodes. However, since \\( \\omega^k_n \\) is a probability measure and \\( Q_n(s) \\) represents the relative reward of arm \\( n \\) in state \\( s \\), which involves the reward incurred at a specific step \\( h \\) within episode \\( k \\), why doesn’t \\( h \\) appear in this decomposition?\n\n•\t4. 
In Lemma 6, the authors use Lemma 11 to argue that term0 is negative. However, I find this reasoning unclear, as \\({\\pi^*}\\) does not appear to align with \\(\\tilde{\\pi}\\) as defined in Lemma 6. Specifically, \\(\\mu_{\\pi^*}\\) represents the optimal solution of Eq. (6)-(9), while \\(\\tilde{\\pi}\\) is the index policy developed from \\(\\mu_{\\pi^*}\\) to satisfy the hard constraint. Therefore, I am uncertain that Lemma 11 can indeed be used to prove Lemma 6, and concluding that term0 is negative is not straightforward.\n\n•\t5. In the proof procedure following Eq. (49), from the fourth to the fifth line, the inequality \\(\\sum_{k=1}^K\\sqrt{\\frac{1}{Z^k_n(s,a)}} \\leq \\sqrt{Z^K_n(s,a)}\\) appears incorrect. For instance, if the algorithm visits arm \\( a \\) at state \\( s \\) once at the beginning and never revisits this state, it would hold that \\( Z^1_n(s,a) = \\dots = Z^K_n(s,a) = 1 \\), yielding \\(\\sum_{k=1}^K\\sqrt{\\frac{1}{Z^k_n(s,a)}} = K\\), which is indeed greater than \\( \\sqrt{Z^K_n(s,a)} = 1\\). If I have misunderstood this part, please clarify.\n\n•\t6. In the sentence between lines 1398 and 1399, I think the statement \\(\\sum_{n=1}^N\\sum_{(s,a)}Z^T_n(s,a) \\leq NT\\) should instead be \\(\\sum_{n=1}^N\\sum_{(s,a)}Z^T_n(s,a) = T\\), as the total visits across all arms and states should sum to \\(T\\). In fact, only under this revised statement can the inequality (c) above this sentence be satisfied; otherwise it should be \\(\\sum_{n=1}^N\\sqrt{\\sum_{(s,a)}Z_n^K(s,a)}\\leq N\\sqrt{T}\\). This confusion also appears in the sentence from 1503 to 1505. If I have misunderstood this part, please clarify." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Although RLHF has recently gained significant attention due to its applications in large language models and robotics, this work is the first to consider a preference-based reward model in the restless bandit problem, opening the door for RLHF to be applied much more broadly.\n\n2. By establishing a connection between pairwise preferences and reward values, the authors transform the reward value of each arm and state into a measure based on the preference probability between this state and a reference arm and state, which is intriguing. Additionally, the algorithm can infer the preference between the element \\( j \\) in the preference matrix \\(\\mathbf{F}\\) and the reference element \\(s_*\\) without directly comparing them, but rather through an intermediate comparator. This clever design reduces the complexity to that of directly observing the reward." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the PREF-RMAB model, which observes only the preference between two arms rather than the exact reward of each arm in the restless multi-armed bandit problem. By expressing the reward function of any arm n in any state as the sum of a reference reward and a function related to the preference probability between arm n in state s and a reference arm in a reference state, the authors develop a direct index policy based on preference data to choose which arms to activate in an online manner. They establish an \\(\\mathcal{O}(\\sqrt{NT|S|\\ln(|S||A|NT)})\\) regret bound for the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\t1. 
A question remains as to whether a preference-based model can outperform direct reward estimation, and whether we really need a preference-based model in the RMAB problem. In the first two examples presented in the paper, APP MARKETING and CPAP TREATMENT, while reward data is challenging to estimate accurately and may contain substantial noise, it can still be estimated in some form. Conversely, since preference data inherently provides less information, it is unclear whether incorporating preference data can improve performance over direct reward estimation. Most papers on RLHF use preference feedback to shape the reward for different trajectories in robotics or large language models (LLMs), where trajectory rewards are inherently complex and require function approximation methods to estimate the reward function. However, the RMAB model studied in this paper is a tabular MDP, where rewards can be estimated through repeated sampling.\n\n•\t2. In Algorithm 3 of Section C.3, at each step \\( h \\), the algorithm performs \\( (B-1) \\) random duels to obtain the preference data. Then, when constructing the preference between \\(s_n\\) and the reference \\(s_*\\), only the preferences between \\(s_n\\) and \\( j \\), and between \\( j \\) and \\(s_*\\), are used. It appears that \\( s_n \\) could be compared with many other columns \\( i \\) in the \\( F \\) matrix, but the algorithm does not leverage this data. Consequently, Algorithm 3 makes inefficient use of the available information.\n\n•\t3. The MDP considered in the paper is essentially a tabular MDP, and the regret scales with the square root of \\( |S| \\) (the size of the state space), which may be inefficient for large state spaces." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tThe important results Lemmas 4 and 5 are based on Lemma 1. However, it is unclear why there is only a single reference arm * and a single reference state in Lemma 1. In the RMAB setting, the DM selects B arms at each time slot, so the use of a single reference arm seems inconsistent.\n2.\tIn Eq. (4), if $\\epsilon=o(1)$, the confidence width becomes arbitrarily large, while if $\\epsilon=\\Theta(1)$, the probability becomes very small. How do the authors balance this trade-off? A more detailed discussion of the setting for $\\epsilon$ would be helpful." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe authors present novel approaches to address the PREF-RMAB problem. 
The results in Lemmas 4 and 5 are particularly interesting and have the potential to inform future algorithmic design.\n2.\tThe proposed DOPL algorithm works well on the PREF-RMAB problem.\n3.\tI understand that analyzing the regret bound of the RMAB problem with preference feedback is challenging, so the inclusion of this theoretical bound is commendable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the restless multi-armed bandit (RMAB) problem with preference feedback (PREF-RMAB) rather than direct reward estimation, motivated by real-world applications like app marketing and CPAP treatment. To address the problem that some arms in some states may not be visited frequently, the authors propose a new method to infer the empirical average preference of an arm via the other arms’ empirical preference estimates. Additionally, the authors transform the original reward-based optimization problem into one expressed directly in terms of preference feedback. Using this transformation, the authors develop a low-complexity index policy for decision-making. They also provide theoretical analysis of the regret, establishing a regret bound of $\\tilde{\\mathcal{O}}(\\sqrt{T\\ln T})$. Finally, the authors conduct experiments to verify the performance of their proposed DOPL algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tAlthough the authors make a strong effort to illustrate potential real-world applications of the PREF-RMAB problem, the justification remains unconvincing. For instance, in the app marketing scenario, users’ state transitions do not satisfy the memoryless Markov chain property, as a user currently in state $s_4$ cannot directly transition to $s_1$, and time tracking is ambiguous. Similar concerns apply to the other examples.\n2.\tThe writing can be further improved. For example, \n\n (a) Adding more detail on the composition of the preference matrix $\\mathbf{F}$ would improve clarity.\n\n (b) Eq. (2) needs to be improved, as the notation is confusing. \n\n (c) The objective function (6) is not easy to follow. Please define $\\mu^{\\pi}$ and $\\mu_n$ first. I misunderstood it as a myopic problem until I saw the definition of $\\mu^{\\pi}$. \n\n (d) I think Lemmas 4 and 5 are more important than Lemmas 1 and 2. The authors can change them into propositions. \n\n (e) Lemma 2 can be improved by defining $Q_n(s)$ first and then presenting the result in Lemma 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dopl,\ntitle={{DOPL}: Direct Online Preference Learning for Restless Bandits with Preference Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2iYVBqRHK4},\nnote={under review}\n}" }, "abstract": { "value": "Restless multi-armed bandits (RMAB) have been widely used to model constrained sequential decision-making problems, where the state of each restless arm evolves according to a Markov chain and each state transition generates a scalar reward. However, the success of RMAB crucially relies on the availability and quality of reward signals. Unfortunately, specifying an exact reward function in practice can be challenging and even infeasible. 
In this paper, we introduce Pref-RMAB, a new RMAB model in the presence of preference signals, where the decision maker only observes pairwise preference feedback rather than scalar rewards from the activated arms at each decision epoch. Preference feedback, however, arguably contains less information than the scalar reward, which makes Pref-RMAB seemingly more difficult. To address this challenge, we present a direct online preference learning (DOPL) algorithm for Pref-RMAB to efficiently explore the unknown environments, adaptively collect preference data in an online manner, and directly leverage the preference feedback for decision-making. We prove that DOPL yields a sublinear regret. To the best of our knowledge, this is the first algorithm to ensure $\\tilde{\\mathcal{O}}(\\sqrt{T\\ln T})$ regret for RMAB with preference feedback. Experimental results further demonstrate the effectiveness of DOPL." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Restless Multi-Armed Bandits", "Preference Feedback", "Online Preference Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7567090398a5bc7e1a5ebaabc9c403025daeb472.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DOPL: Direct Online Preference Learning for Restless Bandits with Preference Feedback" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
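The reviews above repeatedly discuss two mechanisms attributed to DOPL: recovering each arm's reward relative to a reference arm/state from pairwise preference probabilities, and inferring an unobserved preference through an intermediate comparator \( j \). Below is a minimal numerical sketch of both ideas, assuming a standard Bradley–Terry preference model; the link function, the toy reward values, and all variable names are illustrative assumptions, not the authors' actual DOPL construction.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

# Illustrative latent rewards for four (arm, state) pairs; index 0 is the reference.
r = np.array([0.0, 0.7, -0.3, 1.2])

# Bradley-Terry preference matrix: F[i, j] = P(i is preferred over j).
F = sigmoid(r[:, None] - r[None, :])

# Relative rewards recovered directly from preferences against the reference:
r_rel = logit(F[:, 0])  # equals r - r[0]

# Inferring an unobserved comparison through an intermediate comparator j:
# under Bradley-Terry, log-odds are additive along a chain of comparisons.
i, j, ref = 3, 1, 0
F_inferred = sigmoid(logit(F[i, j]) + logit(F[j, ref]))

print(np.allclose(r_rel, r - r[0]))       # True
print(np.isclose(F_inferred, F[i, ref]))  # True
```

The additivity of log-odds under Bradley–Terry is what makes chaining through an intermediate comparator exact in this toy setting, which matches the reviewers' description of preferences being inferred through an intermediate comparator rather than by direct comparison.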
2jEiFTLRwX
Enhancing Perception Capabilities of Multimodal LLMs with Training-Free Fusions
main
Active
Multimodal Large Language Model;Model Integration
foundation or frontier models, including LLMs
5;5;5;5
5;5;4;4
2;3;2;3
2;2;2;2
3;3;2;3
5
4.5
2.5
2
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Table 7, can you provide clarification on why there is a drop in numbers for MGM-SliME-LLaVA-7B ? Overall, why is the performance gain of 3 Models integration is not much compared to the 2 models ensemble ?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Makes 3 significant observations on VLM ensembles: multi-encoder pays attention different complimentary regions of the image, visual embeddings of the vision encoders are better aligned when trained with same LLM , delta parameter merging of different LLM's help leverage different vision encoders from different MLLM family. These observations help them devise VisionFuse method. \n- Training-Free Fusion Approach: VisionFuse addresses a critical need to enhance MLLMs’ perceptual abilities without incurring additional training costs. This \"training-free\" feature is a significant contribution that helps plugging in diverse models during deployment.\n- The authors conduct exhaustive experiments including inference time/ accuracy to show the effectiveness of multi-encoder, LLM ensemble, token pruning (which model to prune more from) and ablations on the LLM ensemble techniques." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a new method: \"VisionFuse\" that ensembles different MLLMs by concatenating vision tokens and delta parameters of LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper shows exhaustive experiments on combining two MLLM families : SLIME and MGM, it is unclear how the method will scale with more than 2 MLLM due to complexity of vision token length as noted in paper. Especially, as shown in Table 8, the VisionFuse will not work when there is huge difference in the delta parameters of the LLMs. This limits the scope of this method to generalize to different MLLMs. Can the authors propose planned solutions to make the fusion more robust for any MLLM ?\n- Novelty: In terms of novelty of the fusion method : the vision token concatenation was from [1], and the delta parameter integration for LLM is from [2]. Hence, the paper does not technically contribute towards the fusion methodology/ algorithm itself. \n- In fig.4, there is an ablation to show the importance of complimentary features of encoders. It is unclear how to choose encoders that have complimentary features ?\n\n\n\n1. Eagle: Exploring the design space for multimodal llms with mixture of encoders. arXiv preprint arXiv:2408.15998, 2024.\n2. Editing models with task arithmetic. In ICLR. OpenReview.net, 2023. 
URL" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- After fusion, the resulting MLLM processes much longer visual inputs compared to the base models, as it concatenates vision features from multiple vision encoders into a single sequence. A relevant question arises: what if, instead of fusion, we simply increase the length of visual tokens in the base model (e.g., by expanding the input resolution)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- VisionFuse is a training-free method that can be directly applied to different models within the same MLLM family.\n- The evaluation results in Table 1 demonstrate the effectiveness of the VisionFuse method.\n- The authors also perform extensive experiments and ablation studies to further assess the method's effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces VisionFuse, an integration framework designed to enhance the visual perception capabilities of multimodal large language models (MLLMs) by merging different models from the same model family. The approach is built on three key observations: (i) Different MLLMs focus on varying regions of the same visual input for the same query; (ii) The visual feature distributions of encoders within an MLLM family show closer alignment; and (iii) Merging language model parameters helps align the language model with different vision encoders. The authors evaluate VisionFuse on the MGM and SliME models, demonstrating a significant improvement achieved by merging the two models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- As shown in Table 1, the proposed method’s improvements on stronger MLLMs are more limited compared to smaller models. This suggests that the method may not perform as effectively on top-tier models. Additionally, the largest model evaluated in Table 1 is only 8B, which is relatively small compared to current state-of-the-art MLLMs. It would be beneficial for the authors to test the method on larger models with top-tier performance (such as LLaVA-OneVision-Qwen2-72B, LLaVA-NeXTVideo-Qwen2-72B, and Qwen2-VL 72B), as this would help demonstrate the scalability of the proposed approach.\n- The benchmarks chosen in this paper are mostly from general domains and are somewhat outdated. More recent and vision-centric benchmarks, as well as new MLLM benchmarks, are now available. These newer, more challenging benchmarks would better reflect the true capabilities of the proposed method." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The questions raised in this section are the same as the weaknesses outlined above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper introduces a training-free method called VisionFuse, designed to enhance the perception capabilities of MLLMs. This approach enables the utilization of multiple vision encoders from various MLLMs by merging the parameters of their language models. Experiments demonstrate that this method achieves a notable average improvement of over 4% across multiple benchmarks.\n2.\tThe article presents three intriguing insights that could inspire researchers in the development of MLLMs. The visualizations and discussions provided are comprehensive and insightful.\n3.\tThe significance of this work is good. By demonstrating a practical and efficient approach to integrating diverse vision encoders from various MLLMs into a cohesive framework through the merging of language model parameters, the paper not only advances MLLMs in multimodal applications but also enriches the broader field of vision-language integration with valuable insights." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces VisionFuse, a novel training-free method that efficiently utilizes multiple vision encoders from off-the-shelf MLLMs to enhance visual perception. It offers some intriguing insights. For instance, even when given the same query and image, different MLLMs focus on distinct regions. Furthermore, the authors discover that the feature distributions of vision encoders within an MLLM family are highly aligned. Leveraging these insights, they merge the parameters of language models from various MLLMs, enabling a single language model to effortlessly align with multiple vision encoders. Consequently, the proposed method achieves an average performance increase of over 4% when integrating MiniGemini-8B and SLIME-8B. Overall, the proposed VisionFuse method demonstrates the efficiency of merging parameters from multiple MLLMs, thereby harnessing the strengths of various different encoders in a unified approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe authors propose that integrating a Multimodal Encoder with VisionFuse enhances the capabilities of MLLMs, as indicated in Equation (4), which suggests the potential to handle more than two MLLMs. However, the primary experiments focus on the integration of two MLLMs, such as MGM-8B and SLIME-8B. 
Therefore, the question arises: when integrating more than two MLLMs, how should the fusion coefficients be balanced and optimized to ensure effective integration?\n2.\tDoes the paper discuss methods that can support the integration of scaled-up models, such as 65B or 72B models?\n3.\tFrom Figure 3, it appears that the enhancement through mixed parameter fusion is biased towards the selection of visual encoders. If an unsuitable visual encoder is used, it seems that performance could plummet. Are there any guidelines in practical applications for selecting the appropriate MLLMs for fusion enhancement without causing a decline in performance?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the reason for choosing MGM+SliME for most of the experiments instead of using simpler models like LLaVA-1.5?\n2. For the MGM-VILA-7B in Table 8, why do the performances on TextVQA and MME-p increase while others decrease?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The general idea of combining multiple MLLMs with limited additional cost is meaningful.\n2. The observations and discussions could provide some insights for the community.\n3. The overall writing is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a training-free model ensemble method for multimodal LLMs by concatenating the vision tokens and merging the LLM weights. Through exploratory experiments, the paper makes several observations about MLLMs: 1) different MLLMs focus on different image regions; 2) vision features from MLLMs with the same base LLM exhibit similar distributions; 3) merging LLM weights is critical for combining vision tokens from different MLLMs. Based on these observations, the paper further proposes the VisionFuse method, which combines vision tokens from different MLLMs by concatenation and merges the LLMs' weights. Experiments combining MGM and SliME show the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The overall positioning of the paper is not quite appropriate. The paper positions itself as combining different vision encoders, so I expected to see combinations of different types of vision encoders (e.g. CLIP, Siglip, DINOv2 ...) which are shown to be effective in Eagle, BRAVE, and Cambrian-1 by providing more complementary vision information. However, the overall method is more like a general MLLM merging method. 
The gain of the proposed method comes from different aspects: different vision information due to different preprocessing and compression methods; the LLM ensemble (different training data & training randomness); and different attention patterns from the LLM to vision tokens.\n2. The generalization ability of the proposed method is not well verified, and experiments are mainly conducted based on MGM+SliME. The paper should include more experiments with different MLLM combinations and different numbers of MLLMs in each combination to show the effectiveness of the proposed method.\n3. The paper claims that one big advantage of the proposed method is that it does not require training. However, this relies on the assumption that you already have proper MLLMs with the same base LLM and distinct training data + vision processing structures. However, methods like Eagle only need to train the MLLM with different vision encoders once. \n4. One major disadvantage of the proposed method is the additional token length, which is especially severe when combining more than two MLLMs or MLLMs with long token lengths. The token pruning method used as a mitigation approach is still not optimal and might hurt the performance in certain tasks (e.g. DocVQA, InfoVQA, ChartQA) or with MLLMs that already have compressed vision tokens with little redundancy." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Perception Capabilities of Multimodal {LLM}s with Training-Free Fusions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2jEiFTLRwX},\nnote={under review}\n}" }, "abstract": { "value": "Multimodal LLMs (MLLMs) equip language models with visual capabilities by aligning vision encoders with language models. \nExisting methods to enhance the visual perception of MLLMs often involve designing more powerful vision encoders, which requires re-aligning these vision modules with the language model, leading to expensive and time-consuming training processes.\nIn this paper, we introduce VisionFuse, a novel integration framework that efficiently utilizes multiple vision encoders from off-the-shelf MLLMs to enhance visual perception without requiring additional training.\nOur approach is motivated by the observation that different MLLMs tend to focus on distinct regions of the same query and image. Moreover, we find that the feature distributions of vision encoders within an MLLM family, a group of MLLMs sharing the same pretrained LLM, are highly aligned.\nBuilding on these insights, VisionFuse enriches the visual context by concatenating the tokens generated by the vision encoders of selected MLLMs within a family. By merging the parameters of language models from different MLLMs, VisionFuse allows a single language model to align with various vision encoders, significantly reducing deployment overhead.\nWe conduct comprehensive evaluations across multiple multimodal benchmarks using various MLLM combinations, \ndemonstrating substantial improvements \nin multimodal tasks. Notably, when integrating MiniGemini-8B and SLIME-8B, VisionFuse achieves an average performance increase of over 4\\%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Model", "Model Integration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/211be681f851148ea56f4d78c617366bcef8f389.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Enhancing Perception Capabilities of Multimodal LLMs with Training-Free Fusions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2jTdHYuguF
MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark
main
Active
Evaluation;Multimodal Understanding;Multimodal LLMs
datasets and benchmarks
5;5;5;6;6
5;3;4;3;4
2;3;2;3;3
2;3;2;3;3
3;3;3;3;3
5.4
3.8
2.6
2.6
3
-0.327327
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "A more thorough justification for the OCR requirement and a clearer explanation of the new benchmark's significance could enhance the paper’s impact." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Significance: MMMU-Pro addresses critical gaps in existing benchmarks, promoting deeper multimodal understanding over shallow pattern recognition. It sets a higher evaluation standard, likely to drive future research and model development.\n- Quality: The paper rigorously evaluates MMMU-Pro across multiple state-of-the-art models, showing significant performance drops that underscore the benchmark’s challenge. \n- Insights: Experiments with methods like Chain of Thought reasoning and OCR prompts enrich the analysis, verifying the benchmark’s effectiveness in highlighting model limitations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MMMU-Pro, a more robust version of the MMMU benchmark. MMMU-Pro aims to more accurately assess multimodal models' true understanding and reasoning capabilities across diverse academic domains by addressing limitations found in the original MMMU. The authors achieve this through three main enhancements: (1) filtering out questions that can be answered using only text, ensuring models rely on multimodal input; (2) expanding the number of answer options to reduce reliance on guessing; and (3) introducing a vision-only input setting where questions are embedded in images, challenging models to integrate visual and textual information. These modifications result in a more rigorous benchmark that better approximates real-world scenarios. Experimental results demonstrate that MMMU-Pro challenges existing models more, revealing performance drops across multiple models and encouraging further exploration in multimodal understanding and reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unclear Justification for OCR Requirement: One of MMMU-Pro's main contributions is embedding text within images to increase difficulty by requiring OCR. However, this addition may detract from the benchmark’s core goal of evaluating multimodal understanding, as it primarily tests the model’s OCR capabilities rather than its deeper multimodal comprehension. 
Although it is true that embedding text within images is more realistic, whether the extra difficulty from OCR is significant for LMMs needs more justification, as the extra focus on OCR could potentially obscure the true reasoning ability of models that struggle with OCR but perform well in multimodal integration tasks.\n\nLimited Impact on Model Performance Ranking: While it’s acceptable for a benchmark to yield similar performance scores, MMMU-Pro does not alter the ranking of current models, nor does it reveal new insights into their strengths and weaknesses. This lack of differentiation reduces the benchmark’s ability to provide fresh perspectives on model capabilities, potentially weakening its contribution as an evaluation tool." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper addresses key limitations in existing benchmarks like MMMU. By introducing a vision-only input mode, MMMU-Pro uniquely challenges models to process visual and textual information in a more realistic, integrated manner. This work also enhances question difficulty and mitigates model reliance on shortcuts, providing an essential tool for testing and advancing multimodal AI.\n\n- The clarity of the paper is strong, with well-organized sections detailing the benchmark's construction and evaluation.\n\n- Additionally, the paper examines the impact of Chain of Thought prompting and OCR on performance within the proposed benchmark, further investigating the limitations present in current MLLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces MMMU-Pro, an enhanced multimodal benchmark designed to rigorously test AI models’ understanding and reasoning by addressing limitations in the original MMMU benchmark. MMMU-Pro utilizes a three-step process to improve robustness: filtering questions answerable by text-only models, increasing candidate options to prevent guesswork, and implementing a vision-only input mode that embeds questions within images, thus requiring integrated visual and textual processing. Experimental results show a significant drop in model performance compared to MMMU, highlighting multimodal challenges. The study further investigates Chain of Thought prompting and OCR effectiveness, identifying areas where current models struggle, and setting directions for future multimodal research." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One limitation is that in the vision-only setting, images are manually captured photos and screenshots over a simulated display environment, but only differences in backgrounds, font styles, and font sizes are considered. However, the diversity of real images should also account for factors such as varied lighting conditions and different camera angles (e.g., rotated text in photos).\n\n- While the paper discusses the Chain of Thought (CoT) prompting and OCR’s impact, these evaluations could be expanded to clarify where CoT specifically improves performance. For example, breaking down CoT's impact across different question types or modalities could reveal deeper insights, guiding future model improvements.\n\n- Moreover, the analysis would benefit from more nuanced evaluation metrics that go beyond accuracy, such as tracking misinterpretation rates or identifying where models are most prone to visual-textual integration issues. This additional layer of analysis could provide more actionable insights for researchers looking to address specific multimodal weaknesses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Regarding the quality of options: Expanding the number of options does indeed reduce model performance. How do you ensure the quality and diversity of these additional options? If there is a method, could you elaborate further on the validation process?\n\nGiven the high construction cost of the benchmark, is it possible to reduce human effort through automated data generation or other technical means? For instance, could models be used to create new visual inputs or options?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Rigorous Evaluation:Using multiple LLMs to vote and filter out questions that can be solved by text-only models does enhance the benchmark’s ability to reflect more accurate visual capabilities. Overall, this benchmark is indeed more complex and better demonstrates the model’s ability to integrate text and visual information.\n\nThe findings and detailed analysis suggest avenues for future improvements, such as better vision-text integration strategies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is an extension of MMMU. The evaluation consists of three steps: 1) filtering out questions that can be answered by text-only models, 2) augmenting candidate options to reduce the chances of guessing correctly, and 3) introducing a \"vision-only input\" setting to challenge models to comprehend both visual and textual content simultaneously. 
\nExperimental results demonstrate a significant drop in model performance on MMMU-Pro" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Expanding the number of options is an obvious way to reduce model performance. The emphasis should be on demonstrating the quality of these expanded options. However, the authors only mention this briefly with a single example.\n2. The work appears straightforward, with most of the effort concentrated on non-technical human labor, making the overall contribution less innovative." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the authors provide more insightful analysis of the proposed benchmark?\n2. It would be good to see the authors' efforts on how to improve open-source models' performance on MMMU-Pro." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed benchmark assesses multimodal models' understanding and reasoning capabilities in a more rigorous manner.\n2. The data collection pipeline is credible due to the engagement of humans.\n3. Experiments are comprehensive, assessing current models' performance more accurately." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new benchmark, MMMU-Pro, for more accurate and rigorous assessment of a model's true multimodal understanding and reasoning capabilities by filtering out questions that can be answered by an LLM directly, adding more options, and using vision-only input. Experimental results using MMMU-Pro show a significant performance drop in existing models. This paper provides a more rigorous evaluation tool." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As the main contribution of this paper is a benchmark, the authors should provide more data analysis on the collected MMMU-Pro and draw more insightful conclusions.\n2. Apart from the benchmark, the novelty is limited. This paper just tests many models on the proposed benchmark (I don't think the exploration of OCR prompts and CoT reasoning can be counted as a novelty). On the other hand, the dataset collection pipeline is something more like engineering. That's the reason why I think this paper's novelty is limited. Of course, this is not the only criterion for determining whether this paper can be accepted. The proposed benchmark does make a contribution to MLLMs' understanding and reasoning capabilities. My main concern is that the workload and contribution may not be sufficient to be accepted by such a top-tier conference. I may change my rating based on the rebuttal." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the above weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1/ The paper is easy to follow\n\n2/ The three upgrades upon MMMU are reasonable and can better examine the MLLM's capability in making sense and reasoning about vision\n\n3/ The paper evaluates a wide range of existing MLLMs and share some insights." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper MMMU-Pro presents an upgraded version of MMMU. MMMU-Pro improves MMMU via (1) filtering out questions answerable by text-only models, (2) augmenting candidate options, and (3) introducing a vision-only input setting where questions are embedded within images. All these are reasonable upgrades upon MMMU, and the paper also presents some nice insights based on existing models' performances. But the upgrades seem to be incremental and not sure whether the efforts are enough for one standalone work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1/ Fig 1 compares almost 20 models, is it necessary to compare so many models? We can probably derive similar conclusions & insights based on comparing just some representative models. In this era, new MLLMs come out every a few days. I understand from the view of maintaining a benchmark/competition, it is good to be inclusive while seems not helpful to have giant table and figures like Table 1 and Fig 1.\n\n2. Despite the 3 upgrades are reasonable, they seem to be incremental given the wonderful work of MMMU. I am not sure whether the efforts in this pro work are enough for one standalone paper. It could be just a nice technical report; while I guess if people want to pass it, I am also fine.\n\n3. The last upgrade of vision-only input is interesting and the analysis of OCR/COT are good. While it feels to be only just scratching the surface, and if the authors would like to create a more solid work, I would expect some deeper contributions e.g. design new model/algorithm that can better make sense of such purely vision input question, create a training dataset that can power existing MLLM to do much better job on this task." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mmmupro,\ntitle={{MMMU}-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2jTdHYuguF},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces MMMU-Pro, a robust version of the Massive Multi-discipline Multimodal Understanding and Reasoning (MMMU) benchmark. MMMU-Pro rigorously assesses multimodal models' true understanding and reasoning capabilities through a three-step process based on MMMU: (1) filtering out questions answerable by text-only models, (2) augmenting candidate options, and (3) introducing a vision-only input setting where questions are embedded within images. This setting challenges AI to truly \"see\" and \"read\" simultaneously, testing \\textit{a core human cognitive skill of seamlessly integrating visual and textual information}. Results show that model performance is substantially lower on MMMU-Pro than on MMMU, ranging from 16.8\\% to 26.9\\% across models. \nWe explore the impact of OCR prompts and Chain of Thought (CoT) reasoning, finding that OCR prompts have minimal effect while CoT generally improves performance. MMMU-Pro provides a more rigorous evaluation tool, closely mimicking real-world scenarios and offering valuable directions for future multimodal research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Evaluation", "Multimodal Understanding", "Multimodal LLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c87b3772407351456c008b205be546b9684e8fd9.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2jf5x5XoYk
GLoRa: A Benchmark to Evaluate the Ability to Learn Long-Range Dependencies in Graphs
main
Active
Graph Learning;Graph Neural Networks;Synthetic Benchmarks;Long-Range Dependencies
learning on graphs and other geometries & topologies
3;5;8;8
4;4;3;3
2;3;2;3
2;3;3;3
3;3;2;3
6
3.5
2.5
2.75
2.75
-0.942809
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The method is novel, and the paper is well-written. The methodology is easy to follow, and the experiment section is well-structured and clearly presented. Through dedicated experiments, the authors show that, in nearly all cases, the performance degradation with increasing dependency length cannot be attributed to any of the three phenomena: over-smoothing, over-squashing, or vanishing gradients. This finding opens up two directions for future research: identifying the true causes of this degradation and developing methods to address it." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an algorithm for generating a synthetic dataset for every dependency length and demonstrates how to use this benchmark to identify, with certain guarantees, the maximum dependency length that a graph learning system can learn. Additionally, the paper illustrates the application of the benchmark in the experiment section." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, the paper is good; however, some improvements are needed in figure presentation. For example, the position of subgraph titles should be consistent across Figures 2, 3, and 4." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you clarify why Transformer-based models perform poorly on GLoRa, even at small dependency lengths? Have alternative positional encodings or adaptations been considered?\n2. How does GLoRa handle variations in the number of “holes” within paths, and would testing different numbers of interruptions provide further insight into model performance?\n3. Are there plans to test models that perform well on GLoRa against real-world benchmarks requiring long-range dependencies to validate GLoRa’s practical transferability?\n4. How do various types of implicit GNNs perform on the proposed benchmark?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Novelty in Benchmark Design**: GLoRa provides a synthetic benchmark with strict, enforceable dependency-length requirements, filling an important gap in current graph benchmarks.\n- **Theoretical Guarantees**: The benchmark’s theoretical properties, including enforceable dependency lengths, are rigorously proven, making GLoRa a well-grounded tool for long-range dependency evaluation.\n- **Clarity and Structure**: The paper is well-structured, with clear explanations of the benchmark construction process and theoretical foundations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces GLoRa, a synthetic benchmark designed to evaluate the ability of graph neural networks (GNNs) to capture long-range dependencies. By generating controlled graph examples with enforceable dependency lengths, GLoRa addresses a key gap in current GNN benchmarks. The authors provide theoretical guarantees for GLoRa, showing that models must capture dependencies of exact lengths to perform well. The paper also presents an empirical evaluation of several GNN architectures (including vanilla, over-smoothing-mitigated, over-squashing-mitigated, and Transformer-based GNNs) on GLoRa, revealing that none of the models perform well beyond modest dependency lengths." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Disconnection Between Theory and Experiment**: The experiments do not fully validate the theoretical properties of GLoRa, such as the enforceable dependency lengths. Testing trained models across a range of dependency lengths or with varied “holes” in paths might provide empirical support for the benchmark’s theoretical claims.\n \n2. **Unexpected Performance of Transformer-Based Models**: Transformer-based GNNs perform poorly on GLoRa, even for small dependency lengths (e.g., \\(d = 3\\)). This contradicts their generally strong performance on other tasks, raising questions about whether GLoRa aligns with their strengths or if implementation details (like positional encodings) limit performance. Further exploration of encoding options or discussing the potential limitations of Transformers on GLoRa would clarify this discrepancy.\n \n3. **Limited Testing of Transferability and Practical Relevance**: GLoRa’s relevance to real-world tasks remains untested, as there are no experiments that transfer GLoRa-trained models to practical benchmarks requiring long-range dependencies. Testing transferability on benchmarks like social networks or molecular graphs would substantiate GLoRa’s practical utility.\n\n4. **Missing Important Evaluations for Implicit GNNs**: While the paper tests many GNN models, most of these GNNs do not claim to capture long-distance dependency. Various types of implicit GNNs have been demonstrated to capture long-distance dependency better, but the paper misses this important category of models. A more comprehensive evaluation on implicit GNNs will be helpful.\n\nWhile GLoRa is a theoretically grounded and novel benchmark for evaluating long-range dependencies in GNNs, the experimental design could be strengthened to better align with its theoretical properties and validate practical relevance. 
Specifically, testing models across varying dependency lengths, addressing the Transformer performance anomaly, and exploring GLoRa’s transferability to real-world tasks would greatly enhance the impact of this work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- **Q1**: The experimental setting is not fully clear to me. In L412 the authors state that they “generated 1000 positive and 1000 negative examples by Algorithm 1 for each $d \in \{3, \dots, 15\}$”. Does that mean that models are trained on $2000 \cdot 0.8 = 1600$ training samples for each depth $d$ or do you construct a joint training set of size $2000 \cdot 0.8 \cdot 13 = 20,800$ training samples and merely evaluate the test accuracy separately for each depth $d$?\n- **Q2**: The authors state in L465-466: “Finally and not surprisingly, all types of graph transformers cannot learn even very short dependencies”. Can the authors provide a more detailed insight into why this result is unsurprising? The GPS model, for example, uses a local message-passing module that should at the very least match the performance of the vanilla GatedGCN. I find that this warrants further analysis. One possible reason could be the possibly low amount of data seen during training; see related **Q1**." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **S1**: The authors give a clear definition of long-range dependency, which is often lacking in prior work.\n- **S2**: The authors benchmark a variety of baselines, from simple GNNs to more involved approaches, as well as transformers.\n- **S3**: The finding that none of over-smoothing, over-squashing, or vanishing gradients is the cause of the poor performance at long range is very interesting and deserves more attention in future research.\n- **S4**: The authors have identified a surprisingly simple problem setting that a variety of GNN architectures fail to solve. These findings could lead to interesting follow-up work which aims to understand why the models fail and how these limitations can be overcome." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors propose GLoRa, a benchmark generator for long-range dependency tasks on graphs. The authors overcome the limitations of existing work by precisely stating a definition for long-range dependencies and designing a benchmark that guarantees that models cannot solve the generated tasks unless they respect the long-range dependencies in a given graph. An empirical study on a variety of GNN and transformer baselines concludes that no architecture can perform well on the GLoRa tasks for dependencies longer than depth 10.
Further, the authors find that none of over-squashing, over-smoothing, or vanishing gradients is the cause of this poor performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **W1**: The authors argue for the need for a synthetic benchmark where long-range dependencies can be guaranteed. The authors argue that such guarantees are not given in real-world benchmarks. While I generally agree with the fact that in real-world benchmarks there may be shortcuts or simpler functions that avoid long-range dependencies but still correctly predict the labels, I am concerned that the present work proposes a benchmark for a problem they cannot identify in the real world. In particular, the authors argue in the introduction that long-range dependencies are crucial in many applications (recommendation systems, traffic prediction, fake news detection). However, if long-range dependencies are crucial in these applications I would argue that it would be more sensible to derive benchmarks from real-world data in these domains. Further, if the authors are concerned that in real-world data one cannot verify whether long-range dependencies are truly needed to solve a task, I conclude that the authors also cannot guarantee that their proposed notion of long-range dependencies (Definition 1) is actually useful in real-world applications. Hence, I ask the authors to justify the relevance of long-range dependencies in real-world problems or to argue otherwise how progress on GLoRa contributes to solving real-world problems in graph learning.\n- **W2**: The theoretical contributions are formally imprecise and no full proof is given for Theorem 1 (only a proof sketch). First, the authors should clearly state in L385 that Theorem 1 is only supported by a proof sketch. The authors say “proof” in the main paper but “proof sketch” in the appendix. Second, let me expand on what I mean by formally imprecise. The statement of Theorem 1 says “[For every probability $\mathcal{P}$] there exists a number $K$ such that a set $S$ of $K$ samples […] requires learning dependencies of length $d$ with probability at least $\mathcal{P}$.” It is not formally clear what it means for a set of samples to require learning dependencies of some length. I could guess that the authors mean that a model cannot separate the set into positive and negative samples unless the model finds the corresponding path of length $d$. However, the statement in its current form is not formally precise. The authors should either formally state their theorem and prove it, or replace the theorem with an informal argument supported by the proof sketch. If the authors decide to formally state their theorem they should carefully define what they mean by the statement above. It is perfectly fine to give an informal (but intuitive) version of the Theorem in the main text and precisely formalize the theorem statement and proof in the appendix. In this case, I recommend stating the theorem in the main text as \"Theorem 1 (informal)\" and then writing something like \"For a precise theorem statement and formal proof, see Appendix ...\"."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Figure 3 does not rule out the case of this type of oversmoothing: at the last layer of a GNN, it may be the case that most nodes in one graph have the same embedding. But, this embedding can be different across different graphs that you run a forward pass on.\n2. Why do the Transformers fail, and why do you say this is \"not surprising\"?\n3. Some important details, like how number of layers is chosen, is hidden in Appendix.\n4. What is the way forward for making GNNs that solve GLoRa?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Good criticism of existing benchmarks. Many of them can be solved in a graph independent way, or long-range-dependency independent way.\n2. Interesting that over-squashing is not a problem in directed GLoRa by design (number of paths constant and small).\n3. Experiments on many types of representative GNNs from different families.\n4. In my opinion, good synthetic data experiments were very much needed for this exact question (long-range dependences in graph learning). Doing this well can be quite helpful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new synthetic benchmark, GLoRa, to measure the ability for graph machine learning methods to learn long-range dependencies. For different depths of dependencies and difficulty levels, GLoRa can make synthetic tasks that do not have simple shortcuts that other long-range benchmarks suffer from. Experiments are conducted across many different GNN architectures from different families. It is argued that oversmoothing, oversquashing, and vanishing gradients are not the issues with learning these long-range dependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. For the definition of path-aware dependencies, isn't it easy to satisfy this for node classification functions on complete graphs, even though the connections are very short. In particular, there is no requirement in this definition for non-existence of paths.\n2. Unrigorous proof of Theorem 1\n3. 80% in Figure 2 is a bit arbitrary of a threshold, but this isn't a huge issue.\n4. Algorithm block is a bit hard to understand, but Figure 1 is clear at least." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024glora,\ntitle={{GL}oRa: A Benchmark to Evaluate the Ability to Learn Long-Range Dependencies in Graphs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2jf5x5XoYk},\nnote={under review}\n}" }, "abstract": { "value": "Learning on graphs is one of the most active research topics in machine learning (ML). Among the key challenges in this field, effectively learning long-range dependencies in graphs has been a particularly difficult problem. It has been observed that, in practice, the performance of many ML approaches, including various types of graph neural networks (GNNs), degrades significantly when the learning task involves long-range dependencies—that is, when the answer is determined by the presence of a certain path of significant length in the graph. This issue has been attributed to several phenomena, including, most prominently, oversmoothing, over-squashing, and vanishing gradient. A number of solutions have been proposed to mitigate these causes. However, evaluation of these solutions is complicated by the fact that existing benchmarks do not really test systems for their ability to learn tasks based on long-range dependencies in a transparent manner. In this paper, we design a synthetic benchmark that provably allows testing systems for this learning ability. We then evaluate state-of-the-art systems against it and conclude that none of them can claim that it can learn long-range dependencies well. We also observe that this weak performance cannot be attributed to any of the three causes, thus indicating that further investigation is necessary." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Learning", "Graph Neural Networks", "Synthetic Benchmarks", "Long-Range Dependencies" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a2d43a3575f9a825b5747c899f702a79530a162a.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "GLoRa: A Benchmark to Evaluate the Ability to Learn Long-Range Dependencies in Graphs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2jzhImk4br
Strategic Exploration for Inverse Constraint Inference with Efficiency Guarantee
main
Active
Inverse Constrained Reinforcement Learning;Exploration Algorithm;Sample Efficiency
reinforcement learning
1;5;6;6
5;3;3;3
1;2;3;3
1;2;2;3
1;2;3;2
4.5
3.5
2.25
2
2
-0.980196
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Some questions are included in the weakness section. \n\nThe PCSE approach, described in Algorithm 1, obtains an exploration policy pi_k by solving the optimization problem in Equation 9. In Equation 9, $\\Pi^r$ (rewards, not costs) is defined as \n\n$$\\Pi^r = \\{ \\pi \\in \\Delta: \\inf_{\\mu_0} \\mu_0^T (V^{r, \\pi} - V^{r, \\hat{\\pi}^*}) \\geq \\mathcal{R}_k \\}$$\n\nBecause these are the value function of rewards, I am confused why the difference should not be flipped, such that:\n\n$$\\Pi^r = \\{\\pi \\in \\Delta: \\inf_{\\mu_0} \\mu_0^T (V^{r, \\hat{\\pi}^*} - V^{r, \\pi}) \\geq \\mathcal{R}_k\\}.$$\n\nIn other words, why should the order of the two value functions not be flipped, given it is an infimum." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents novel sample complexity guarantees on ICRL problems in the setting where the transition is unknown. While the paper presents substantial notation, Section 4 does a good job of describing the lemmas in understandable terms. Section 5, especially 5.1 and 5.2, would benefit from similar elaboration. The steps in the proofs are mostly explained well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In applications such as robot learning, it is often the case that the learner (e.g. robot) must abide by certain safety constraints when learning to perform a task. Because such constraints can be hard to specify, methods of learning the constraints from demonstration have been proposed, an approach known as Inverse Constrained Reinforcement Learning (ICRL). Prior work has made one of the following assumptions: access to a known transition model or access to a generative transition model that can be queried at any state-action pair. The existing work that does not impose such assumptions has not examined efficiency and estimation errors. This paper proposes two algorithms for learning a set of feasible constraints that align with the expert preferences. Sample complexity bounds are presented for both algorithms. The algorithms are evaluated on Gridworld and Point Maze tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Several weaknesses are listed below, of varying importance.\n\nI believe the paper would benefit from a broader discussion of the Related Works. 
More specifically, how do the paper and the setting it considers compare to the following works?\n- Chou et al., “Learning constraints from demonstrations,” 2020.\n- Kim and Oh, “Efficient off-policy safe reinforcement learning using trust region conditional value at risk,” 2022.\n- Moskovitz et al., “ReLOAD: Reinforcement learning with optimistic ascent-descent for last-iterate convergence in constrained MDPs,” 2023.\n- Lindner et al., “Learning safety constraints from demonstrations with unknown rewards,” 2024.\n- Kim et al., “Learning shared safety constraints from multi-task demonstrations,” 2024.\n\nNearly all of the results in the Empirical Evaluation section (Sec. 6) are visually difficult to parse. For example, the UCB results are almost entirely hidden in Figure 3’s top and middle rows (rewards and costs, respectively). While including numerous baseline comparisons is beneficial, consider including different plots in the appendix to make the comparison more interpretable. In addition to an unclear comparison to the baselines in terms of discounted cumulative rewards and discounted cumulative costs, neither BEAR nor PCSE appears to beat the Random algorithm in terms of WGloU score. Overall, it is unclear to me what the takeaways from the empirical results are.\n\nThe paper assumes finite state and action spaces, as well as an infinite horizon. In the experiments, these assumptions do not always hold (e.g., Point Maze is a continuous environment). There is a brief mention in Appendix D.3 about the density model in the continuous space, but overall, the discussion of how the theoretical assumptions translate into the practical settings considered is lacking.\n\nIn Theorem 5.6, the sample complexity of PCSE is given by the minimum of the sample complexity of BEAR and a term dependent on the minimum cost advantage function. In the proof of Theorem 5.6, the paper states that the sample complexity of BEAR (Theorem 5.5) applies to PCSE because it is optimizing a tighter bound. The justification is Corollary C.6. Examining the proof of Corollary C.6, it is not clear how one is a tighter bound than the other.\n\nThe paper would benefit from further discussion of the optimality criterion. The first constraint, with the Q difference for completeness, “tracks every potential true cost function.” The second constraint, focused on accuracy, expresses that the learned cost function must be close to a true cost function. How does it “[prevent] an unnecessarily large recovered feasible set?” At a higher level, the paper would benefit from more motivation/discussion of why the estimation should be included in the optimization problem. In other words, why can we not naively solve the problem as though we had perfect estimates, and then handle the estimation errors exclusively in the analysis? As discussed above, Section 5 (especially 5.1 and 5.2) would benefit from non-technical elaboration in the style of Section 4.\n\nIn Equation 9, which defines the optimization problem of PCSE, the infimum is over distributions over the state space, rather than the state and action space.
More specifically, it is\n\n$\Pi^r = \{\pi \in \Delta: \inf_{\mu_0 \in \Delta^S} \mu_0^T (V^{r, \pi} - V^{r, \hat{\pi}^*}) \geq \mathcal{R}_k\}$\n\nrather than\n\n$\Pi^r = \{\pi \in \Delta: \inf_{\mu_0 \in \Delta^{S \times A}} \mu_0^T (V^{r, \pi} - V^{r, \hat{\pi}^*}) \geq \mathcal{R}_k\}$" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Please discuss challenges that you anticipate in scaling your approach to environments where both state and action spaces are continuous, and potential solutions to the challenges. \n2) Please include a discussion of how you might adapt your theoretical analysis and sample complexity bounds for high-dimensional continuous spaces in your future work." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Concrete theoretical analysis that is well detailed, along with good empirical results for the provided environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces two exploratory algorithms, BEAR and PCSE, for the Inverse Constrained RL problem setting, where constraint signals are learnt from expert policies. The approach recovers a set of feasible constraints that align with expert preferences. A theoretical analysis of these algorithms is provided, including sample complexity bounds." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although some experiments were performed in a continuous setting, it is unknown (not even addressed) how the algorithms scale in both continuous state and action spaces. The current test case is a simple environment with discrete actions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I reported some of my questions in the comments above."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper tackles an interesting problem setting that may have practical upside for relevant applications;\n- The paper addresses the strategic exploration problem in ICRL, which it has been previously studied in settings with known dynamics or a generative model;\n- The paper provides two algorithmic solutions and corresponding sample complexity results;\n- The paper includes a numerical validation, which is not that common in purely theoretical RL papers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses inverse constrained reinforcement learning, a problem in which we aim to infer a cost constraint by looking at expert's behaviour only. The specific setting works as follows: We can deploy a policy in the MDP to get a rollout of state transitions and expert's actions, but we cannot see the cost. We aim to infer a set of costs compatible with the expert's actions, which are assumed to be optimal, while minimizing the samples taken from the environment. The paper proposes two algorithmic solutions for this setting, a bonus-based exploration strategy called BEAR and a further refined version called PCSE, together with the analysis of their corresponding sample complexity and a brief empirical evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I summarize below some of my main concerns about the work. More detailed comments are below.\n- The paper dives into a theoretical analysis of a rather peculiar setting without providing proper motivations;\n- The paper seems to have some presentation issues: I understood (most of) the interaction protocol at the start of Section 5, but some details are still obscure, whereas the notation does not look always sharp and detailed. The sample complexity results include some terms for which the dependence with $S, A, \\gamma$ is not obvious;\n- The paper lacks a in-depth discussion of the results, e.g., how they relate with prior works on IRL or ICRL with generative models, the considered assumptions (especially deterministic policies), computational complexity of the algorithms, the technical novelty of the presented analysis;\n- The numerical validation does not seem to be conclusive. Most of the curves are not separated with statistical significance.\n\n**COMMENTS**\n\nMOTIVATION. The formulation of the setting could be more clearly motivated. While it is roughly clear the kind of applications that are target, it is less clear why some choices are made.\n- Is the discounted setting more interesting than finite-horizon for ICRL?\n- In which kind of applications we can expect to have unconstrained access to the MDP online, even though the expert acts under constraints? Do the authors have any example of an application with cost constraints that allows for unconstrained exploration?\n- Also the PAC requirement is somewhat questionable: Under approximation errors of the MDP and expert's policy we are not guaranteed that the optimal policy for the costs in the feasible set is \"safe\" to use in the true MDP (i.e., it would respect the true constraint). 
This is common in other inverse RL papers, but while some sub-optimality can be acceptable in unconstrained RL, some violations of the constraints are less acceptable in a constrained setting.\n\nPRESENTATION. The presentation is not always sharp in the paper. I am listing below some questions and suggestions on how I think it could be improved.\n- Some broader context could be added to the first sentence of the abstract, e.g., that we want to optimize an objective function under cost constraint(s);\n- The equation at l. 143 likely includes a typo. Definition 3.1 would also benefit from more context and a more formal introduction to the notation (e.g., what do the value functions mean exactly?). It requires quite a lot of time to be processed;\n- I could not fully comprehend Eq. 2. Are $\zeta$ and $E$ defined somewhere? If that is the case, perhaps it is worth recalling their meaning here;\n- The interaction setting should be introduced earlier than Sec. 5.1, and some aspects are still not clear then. How is the reward accessed/estimated?\n- Sec. 5.5: \"The above exploration strategy has limitations, as it explores to minimize uncertainty across all policies, which is not aligned with our primary focus of reducing uncertainty for potentially optimal policies.\" This is not fully clear and would benefit from further explanation. I thought the goal was to achieve the PAC requirement with minimal sample complexity. In general, the description of PCSE is not easy to process.\n\nTECHNICAL NOVELTY. BEAR looks like a rather standard bonus-based exploration approach, in which the main novelty seems to come from adapting the bonus expression to the ICRL setting. Can the authors describe whether other uncommon technical challenges arise from the specificity of the setting (especially w.r.t. prior works) and how they are addressed? I am not very familiar with the related literature on solving ICRL with a generative model, but in theoretical RL it is sometimes straightforward to get a \"strategic exploration\" result from a \"generative model\" result.\n\nDETERMINISTIC POLICY. Assuming the expert's policy to be deterministic in MDPs is reasonable, a little less so in CMDPs. Can the authors discuss this point? It looks like they think determinism is necessary. Can they prove that formally?\n\nCOMPARISON WITH PRIOR WORK. The paper should discuss how the presented sample complexity results compare with prior works in IRL, reward-free exploration, and, especially, ICRL with a generative model. Is the PAC requirement significantly different from prior works? Moreover, the $\sigma$ terms in the sample complexity may have hidden dependencies on $S, A, \gamma$...\n\nOTHER COMMENTS\n- ll. 175-181. Those considerations look informal if not incorrect. One can easily imagine optimal trajectories that do not fully overlap with the expert's one in the given example, whereas not all of the sub-optimal trajectories necessarily satisfy the constraints!\n- How can the C_k bonus be computed in BEAR?
It seems to include the advantage, but the estimation of the reward is not mentioned anywhere;\n- Are the described approaches computationally tractable?\n- An alternative yet interesting setting is the one in which the cost is also collected from the environment, but the constraint (threshold) is not known;\n- Some additional related works on misspecification in IRL https://ojs.aaai.org/index.php/AAAI/article/view/26766, https://arxiv.org/pdf/2403.06854 and sample-efficient IRL https://arxiv.org/pdf/2402.15392, https://arxiv.org/abs/2409.17355, https://arxiv.org/pdf/2406.03812 could also be mentioned.\n\n**EVALUATION**\n\nThe addressed setting looks technically challenging and of practical interest. I am currently providing a slightly negative evaluation to account for my confusion over some aspects of the paper, which the authors may resolve with their response." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The problem setting of designing provably-efficient algorithms for ICRL is really interesting in my opinion." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyses the problem of learning the constraints underlying some expert demonstrations in a provably efficient manner. Given that multiple constraint functions are compatible with the observed behavior, the learning target is the set of cost functions compatible with them." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper is based on Definition 3.1, which defines the notion of feasible cost set. However, because of the typos and the absence of explanation provided, it is impossible to understand in a formal manner what the feasible cost set is. Without this formal definition, the following results, and in particular Lemma 4.3, cannot be understood. Since all the theorems proved are based on Lemma 4.3, it is not possible to understand whether the results are correct or not.\n\nTYPOS:\n- 103: what is a space?\n- 107: there is an additional dot that is not needed\n- 137: $r$ should not be present in the tuple\n- 138: no need to write \"CMDP without knowing the cost\", because this notation has already been defined earlier\n- 139: the cost should be defined as bounded\n- 139: symbol $\Pi^*$ never defined\n- 141,143: bad definition of set\n- ...: definitely too many typos. Have the authors read the paper after having written it? Why do I and the other reviewers have to waste time reading your paper if not even you have read it?"
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper introduces a strategically efficient exploration framework for Inverse Constrained Reinforcement Learning problems with theoretically tractable sample complexity." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024strategic,\ntitle={Strategic Exploration for Inverse Constraint Inference with Efficiency Guarantee},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2jzhImk4br},\nnote={under review}\n}" }, "abstract": { "value": "In many realistic applications, the constraint is not readily available, and we need to infer the constraints respected by the expert agents from their behaviors. The problem is known as Inverse Constraint Inference (ICI). A common solver, Inverse Constrained Reinforcement Learning (ICRL) seeks to recover the optimal constraints in complex environments in a data-driven manner. Existing ICRL algorithms collect training samples from an interactive environment. However, the efficacy and efficiency of these sampling strategies remain unknown. To bridge this gap, we introduce a strategic exploration framework with guaranteed efficiency. Specifically, we define a feasible constraint set for ICRL problems and investigate how expert policy and environmental dynamics influence the optimality of constraints. Motivated by our findings, we propose two exploratory algorithms to achieve efficient constraint inference via 1) dynamically reducing the bounded aggregate error of cost estimation and 2) strategically constraining the exploration policy. Both algorithms are theoretically grounded with tractable sample complexity. We empirically demonstrate the performance of our algorithms under various environments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Inverse Constrained Reinforcement Learning", "Exploration Algorithm", "Sample Efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c155d5b60e01b9b3d5bb7a1cbb40df06117cd3b4.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/89a6450b4ba947ad8a36f36d69b27786821a195f.zip" }, "title": { "value": "Strategic Exploration for Inverse Constraint Inference with Efficiency Guarantee" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2kGKsyhtvh
Towards hyperparameter-free optimization with differential privacy
main
Active
Differential privacy;optimization;hyper-parameter tuning
alignment, fairness, safety, privacy, and societal considerations
3;5;6;8
3;3;3;3
2;2;3;4
3;3;3;3
2;2;2;4
5.5
3
2.75
3
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I have to read GeN Bu et al. (2023) again to understand the proposed method in this paper. And I cannot find Eq (5) of this draft in GeN Bu et al. (2023). What is \\omega in Eq (5) and (6)? Could you write the closed form solution for estimating \\eta? If not, why? \n\nI request the authors to clarify privacy accounting of their proposed method. Starting from the DP definition, e.g., what are their mechanism input and output? How are their “Loss Privatization” and “Gradient Privatization” composed? It looks to me Line 9 in Alg 1 is data dependent, but it is unclear whether it is considered in the DP algorithm or accounting. It is OK to use GDP and/or autoDP library, but I request the authors to detail how GDP/autoDP is used for this specific algorithm.\n\nMinor: the authors might consider the usage difference of \\citep and \\citet in writing." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Setting hyperparameters in DP optimization is an important topic for both modeling and privacy. \n- Experiments demonstrate the advantage of the proposed method compared to naively applying parameter free optimization methods like D-adaptation in DP optimization, and DP-hyper style algorithm by differentially privatizing hyperparameter search." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a method to estimate hyperparameters (i.e., learning rate) in differentially private optimization with gradient normalization (instead of gradient clipping). As learning rate is the main tuning parameter, the proposed optimizer is hyperparameter free. The proposed additionally differentially privatizes the loss (a scalar) for estimating the learning rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The technical part of the paper is generally hard to read. I am not confident the proposed method is technically correct." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "For DP hyper-parameter optimization, have the authors considered using gradients from backpropagation w.r.t learning rate to tune the learning rate privately?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed algorithm works pretty well in the experiment with not much additional cost in computation cost and privacy.\n2. The idea is simple yet effective, factorizing learning rate tuning during training using quadratic approximation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to tune learning rate privately based on quadratic approximation during DP-SGD training. It is shown that the proposed algorithm can achieve comparable performance with non-DP learning rate search." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed algorithm seems to still require a initial learning rate and the algorithm's sensitivity to the initialization seems missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) The line numbers are missing in the pdf, the authors may want to fix this. In the following I will try my best to describe the location of the typo\n\n\n2) In equation 6, I believe you mean to also use the privatized loss for L(w), currently it is un-privatized and hence the objective is not private. I can see from the algorithm you input to equation 6 privatized losses, so I presume this is a typo in equation 6\n\n3) In the opening paragraph of section 4.1, I do not believe saying Auto Clipping is equivalent to Rg = 0 is correct. This would mean you always clip to 0 gradient norm. I believe you can replace this by saying you always clip to gradient norm 1, which is consistent with equation 1.\n\n4) In algorithm 1 could you write the inputs to algorithm, which I take as initial values for: $\\eta$, $R_l$. This would help with reading/understanding the algorithm\n\n5) In line 8 of algorithm 1, can you replace $(-\\eta,0,\\eta)$ with set notation {$\\eta, 0,\\eta$} to be clearer how equation 6 is instantiated; at first I thought it was run over an interval.\n\n6) In line 9 of algorithm 1 I find it vague as stated; more precisely can you say you minimize the previous fitted quadratic?\n\n7) (Major) In section 5.1 experiment setup paragraph, can you explicitly state the deltas used in the experiments; I could not find this in the appendix either.\n\n8) (Major) In table 2, can you add error bars for the experiments? E.g., report 1 standard deviation\n\n9) (Major) Can you report the grid search range in the appendix?\n\n10) Can you explain what BitFit is briefly in the experimental setup paragraph; I believe this will clarify better the differing results for full finetuning\n\n11) I found the figures hard to read when I printed the paper; consider increasing the size and possibly moving one figure to the appendix." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) Hyperparameter selection in DP does not inherit the same rules of thumb as non-DP, and hence understanding hyperparameter selection for DP training is a core problem\n2) The results are strong and seemingly near optimal (given the non DP grid search results)\n3) The observation that a specific generic ML hyperparameter selection approach translates well to the DP setting seems important and novel to the DP community" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses how to improve the performance of DP optimization by improving the private selection of hyperparameters. This is done by always scaling the gradient to norm 1 and adjusting the learning rate for each training step privately. The selection of learning rate is done by adapting a rule used in non-DP literature which solves an approximate objective for the best learning rate, but making this private by privatizing the loss evaluation used in this learning rate objective. Extensive experiments shows this improves performance over alternative DP hyperparameter selections, and is often comparable to a non-DP grid search of hyperparameters. Further experiments show the computational overhead is minimal. Certain experimental details are missing in the text, though I believe this can be easily addressed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) In my opinion the presentation can be improved: in the questions I try and clarify several possible typos. I emphasize this as I originally had more issues with the claims made in the paper but was able to answer these by cross-examining statements elsewhere, which could have been more easily resolved with some changes to writing.\n\n2) Certain details about the experimental setup are missing, such as exact DP $\\delta$ values used, and the range of hyperparameter grid search. I ask questions about these in the questions section (labelled Major), and believe they can be easily addressed, and am willing to increase my score given they are addressed in the rebuttal." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. In Theorem 1, when $R_l \\approx L$, the clipping bias is close to zero, where $L = \\frac{1}{B} \\sum_i L_i$ is the public per-sample gradient (right?), which seems to be a CLT-based result (please correct me if I’m wrong). My questions are: \n- (1) Could the authors provide guidance on minimum batch sizes needed (to make CLT works) in practice, based on their empirical observations? 
Although one always wants large batch sizes, due to limited computational resources one often cannot afford very large batch sizes.\n- (2) I understand why you set $\tilde{L}\_{t-1}^{(0)}$ as $R\_l$ for the next iteration, given that the loss values are similar. However, while $L\_{t-1}$ might be close to $L\_{t}$, I worry that $\tilde{L}\_{t-1}$ could differ significantly from $\tilde{L}_{t}$ because of the clipping and noising, which might not give bias $\approx 0$. Some discussion or empirical results on this would be valuable.\n- (3) What was the rationale for choosing $\tilde{L}\_{t-1}^{(0)}$ specifically? Did the authors experiment with other options like $\tilde{L}\_{t-1}^{(+1)}$ or $\tilde{L}\_{t-1}^{(-1)}$, and if so, what were the results?\n\n2. In the main algorithm, I assumed $\eta_i \in \\{-1, 0, +1\\}$ represents the series of potential learning rates. Is there a specific reason for this choice? I understand the need for at least two $\eta_i$'s, but $\{-1, +1\}$ seems more intuitive to me...? Could the authors explain the rationale behind including 0 in the set of potential learning rates? Are there specific benefits to this choice? Also, I’m unclear about how to fit eqn. (6). In Section 4.4, the authors mention that solving this is quite efficient, with the cost \"mainly arising from additional forward passes.\" Could the authors provide more details on the practical implementation of solving equation (6): specifically, what optimization method was used, and how much computation was typically required to find a solution?\n\n3. Could the authors provide insights into why D-adaptation and Prodigy struggle in low-$\epsilon$ regimes for full finetuning, as seen in the first table of Table 2 and Table 3? Are there specific aspects of these methods that make them less suitable for differentially private optimization? Also, for clarity, could the authors specify the value of $K$ used for the HyFreeDP results in Tables 2 and 3? I assumed $K=10$ throughout these experiments, but if it varies, a note explaining the choice for each experiment would be helpful.\n\n4. I noticed in Table 2 that NonDP-GS w/ LS outperforms HyFreeDP, especially on CIFAR-10, and in Table 3, NonDP-GS and HyFreeDP show similar performance. Do the authors have any intuition about why? I’m particularly curious why NonDP-GS w/ LS performs so well on the CIFAR-10 dataset; is it because the task is too simple? If I understand correctly, NonDP-GS does not account for the privacy loss from hyperparameter tuning, so the $\epsilon$ values for NonDP-GS might be underestimated. It would be great to include results for NonDP-GS that account for the privacy cost of tuning. I imagine that HyFreeDP would then strictly outperform it...?\n\n5. It seems to me that this method works for per-batch clipping (since it also ensures DP [1]) as well, except that eqn (7) needs to be modified. It would be particularly useful for differentially privately training models with non-decomposable losses [1, 2].\n\n[1] Huang, Alyssa, Peihan Liu, Ryumei Nakada, Linjun Zhang, and Wanrong Zhang. \"Safeguarding data in multimodal AI: A differentially private approach to CLIP training.\" arXiv preprint arXiv:2306.08173 (2023).\n[2] Kong, William, Andrés Muñoz Medina, and Mónica Ribero. \"DP-SGD for non-decomposable objective functions.\" arXiv preprint arXiv:2310.03104 (2023).
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This is a well-written paper that effectively present its methods. The motivation is clear, the connections to previous work are discussed, and the experimental results are comprehensive and convincing. The method is simple yet effective in terms of efficiency and utility. The theoretical results are presented clearly, avoiding unnecessary complications, and the experiments are solid." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for differentially private training that eliminates the need for hyperparameter tuning, addressing a core challenge in DP deep learning. The authors provide clear discussions on the method’s efficiency, privacy guarantees, and utility. Both theoretical and empirical analyses are well-founded and straightforward." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See below." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a hyperparameter-free differential privacy training method that automatically adjusts the learning rate, reducing the need for extra tuning efforts in a privatized, efficient, and scalable manner for language and vision tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards hyperparameter-free optimization with differential privacy},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2kGKsyhtvh},\nnote={under review}\n}" }, "abstract": { "value": "Differential privacy (DP) is a privacy-preserving paradigm that protects the training data when training deep learning models. Critically, the performance of models is determined by the training hyperparameters, especially those of the learning rate schedule, thus requiring fine-grained hyperparameter tuning on the data. In practice, it is common to tune the learning rate hyperparameters through the grid search that (1) is computationally expensive as multiple runs are needed, and (2) increases the risk of data leakage as the selection of hyperparameters is data-dependent. In this work, we adapt the automatic learning rate schedule to DP optimization for any models and optimizers, so as to significantly mitigate or even eliminate the cost of hyperparameter tuning when applied together with automatic per-sample gradient clipping. Our hyperparamter-free DP optimization is almost as computationally efficient as the standard non-DP optimization, and achieves state-of-the-art DP performance on various language and vision tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Differential privacy", "optimization", "hyper-parameter tuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a78eed81cef4add5960facc7c545357e395ba5ea.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards hyperparameter-free optimization with differential privacy" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2kfpkTD5ZE
Multi-Modal Foundation Models Induce Interpretable Molecular Graph Languages
main
Active
multimodal foundation models;molecular design;interpretability
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;5;6
3;3;4;3
2;2;2;3
1;2;2;2
1;1;2;3
3.75
3.25
2.25
1.75
1.75
0.375823
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the \"Weaknesses\" section above for specific questions." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(S1): The paper is highly novel, exploring a quite unusual research direction. The writing is relatively clear and easy to follow. \n\n(S2): Apart from providing the main experiments, the authors also ablate their method quite thoroughly, replacing all the FM-based components with reasonable heuristics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to induce a DSL for building molecules from a given subdomain by casting the DSL construction as a sequence of steps and using a large multimodal pretrained model to make those intermediate choices. The authors then show promising results on a few relevant molecule classes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(W1): I am not sure if the main experiments in this work are representative of real-world use. Is being able to simply generate/sample molecules from a given subdomain useful in itself, or would it only be useful if paired with molecular optimization? \n\n(W2): It's not clear to me how the VAE baselines are set up. Are these models pretrained and then fine-tuned on the (small) dataset in question, or trained on the latter directly? Would it make sense to instead use a frozen pretrained VAE and steer it to sample around a given subdomain by inferring the right region of latent space to sample from? Alternatively, for motif-based models such as HierVAE, one could also constrain the set of motifs to those that appear in the given dataset describing the domain. \n\n=== Other comments === \n\nIn the line of VAE-based models there's also MoLeR (from \"Learning to Extend Molecular Scaffolds with Structural Motifs\"), which is a more modern extension of JT-VAE/HierVAE, often shown to perform better than the latter. \n\n \n\n=== Nitpicks === \n\nBelow I list nitpicks (e.g. typos, grammar errors), which did not have a significant impact on my review score, but it would be good to fix those to improve the paper further. \n\n- Top of page 3: \"notations like SMILES or SELFIES are mainly for representation purposes and can lead to issues (…). This may hinder LLMs’ understanding as they lack sufficient pre-training on these notations compared to SMILES\" is confusing \n\n- Lines 189-191: it's not clear how \"u, v share an atom\" should be interpreted given that context suggests u and v are atoms/nodes? \n\n- Line 403: \"We first observe in Tables 1 and that\" - something is missing \n\n- Line 407: the authors refer to \"dimensions\" without explanation of what this means (I assume each dimension is one of the datasets?) 
\n\n- Line 426: \"surprising considering.\" - something is missing" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the *Weaknesses* section for my main concerns.\n\nFor now, I’m leaning toward borderline reject, but I’ll be glad to raise the score when all the questions are fully addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Overall, the paper was easy to follow. The writing and the concept figure were clear.\n- An ablation study was conducted for the MMFM module." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Through this paper, the authors propose Foundation Molecular Grammar (FMG), a method that constructs domain-specific languages (DSLs) in a data-efficient manner using multi-modal foundation models (MMFMs). Specifically, FMG eases the MMFM’s task by casting the DSL construction into the problem of constructing a tree decomposition for the molecular graph." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I will combine the *Weaknesses* section and the *Questions* section. My concerns are as follows:\n- Some abbreviations are used without explanation of the full term. For example, the full terms for DSL, FM, and MMFM should be provided in the Introduction. The full term for the proposed method, FMG, also appears only in the Abstract and not in the main text.\n- The main weakness of this paper is that the experiments are not extensive and robust. Why were only grammar-based and VAE methods selected as baselines out of the vast range of molecular generative methods? Moreover, only small and medium datasets were used in the experiments. It would be great to provide results using more popular and larger datasets such as ZINC250k or MOSES for a broader comparison with previous methods.\n- Interpretability is a major advantage of the proposed method, but this advantage is not properly explained and emphasized in the experiment section. I strongly recommend devoting a few paragraphs to the interpretability of FMG with a case study.\n- The authors did not provide the codebase to reproduce the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Can you provide more details or examples of the generated molecules and DSL? This would help readers better understand the practical outcomes of your method.\n\n2. The introduction lacks citations for many of the claims made. Could you provide evidence or references to support these statements, particularly regarding the challenges and current state of molecular generation?\n\n3. Regarding your claims about SMILES and SELFIES in Section 2.3, could you address the fact that SELFIES was designed specifically for molecular generation and ensures valid molecules? How does this impact your argument?\n\n4. You mention that alternatives to FMs for molecular generation require extensive training resources. Can you provide evidence or comparisons to support this claim, particularly in relation to your method's computational requirements?\n\n5. Could you clarify the notation used in Tables 1 and 2, particularly the meaning of numbers in parentheses (e.g., Isocyanates (11))? What do these represent?\n\n6. In the results section, can you provide citations for each of the methods listed and clarify what each column in the tables represents?\n\n7. Your analysis refers to dimensions \"2)\" and \"3)\" without clear explanation. Could you elaborate on what these refer to and how they relate to the metrics presented?\n\n8. There seems to be an incomplete sentence in your analysis: \"FMG appears to do exceptionally well for PTC (halides) but poor for HOPV (thiophenes), which is surprising considering.\" Could you complete this thought?\n\n9. Have you considered comparing your method against state-of-the-art language models trained on SMILES for molecular generation? How does your approach compare in terms of efficiency and effectiveness?\n\n10. Can you discuss how your work relates to recent research on improving sample efficiency in molecular generation?\n\n11. Could you elaborate on the details of the method? e.g. what temperature was used for gpt-4o, what image parameters were used (e.g. image resolution, size, etc). Does any of these variables have any influence on the results?\n\n12. For Table 1 and 2, it's not clear how many molecules were generated. Could you please specify this?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work is certainly original in their use of MMFMs for describing molecular substructures and then using them again for proposing how to step-by-step build molecules from those motifs. The idea here is that the MMFM can guide and rationalize the generation of molecules in a given subfield.\n\nThe authors make a good point that LLMs lack abilities to understand chemical objects such as reactions and molecules, especially when these are given in SMILES format which is the most common thing to use, as graphs cannot be directly fed into LLMs. However images depicting the molecules and other things are a good idea to elicit correct chemical analyses from MMFMs, and it seems to work well to describe motifs, molecules, and perform other tasks such as suggesting combinations of motifs." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the potential of multi-modal foundation models (MMFMs) to craft Domain Specific Languages (DSLs) for chemistry. The key argument is that DSLs are very useful, and it's a good idea to build DSLs on specific domains as they facilitate rules for better explaining decisions from models, in this case the decoding process they follow allows them to generate molecules while also providing explainations. This is useful because domain-experts typically trust more something they can rationalize.\nThe authors finally show the performance of their method on some molecular generation benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Writing\n\n- It is not very clear what the goal of the paper is. Is it molecular generation, or DSL generation? In either case, very little insight is given into how the generated molecules look like, or how the generated DSL looks like. This is important provided that the paper is so strongly focused on applications.\n\n### Potentially false or misleading claims, lack of evidence/citations.\n- In general the whole introduction section misses a lot of citations. Most of the claims made there are not based on evidence, excepting 3 citations on popular LLM papers, and 1 (Makatura, 2023) that works on LLMs for aid in design. \n- Section 2.3, where the role of FMs for molecular generation is discussed. The authors make several claims that are either false or misleading:\n - \"SMILES or SELFIES are mainly for representation purposes and can lead to issues in the context of generation\". The SELFIES system was specifically designed for molecular generation, one of the advantages being that every SELFIES string represent a valid molecule, tackling any concerns regarding validity [1].\n - The authors state that the alternative to FMs for molecular generation are \"GNNs or million-parameter language models for text\" which \"require extensive training resources\". No evidence or citation is provided for this, and furthermore the current work presents no analysis of the computational resources used by the presented method.\n - The state of the art for molecular generation are indeed language models trained on SMILES [2-4]. Regarding the computational efficiency of these methods, there's a lot of active research focusing on improving the sample efficiency of these methods [5], however none of these works has been considered when making the claims above, nor does the work compare against them in any way.\n\n### Results\n- It is not clear in the results section where each result is coming from, as no citation is linked to each of the methods listed.\n- The notation is very unclear in Table 1 and 2. In particular, the notation Isocyanates (11), does it mean that the dataset of Isocyanates contains 11 samples? This is not clearly stated. Are the results aggregated from the dataset containing Isocyanates, Acrylates and chain extenders? why is this dataset designed like that? \n- It's very unclear what each column represents in these tables. The caption should at least specify this.\n- The analysis is not clear. Example \"...methods do better on 3), but struggle across dimensions 2) and 3).\", what is meant by \"2)\" and \"3)\"? is it refering to Novelty and Diversity? 
This is not clear and is never stated.\n- \"However, FMG still leaves some to be desired across 3).\" This sentence is not clear.\n- \"FMG appears to do exceptionally well for PTC (halides) but poor for HOPV (thiophenes), which is surprising considering. As we...\" This sentence appears incomplete: \"which is surprising considering...?\"\n\n\n### References\n[1] Krenn, M., Häse, F., Nigam, A., Friederich, P., & Aspuru-Guzik, A. (2019). SELFIES: a robust representation of semantically constrained graphs with an example application in chemistry. arXiv preprint arXiv:1905.13741, 1(3).\n[2] Blaschke, T., Arús-Pous, J., Chen, H., Margreitter, C., Tyrchan, C., Engkvist, O., ... & Patronov, A. (2020). REINVENT 2.0: an AI tool for de novo drug design. Journal of chemical information and modeling, 60(12), 5918-5922.\n[3] Öztürk, Hakime et al. “Exploring Chemical Space using Natural Language Processing Methodologies for Drug Discovery.” Drug discovery today (2020)\n[4] Özçelik, R., de Ruiter, S., Criscuolo, E. et al. Chemical language modeling with structured state space sequence models. Nat Commun 15, 6176 (2024). https://doi.org/10.1038/s41467-024-50469-9\n[5] Guo, J., & Schwaller, P. (2023). Augmented memory: Capitalizing on experience replay to accelerate de novo molecular design. arXiv preprint arXiv:2305.16160." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "In terms of weaknesses, I find it challenging to fully understand or verify the details of the proposed method. The current lack of clarity and the absence of key methodological details make it difficult to assess the approach’s validity and potential for replication. I strongly believe that the paper requires substantial revision to address these gaps. Adding detailed explanations and structural improvements would better support the work’s contributions, as it currently does not seem ready for publication." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Utilizing an MLLM as a decision-maker in the molecular tree composition process is a strong approach, and using rendered molecular images as input is a clever choice.\n\n2. The experimental results suggest the method performs as well as domain-expert annotations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work employs a multimodal large language model (MLLM) as a decision-maker in the molecular graph language learning process. In this procedure, each molecule is rendered as an image for MLLM input, and the model outputs decisions and descriptions based on specific prompts. The resulting learned molecular grammar is then applied to generate new molecules within certain classes, demonstrating strong performance."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, I think the use of MLLM as a decision-maker in the graph grammar learning, or “tree decomposition graph construction” process, is promising. However, the paper’s presentation and writing lack clarity, making it difficult to follow and understand. Additionally, many critical experimental details are missing, which limits the reproducibility and applicability of the method.\n\n1. Lack of Definitions: In the abstract, terms like MMFMs and DSLs are introduced without explanation. I suspect that these abbreviations are under-defined. In the methods section, it would help if the authors included explanations or examples of these terms.\n\n2. Lack of Structure: This method appears aimed at addressing a molecular domain-specific language learning task. However, the introduction section offers no information about molecular language, which surprisingly only appears in the Related Work section. This organization feels unusual and somewhat illogical.\n\n3. Lack of Model and Experimental Details: Both the methods and experiments sections lack fundamental details. For example, which MMFM does this approach employ? What prompts are specifically used? What is the dataset description and training cost? How are the baselines evaluated? I am particularly curious about the training and inference procedures, as the method seems to rely on MLLMs to decide the tree decomposition construction of clique graphs, yet it’s unclear how this process is applied to generate new molecules. Was fine-tuning involved, or was it entirely prompt-based?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multimodal,\ntitle={Multi-Modal Foundation Models Induce Interpretable Molecular Graph Languages},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2kfpkTD5ZE},\nnote={under review}\n}" }, "abstract": { "value": "Recently, domain-specific languages (DSLs) for molecular generation have shown advantages in data-efficiency and interpretability. However, constructing such a DSL requires human expertise or significant computational costs. Multi-modal foundation models (MMFMs) have shown remarkable in-context abilities for tasks across vision and text domains, but not graphs. We explore an unconventional solution: we render the molecule as an image, describe it using text, and cast the DSL construction into an equivalent problem of constructing a tree decomposition for the molecular graph. The MMFM performs a chain of discrete decisions to replace traditional heuristics used within the execution of the decomposition, enabling the smooth integration of its prior knowledge without overstepping the limits of the soundness of the algorithm. Furthermore, we collect MMFM’s reasoning for each decision into a design story, have non-expert agents evaluate stories for correctness and persuasiveness, and close the feedback loop to improve the DSL. Our method, Foundation Molecular Grammar (FMG), demonstrates significant advantages in synthesizability, diversity, and data-efficiency on molecule generation benchmarks. Moreover, its compelling chemical interpretability offers built-in transparency over the molecular discovery workflow, paving the way for additional feedback and oversight." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multimodal foundation models", "molecular design", "interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b2a84387681c5079e8e343175625f9205203237b.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4326ddc419b79c58b24297e0909398555dca7df8.pdf" }, "title": { "value": "Multi-Modal Foundation Models Induce Interpretable Molecular Graph Languages" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2kje23LSOE
Moment Constrained Optimal Transport for Control Applications
main
Active
Optimal Transport;Mean Field Control;Signal Tracking
optimization
3;3;3;3;3;6
3;2;2;3;3;3
1;2;2;2;2;3
2;2;2;2;2;3
1;1;1;1;2;3
3.5
2.666667
2
2.166667
1.5
0.316228
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Some comparison against existing EV charging algorithms (there are a lot) would be useful." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problem being considered is an interesting one. \n- The idea of controlling an distribution to look like another (more optimal) distribution is certainly useful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers using optimal transport (OT) in mean field control, with constraints on the moments of the distributions. An algorithm (Sinkhorn algorithm) is proposed and an example of EV charging is considered." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It's important to note that I'm not that familiar with the field of optimal transport. But I felt the following are the weaknesses:\n- The abstract, intro conclusion seem to promise a lot more than what the math actually delivers? The algorithm relies on Gibbs kernels, which feels pretty standard. How broadly applicable is this? \n- The EV problem presented is somewhat strange. The paper seems to say that the controllable variables are the EV arrival time and state-of-charge? But these are typically the main source of randomness in EV problems. The controllable knobs are typically the charging profiles. \n- How many EVs does there need to be for a mean field approximation to be valid? At any single charging station, there won't be that many EVs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see my comments about the weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The EV Charging problem has received much attention recently. This work aims to optimize the consumption while satisfying the grid constraints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the application of optimal transport to mean-field control. 
The main contribution is a reformulation of mean-field control, Moment Constrained Optimal Transport for Control (MCOT-C), that aims to coordinate the agents and enforce constraints. The authors propose a variant of the Sinkhorn algorithm and apply the proposed algorithm to an EV charging application." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The theoretical part of this paper is hard for me to follow. The introduction starts with the mathematical problem settings without sufficient discussion about the background, significant challenges, and the motivation for using optimal transport in mean-field control. Besides, the theoretical problem setting, assumptions, and propositions are difficult to interpret. I suggest the authors add more discussion of the connections between the general framework and a specific example (e.g., the EV charging problem) that is easier to understand. Discussing the intuition/significance after stating each proposition or lemma is also helpful.\n\nFor the experiment part, I encourage the authors to compare the problem setting and the performance with related works on EV charging (e.g., [1]). Such comparisons can help the readers understand the advantages/limitations of the proposed approach.\n\n[1] B. Alinia, M. H. Hajiesmaili, Z. J. Lee, N. Crespi, and E. Mallada, \"Online EV Scheduling Algorithms for Adaptive Charging Networks with Global Peak Constraints,\" in IEEE Transactions on Sustainable Computing, vol. 7, no. 3, pp. 537-548, 1 July-Sept. 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- What is $l$ in equation (7)?\n- Could you state the core observations and messages that you want to convey through the experiment sections?\n- Could the authors reorganize the contributions they want to claim in an itemized format? It would be helpful if the reorganized contributions included a clear explanation of the new approaches, the challenges faced, and the advantages compared to previous approaches. Additionally, if the authors feel there are any points that the reviewer may have overlooked, highlighting those would be welcome." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The approach of leveraging computational techniques from optimal transport for control problems, and the observations obtained from experiments applying the approach, are interesting. They provide the theoretical background and derivation of such an approach. The approach focuses on a finite set of moments, so it could be more tractable in practice."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces moment constrained optimal transport for control (MCOT-C), which leverages computational techniques from optimal control theory for control problems. They provide an algorithm obtained by modifying the Sinkhorn algorithm by replacing the update on the second marginal with gradient descent on the dual. Then, they provide how their proposed approaches apply to mean field control applications, further providing an online version of MCOT-C." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Writing:**\nThe reviewer thinks the writing of this paper needs to be improved. The reviewer was confused by the abstract and couldn't understand what contributions were made in this paper at first. The authors barely use phrases like 'this paper' or 'we,' so the actions taken in the paper were not distinguished clearly. It seems this issue occurs throughout the paper as well. The reviewer feels that the authors didn't clearly articulate the prior approaches, what they did new, and what the advantages are. This makes it challenging to understand the contributions they are claiming.\n\n**Contribution:**\nTo the best of the reviewer's understanding, the contribution of this paper is that they are establishing the theoretical background to leverage computational techniques from optimal control theory to the mean field control problem, as presented in sections 2.1 and 2.2. In Proposition 2, they provide the calculation of derivatives for their problem and introduce a gradient descent-based algorithm in section 2.3. Then they directly jump to the experiments, and sections 3 and 4 consist of explanations of their experiments. However, the reviewer believes they should have provided more discussion about the method before proceeding to the experiments, for example, the motivation behind the specific design of the algorithm, or theoretical analysis, or some justifications for why they expect it would work well.\n\nAdditionally, the reviewer couldn't grasp what the authors were trying to claim with the experiments. The reviewer believes the authors should have provided guidance on interpreting their experimental results and offered clear conclusions or messages derived from the experiments. However, most of the content in the experiment section seems to focus on the details of the experiments.\n\n**Minor issues:**\n- Typo in line 072; a space between \"S.It\" is required.\n- Typo in line 077; \"litterature\" should be corrected.\n- Typo in line 419; \"(i)\" should be changed to \"(ii).\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Why is the complexity in section 3.1 $N_t^3 \\times N_b$, when the state consists of only two variables with time, the arrival time and the plugging time?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors propose an interesting and relevant control application.\nThey bridge the fields of optimal transport and mean field control by modifying the Sinkhorn algorithm, where the update on the second marginal is replaced by gradient descent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a solution for an application concerning the charging of a fleet of electric vehicles, where a central planner decides on the plugging time of the vehicles. Their proposed method applies a modified version of the Sinkhorn algorithm, typically used in optimal transport problems, to the considered control problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of the paper is its structure.\nInstead of introducing the considered problem in the introduction, the authors start with their definitions of optimal transport and mean field control. The considered problem is formulated later. This makes it hard for the reader to follow.\nWhen presenting a paper which mainly focuses on one application, it would be better to first clearly describe the application and then introduce the math and methods needed for the solution.\n\nAnother major issue is the lack of consistency in the notation. Some examples:\nIn Eq. 7 there is the variable $l$ which has not been introduced before.\nIn Eq. 16 there is $h$ on the left hand side and $f$ on the right hand side. Are these equivalent?\nIn section 3.1 it say the gradient is calculated on $\\mathcal X \\times \\mathcal W$. Should this be $\\mathcal S \\times \\mathcal W$?\n\nAdditionally there are many typos, missing punctuation and missing comma placement, further reducing the readability of the paper.\n\nFinally the experiments do not compare to a baseline, apart from a naive decision rule, making it hard to evaluate the efficacy of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am not sure I have specific questions; perhaps one is \"is the control specification in your EV charging case study truly realistic?\". My issues are not necessarily fixable by a small change in the paper." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- the paper seeks to connect two important areas of applied mathematics, and does it in an elegant fashion that is amenable to an analytical solution\n- the paper is largely well-written and I suspect that it would be considered fairly easily readable by experts in the field\n- the rigorous approach of the paper is refreshing and exposes the mathematical meat of the problem" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This is a half-theoretical, half-application paper which initially seeks to propose and solve the moment-constrained optimal transport for control problem, a variation of the constrained optimal transport problem for mean field control. After solving its first objective, the paper seeks to illustrate, and partially adapt, the developed results on an example of charging a large fleet of electric vehicles." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In short, I like this paper a lot, but I am just not convinced that ICLR is the right venue for it. I urge the authors to consider once again whether the ICLR audience is what this paper is shooting for (without any offense at all to either the paper or the ICLR audience). For instance, the L in ICLR stands for \"learning\". Yet, any connection to learning -- if there is any -- remains unexplained. My issues largely stem from this perceived mismatch:\n\n- relegating the proofs, including the central results (Proposition 1/2), to the appendix probably does not make the paper more readable to the non-experts (who will find notation burdensome to start with), and only signals (possibly incorrectly) that the authors do not see these results as their primary contribution. In that case, I am not sure what is the primary contribution\n\n- there seems to be some dissonance between the claimed contributions (which speak about control of an ensemble of agents), the actual technical contributions (which can indeed serve this goal, but the connection is not really described in detail), and the application (which is farfetched at best: why should the power consumption exactly track a reference trajectory?)\n\n- the \"second\" technical part of the paper, coming within the application section, applies largely domain-agnostic mathematical theory to a problem that is tailored to the application (which is, as I already mentioned, farfetched). My suggestion would really be to split this paper into two: the first one dealing with the general problem of MCOT-C (and its online version in as much generality as possible), with perhaps only a small academic example if there is no room for more, and the second one applied, with a realistic case study and a detailed description of the algorithm implementation and possible minor modifications\n\n- partly because of decoupling of the proof (and a weird ordering of the proofs in the Appendix), it is not clear to me how challenging the main result actually is: the duality seems rather straightforward. If this is not the case, that should be emphasized. 
If it is, perhaps it would be good to try to develop online results on MCOT-C in more generality" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see the \"Weaknesses\" section for my questions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "-\tThe paper tackles the important, contemporary problem of charging EVs and uses a dataset to do so.\n-\tThe combination of mean field control and optimal transport to optimize EV charging appears to be interesting.\n-\tThe paper contains theoretical results to complement empirical findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper combines the fields of mean field control and optimal transport to solve multi-agent control problems. When doing so, the authors constrain some of the marginal distributions to specific moment classes and adapt the Sinkhorn algorithm to their setup. The theoretical and algorithmic considerations are complemented by an extensive example of EV charging in the Netherlands, based on a real dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tIntroduction: In my opinion, the introduction should be less technical in the sense that there shouldn’t be several extensive mathematical expressions. In this way, especially non-expert readers can get a first impression of the paper’s contributions without being confronted with mathematical details.\n-\tLine 74: EVs are mentioned for the first time here (excluding the abstract). In my opinion the authors should lead with a motivating example and then move on to the technical tools like MFC. In its current form, the introduction reads like a random collection of mathematical concepts. The authors should put more emphasis on their goals and high-level ideas.\n-\tLine 84-90: The contributions are formulated far too vaguely, for example, ‘‘Coordination of an ensemble of agents to achieve a desired goal’’ basically describes any cooperative multi-agent problem. Similarly, a discussion and comparison to the existing literature is missing. What sets the contributions of this paper apart from the existing literature?\n-\tAssumptions (A1) to (A3): The assumptions are neither explained nor discussed in terms of how realistic or restrictive they are.\n-\tProposition 1, Proposition 2, Lemma 1: Like the assumptions, the theoretical results are just stated but not discussed or explained. This presentation style makes it very hard to follow the train of thought in this paper.\n-\tSection 2.2: It would be helpful for first-time readers to explain why the dual problem can be useful for solving these types of problems.
Just stating that it is “needed for the algorithm” (line 140) does not provide any intuitive insight.\n-\tSection 2.3: In this section it is hard for me to understand the algorithmic contributions of the paper. If the contribution, compared to the existing Sinkhorn algorithm, is just the update of $\\zeta^k$, the authors should explain when and why this update makes an important difference.\n-\tSection 3: I am not very familiar with the EV charging literature, but I wonder whether there are existing papers that focus on similar use cases. Since there is not a single reference in Section 3, it seems like this model is completely new and has no connections to existing work. Is this really the case?\n-\tSection 4 (like the previous concern): Aren’t there any existing methods to compare against? What exactly are the advantages of the proposed approach?\n\n\nMinor Comments:\n\n-\tLine 35: Is it supposed to be “… common state space $\\mathcal{X}$ …’’? How is this state space defined? Are there any restrictions on $\\mathcal{X}$ or is it completely arbitrary? (Line 101 seems to contain the precise definition)\n-\tLine 36: the extensive mathematical definitions should appear later in the paper, but not in the first paragraph of the introduction. That aside, I think that $\\mu_1$ and $\\mu_2$ are not properly defined here.\n-\tLine 58: What values can the variables $S_k$ and $W_k$ take?\n-\tLine 73: space missing at “… $S$.It“\n-\tLines 79-82: Although the sentence “Inspired by … of optimal control solutions” is somewhat vague, it is nevertheless informative about the goals of this paper. I think it needs to appear earlier in the introduction.\n-\tLine 103: Are the marginals defined correctly? Shouldn’t the two $dx$ for the first marginal just be $x$? The same question applies to the second marginal and $dy$.\n-\tLine 104: there is one word “problem” too many.\n-\tLine 104: How are the probability kernels $T^\\lambda$ in this family defined?\n-\tLine 118: I think the notation is wrong here. It should be “… \\leq 0 for all 1 \\leq m \\leq M}” instead of “… \\leq 0: 1 \\leq m \\leq M}”\n-\tLines 119-120: While I do understand that an equality can be equivalently written as two inequalities, I am unsure how this applies to the previously defined moment class. Does that mean that for equalities, we would define a moment class with more than $M$ inequalities?\n-\tLine 383: Why is a quadratic penalization chosen? If it is standard in the literature, corresponding references should be added.\n-\tLine 397: Why is the infinity norm a good candidate?\n-\tLine 419: It should be “(ii)” instead of “(i)”, right?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a new formulation of Mean Field Control as a Moment Constrained Optimal Transport problem and illustrate it on a use case of EV charging, using real data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024moment,\ntitle={Moment Constrained Optimal Transport for Control Applications},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2kje23LSOE},\nnote={under review}\n}" }, "abstract": { "value": "This paper concerns the application of techniques from optimal transport (OT) to mean field control, in which the probability measures of interest in OT correspond to empirical distributions associated with a large collection of controlled agents.
The control objective of interest motivates a one-sided relaxation of OT, in which the first marginal is fixed and the second marginal is constrained to a “moment class”: a set of probability measures defined by generalized moment constraints. This relaxation is particularly interesting for control problems as it enables the coordination of agents without the need to know the desired distribution beforehand. The inclusion of an entropic regularizer is motivated both by computational considerations and by the need to impose hard constraints on agent behavior. A computational approach inspired by the Sinkhorn algorithm is proposed to solve this problem. This new approach to distributed control is illustrated with an application to charging a fleet of electric vehicles while satisfying grid constraints. An online version is proposed and applied in a case study on the ElaadNL dataset containing 10,000 EV charging transactions in the Netherlands. This empirical validation demonstrates the effectiveness of the proposed approach in optimizing flexibility while respecting grid constraints." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Optimal Transport", "Mean Field Control", "Signal Tracking" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/74c9a4cc1dc4629d467576a91c3e7c4f04c61df6.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6e10727c28fcf7a8f40c3efe3610ae0c7e13dfb2.zip" }, "title": { "value": "Moment Constrained Optimal Transport for Control Applications" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
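The abstract above describes a computational approach inspired by the Sinkhorn algorithm. For reference, here is a minimal NumPy sketch of the textbook entropic-OT Sinkhorn iteration built on the Gibbs kernel the reviewers mention; the paper's MCOT-C variant, which replaces the second-marginal update with gradient descent on the dual, is not reproduced here, and all names are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=1000):
    """Textbook Sinkhorn iterations for entropic OT between histograms a, b
    with cost matrix C. Returns the transport plan P with marginals ~a, ~b."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                  # enforce first marginal
        v = b / (K.T @ u)                # enforce second marginal
    return u[:, None] * K * v[None, :]

# toy usage: two histograms on 5 points of [0, 1], squared-distance cost
x = np.linspace(0.0, 1.0, 5)
a = np.full(5, 0.2)
b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(a, b, C)
print(P.sum(axis=1))   # ~ a
print(P.sum(axis=0))   # ~ b
```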
2l301qUdor
BOSE-NAS: Differentiable Neural Architecture Search with Bi-Level Optimization Stable Equilibrium
main
Active
Neural Architecture Search;Stable Equilibrium State;Equilibrium Influential
other topics in machine learning (i.e., none of the above)
3;5;5;6
4;4;5;3
2;3;3;4
2;2;2;3
2;2;3;3
4.75
4
3
2.25
2.5
-0.324443
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think overall this paper is good. Currently I give 6 since I have not checked the proof very carefully. I am willing to raise the score to 8 if the proof is proved to be right by other reviewers." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. I think this paper focuses on a very important problem. DARTS is a very crucial framework in NAS, but it has some well-known problems. It is very important to have some theoretical analysis on this framework. \n2. This author provides large-scale theoretical analysis, focusing on very important aspects, such as the stability of bi-level optimization, the loss trajectory, etc. I think the analysis is insightful. \n3. The proposed method can reduce the search costs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on Differentiable Architecture Search (DARTS). They conduct theoretical analysis over DARTS and propose a concept called Stable Equilibrium State. Upon it, they propose an effective framework called BOSE-NAS to identify the optimal state during the searching procedure. Experiment results show that the proposed method shows competitive results over state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I think the figures in this paper can be polished to be more clear (maybe in the camera ready version). \n2. The accuracy of the proposed method is just comparable with sota, but not superior to sota. I think it is not a serious problem, but I just list it as one weakness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Although the problems within the bi-level optimisation process of differentiable NAS have been widely studied for years, e.g., BONAS [1], the proposed EI metric and Stable Equilibrium State still bringing some new insights to the NAS research. But differentiable NAS are often sensitive to the hyper-parameters, I wonder how sensitive is the Stable Equilibrium State identification process to the choice of hyper-parameters such as the learning rate and batch size? Can authors provide some ablation studies? 
It would be helpful to understand how the proposed method handles changes in the hyper-parameters, as well as its robustness.\n\n2. The proposed method is only applied to differentiable NAS; however, the interest of NAS research has largely shifted to training-free NAS methods, as they offer more flexibility across different search algorithms and search spaces, as well as better performance and much less computational overhead compared with differentiable NAS, e.g., Zen-NAS [2] and SWAP-NAS [3]. Can the authors discuss potential adaptations that extend the concepts of the Stable Equilibrium State and the EI metric to non-differentiable NAS methods? \n\n\n\n[1] Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James T. Kwok, and Tong Zhang. Bridging the gap between sample-based and one-shot neural architecture search with bonas. NeurIPS 2020.\n\n[2] Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, and Rong Jin. Zen-nas: A zero-shot NAS for high-performance image recognition. ICCV 2021.\n\n[3] Yameng Peng, Andy Song, Haytham M. Fayek, Vic Ciesielski and Xiaojun Chang. SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS. ICLR 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The introduction of the Stable Equilibrium State is somewhat novel and interesting, and the theoretical analysis of architecture parameter dynamics provides a solid foundation for understanding the bi-level optimisation in differentiable NAS.\n\n2. The Equilibrium Influential (EI) metric for operation evaluation is an innovative approach and offers a more reliable measure of operation importance in the bi-level optimisation process in differentiable NAS. \n\n3. The proposed BOSE-NAS achieves competitive performance as well as less computational overhead on benchmark datasets like CIFAR-10 and CIFAR-100, compared with other differentiable NAS methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose BOSE-NAS, a novel differentiable neural architecture search method that addresses critical challenges in existing differentiable architecture search (DARTS). The core idea of BOSE-NAS centres on the concept of a ‘Stable Equilibrium State’, which offers insights into the validation loss trajectory across architectural spaces to stabilise the supernet’s bi-level optimisation process. The proposed method introduces a novel metric called Equilibrium Influential (EI) to evaluate the importance of operations during the architecture search phase. By choosing operations based on the EI metric at the Stable Equilibrium State, BOSE-NAS uses bi-level optimisation to find the optimal architecture operations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method heavily depends on the accurate identification of the Stable Equilibrium State; specifically, the EI metric evaluates each operation independently, which could overlook potential dependencies among network operations within the architecture. As a result, the proposed method may not always generalise well.\n\n2. The biggest concern with the proposed method, i.e., the EI metric and the concept of the Stable Equilibrium State, is its limited range of use.
It may not be easily applicable to non-differentiable NAS methods, e.g., evolutionary or pruning-based search algorithms." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper is easy to read.\n2.\tThe problem of DAS is clearly stated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Differentiable Architecture Search (DAS) often faces the issue that the magnitude of architecture parameters fails to reflect the true importance of operations. This paper addresses this problem by proposing BOSE-NAS, a DAS method guided by the Stable Equilibrium of architecture parameters (i.e., the point where the rate of change of the architecture parameters is minimal). The authors provide relevant experiments to support their method. However, the experimental section has several issues, such as limited improvement in performance and a lack of ablation studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper was previously submitted to NeurIPS 2024; compared with that submission, several important issues still need to be addressed.\n1. The ablation studies are not convincing. Specifically, in Figure 3, we can clearly see that the proposed method is sensitive to hyperparameters.\n2. There are still some typos/grammatical errors in the paper.\n3. The format of the references is still wrong.\n4. Exploring the reasons behind the success of these techniques and providing intuitive explanations would strengthen the overall scientific contribution of the work.\n5. I don't understand the theoretical analysis. Why use the \"Influence Function\"? What is the relationship between the \"Influence Function\" and your method? Why validate the \"reliability\" of the proposed metric, and what is the difference between stability and reliability? Please provide detailed motivation, a clear step-by-step proof process for validating the metric, and clarification on the relationship between the Influence Function and the method.\n6. On page 7, what does \"I(z, L)\" denote?\n7. The main limitation of this paper is that the proposed method lacks comparisons on larger datasets (e.g., COCO2017, VOC) and against more competitors (e.g., β-DARTS++, Λ-DARTS).\n8. 
Please provide evidence for your claim of generalizability.\n\n[1] β-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search\n[2] Λ-DARTS: Mitigating Performance Collapse by Harmonizing Operation Selection among Cells" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The experimental results clearly show the effectiveness and the efficiency of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new operation importance evaluation metric in network architecture search. The authors first introduce the concept of the stable equilibrium state, which reflects the stability of the bi-level optimization process in differentiable NAS. By analyzing the supernet training dynamics, a metric named equilibrium influential is proposed for fair differentiable NAS. The experimental results show that the proposed metric and search method can achieve competitive accuracy with significantly reduced search cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The writing can be improved. The abstract and the introduction are redundant: the abstract contains too much background content, and the introduction elaborates many details, especially experimental results, that do not need to be spelled out. Demonstrating the main results is enough to show the effectiveness of the method.\n\n- The technical soundness can be further verified. There are some strong assumptions without verification or explanation. For example, the assumptions used to transition from (6) to (7) should be verified: why do they have little effect on $\alpha$?\n\n- Some exact calculations could be moved to the Appendix.\n\n- The reason the proposed method has a lower search cost should be analyzed in the results, since this is an important benefit of the new metric.\n\n- The proposed method underperforms SOTA NAS methods such as IS-DARTS. More clarification is required in the performance analysis." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper clarifies the ambiguities surrounding the actual role and impact of architecture parameters in DARTS and, leveraging this insight, proposes a more effective and robust NAS method." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024bosenas,\ntitle={{BOSE}-{NAS}: Differentiable Neural Architecture Search with Bi-Level Optimization Stable Equilibrium},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2l301qUdor},\nnote={under review}\n}" }, "abstract": { "value": "Differentiable Architecture Search (DARTS) has gained prominence in the neural architecture search community for its efficiency and simplicity, achieved through optimizing architecture parameters via gradient descent. However, the magnitude of these architecture parameters frequently fails to accurately represent the true significance of the corresponding operations, adversely affecting the performance of the resultant architectures. While numerous studies have introduced alternative metrics to evaluate operation significance, the actual role and impact of architecture parameters remain inadequately explored. This lack of understanding creates critical ambiguity in the architecture search process. Resolving these ambiguities is essential for the effective utilization of architecture parameters, thereby facilitating the development of more effective differentiable NAS methodologies. In this work, we first conduct a rigorous theoretical analysis, revealing that the change rate of architecture parameters reflects the sensitivity of the supernet’s validation loss in the architecture space. Building on this foundation, we introduce the concept of the ‘Stable Equilibrium State’, which offers essential insights into the validation loss trajectory across architectural spaces and elucidates the stability of the supernet’s bi-level optimization process. We further investigate the supernet training dynamics to assess the influence of operations on the Stable Equilibrium State, leading to the proposal of a novel metric for evaluating operation importance, termed Equilibrium Influential ($E_\\mathcal{I}$). Integrating these elements, we introduce BOSE-NAS, an effective differentiable NAS method that utilizes the Stable Equilibrium State to identify the optimal state during the search process, subsequently deriving the final architecture based on the $E_\\mathcal{I}$ metric. Extensive experiments conducted across diverse datasets and search spaces demonstrate that BOSE-NAS achieves competitive test accuracy compared to state-of-the-art methods while significantly reducing search costs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Neural Architecture Search", "Stable Equilibrium State", "Equilibrium Influential" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5b882efc6f53263f72ffd7a98245c0507bbad0a9.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "BOSE-NAS: Differentiable Neural Architecture Search with Bi-Level Optimization Stable Equilibrium" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2m5XI3nM46
Improved Localized Machine Unlearning Through the Lens of Memorization
main
Active
Machine Unlearning;Memorization;Localized Unlearning
other topics in machine learning (i.e., none of the above)
1;3;5;6
3;3;3;3
1;2;3;3
1;2;2;3
1;3;3;3
3.75
3
2.25
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Please address the concerns in the weakness section." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- I liked the initial idea of investigating localized unlearning based on memorization. \n- The proposed method was partially successful on some forgetting benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work attempted to tackle the problem of localized unlearning by investigating it based on the memorization assumption and proposed DEL for some parameters with resetting and fine-tuning. The proposed method showed promising results on forgetting on a couple of benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method is based on a lot of assumptions without much justification, but with intuition. Thus, it is very hard to see if the proposed method is indeed ok in terms of unlearning (while preserving the rest!). \n- It is very hard to see the core contribution clearly due to poor writing. It was very hard to read and follow.\n- Experiments look quite limited in terms of benchmarks (datasets, compared methods). I am afraid that the localized unlearning approach may hurt the preservation of remaining parts, but it is unclear if it is true." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1: From the Appendix A.4's algorithmn, the localization strategy is mainly from the magnitude of each weighted gradient for each mini-batch. Is the localization mask determined by each mini-batch? Is the localization mask fixed for different networks? If the mask is not accurate, does it affecting the accuracy? How sensitive is DEL to different choices of localization strategy.\n\nQ2: Does the DEL method has any specific limitations when facing more complex or diverse data distributions?\n\nQ3: Can DEL method adapted to other network architectures? 
What are the differences if it is adapted to a customized network structure?\n\nQ4: Does the performance differ with different hyper-parameters, such as learning rate, batch size, etc.?\n\nQ5: In Table 7, the accuracy improves with a higher percentage of parameters. Will the accuracy keep improving at 40%/50%?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This method achieves state-of-the-art unlearning performance while requiring only a small modification to a subset of model parameters.\n\nThis method also minimizes unnecessary parameter changes while preserving model efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Deletion by Example Localization (DEL) method, which aims to enhance machine unlearning by localizing the influence of a targeted data subset in neural networks. Traditional unlearning methods remove the influence of certain data at the cost of degraded model performance or extensive retraining. In contrast, DEL takes a selective approach, identifying a small subset of parameters influenced by specific data points. The method can effectively remove the memory of a specified data subset while preserving model accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this method is the limited experimental evaluation on public datasets, covering only CIFAR-10 and SVHN, as well as the lack of results on larger models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I think the memorization property can vary with model scale. So, I am wondering whether this memorization assumption and the proposed algorithm hold for most models, since the evidence provided consists of empirical findings." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Localized unlearning is a new and meaningful research area, and the motivation to leverage memorization is reasonable and insightful.\n\n2. The experiments and findings are validated with various metrics and existing unlearning algorithms and show consistently good results.\n\n3. The paper is well formatted and organized, making it easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the localized unlearning algorithm Deletion by Example Localization, leveraging the memorization issue. The proposed algorithm first resets the parameters that are most critical according to the localization strategy and then finetunes them. The algorithm can be paired with various existing unlearning algorithms. 
The authors validate the method with experiments on different datasets and different metrics, showing that it achieves state-of-the-art performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are several mathematical definitions, such as Unlearning and Label Memorization. However, I did not find close connections or logical relations between them. Ideally, I would expect the authors to use these definitions to derive theorems closely tied to the proposed algorithm. For example, it is difficult to see, theoretically or empirically, whether the proposed algorithm can make the model's distribution the same as that of a model trained without that data.\n\n2. Following the above, I understand that in this area most justifications are empirical. So, I think it is better to use metrics that can support the definition (i.e., the same distribution as a retrained model)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "none" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Section 5.1, your paper presents several hypotheses. Could you provide a more detailed explanation of how your results support these hypotheses?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written, with a clear and concise logical flow. It begins by introducing localization as a preferred approach for model unlearning and then presents a cohesive and insightful perspective—that unlearning can be viewed as an extreme form of no memorization (lines 165-169)—which lends coherence and unity to their proposed method.\n2. The paper provides a comprehensive review of existing methods, thoroughly examining current approaches and establishing its own assumptions, such as the advantages of data-dependent over data-agnostic methods and the reasoning for utilizing gradient information. These insights serve as the foundation for their proposed method, DEL.\n3. The proposed method is both simple and effective, achieving state-of-the-art performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of machine unlearning in a localized context by introducing a novel approach based on the concept of memorization. Following a comparison of existing methods, the authors identify data-dependent and gradient-dependent techniques as particularly effective. They refine the current criticality-based localization strategy, resulting in a new unlearning algorithm, “Deletion by Example Localization” (DEL). DEL enables localized unlearning by resetting and fine-tuning parameters identified as essential based on the calculated criticality of the parameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
This paper extensively discusses related work and motivations, primarily focusing on comparisons between existing methods. The proposed approach appears to be a straightforward combination of existing techniques, which may limit its novelty.\n2. The results in Section 3 do not necessarily support the hypotheses in Section 5.1, as the observed improvements could be attributed to other factors. Thus, a more thorough theoretical explanation of the proposed method is needed.\n3. This paper focuses exclusively on classification models, but I believe that “unlearning” in LLMs (i.e., model or knowledge editing) is a more pressing concern. It remains uncertain whether the conclusions drawn from vision classifiers in this paper can be directly applied to LLMs.\n4. There are a few typos, although they don’t impact comprehension. For instance, in line 159, “$f(; \\theta)$” might be intended as “$f(x; \\theta)$.”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024improved,\ntitle={Improved Localized Machine Unlearning Through the Lens of Memorization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2m5XI3nM46},\nnote={under review}\n}" }, "abstract": { "value": "Machine unlearning refers to removing the influence of a specified subset of training data from a machine learning model, efficiently, after it has already been trained. This is important for key applications, including making the model more accurate by removing outdated, mislabeled, or poisoned data. In this work, we study localized unlearning, where the unlearning algorithm operates on a (small) identified subset of parameters. Drawing inspiration from the memorization literature, we propose an improved localization strategy that yields strong results when paired with existing unlearning algorithms. We also propose a new unlearning algorithm, Deletion by Example Localization (DEL), that resets the parameters deemed-to-be most critical according to our localization strategy, and then finetunes them. Our extensive experiments on different datasets, forget sets and metrics reveal that DEL sets a new state-of-the-art for unlearning metrics, against both localized and full-parameter methods, while modifying a small subset of parameters, and outperforms the state-of-the-art localized unlearning in terms of test accuracy too." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Machine Unlearning", "Memorization", "Localized Unlearning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/150921e5d33db3b86c10e698cedf4711f038cb96.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Improved Localized Machine Unlearning Through the Lens of Memorization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2mGFmAQWUI
ControlAgent: Automating Control System Design via Novel Integration of LLM Agents and Domain Expertise
main
Active
Automated Control System Design;LLM Agent
applications to computer vision, audio, language, and other modalities
3;5;6
3;3;4
1;2;2
1;2;3
2;2;4
4.666667
3.333333
1.666667
2
2.666667
0.755929
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How does ControlAgent handle model uncertainty? While you discuss robustness through phase margin, could you elaborate on whether the framework considers parametric uncertainties or unmodeled dynamics?\n- For higher-order systems, you mention manual design of 50 cases. Could you explain your methodology for ensuring these cases are representative and unbiased? What criteria guided your selection?\n- For the history and feedback module, how do you handle the context window limitations of LLMs? Could you provide more details about the memory management strategy?\n- Could you provide a more detailed analysis of failure cases, particularly for higher-order systems where performance was lower? Understanding these cases would help assess the framework's limitations." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The core strength of this paper lies in how it successfully addresses the fundamental performance-robustness trade-offs inherent in classical control theory. The framework intelligently uses loop-shaping and PID tuning methodologies, employing settling time and phase margin as key tuning parameters - a sophisticated approach that mirrors established control engineering practices. The iterative design process is noteworthy for its theoretical soundness. Rather than treating controller design as a single-shot optimization problem, ControlAgent mimics the systematic approach used by human experts, progressively refining controller parameters while managing the complex interplay between performance metrics. The empirical results validate this approach, showing success across various system types and complexity levels, with particularly impressive results in handling unstable and higher-order systems. The framework's ability to achieve 100% success rates for first-order and stable second-order systems, while maintaining high performance even for complex higher-order and unstable systems, demonstrates its robust theoretical foundation and practical effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ControlAgent, a framework that automates control system design by integrating large language model (LLM) agents with domain expertise. The framework uses multiple collaborative agents to emulate human iterative design processes, gradually tuning controller parameters to meet user-specified requirements for stability, performance, and robustness. 
ControlAgent consists of a central agent that analyzes tasks and distributes them to specialized agents, task-specific agents that handle detailed controller design for different system types, a Python computation agent that performs control calculations and evaluations, and a history and feedback module that enables iterative refinement of designs. The system addresses the inherent complexity of control design by breaking down the process into manageable steps and incorporating domain knowledge into the decision-making process. The authors also develop ControlEval, an evaluation benchmark comprising 500 control tasks across various system types including first-order, second-order, systems with delay, and higher-order systems, with different response modes and specific performance criteria. This benchmark serves as a standardized way to evaluate control design workflows." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The evaluation methodology raises several concerns. While ControlEval includes 500 control tasks, the paper doesn't clearly justify the distribution of these tasks or demonstrate their representativeness of real-world control problems. The generation process for higher-order systems is particularly problematic - the authors admit to manually designing these cases, which could introduce bias and may not reflect the true complexity of higher-order system control.\n- The comparison with baselines is somewhat limited. The paper primarily compares against relatively simple LLM-based approaches (zero-shot, few-shot) and a single traditional tool (PIDtune). Modern control design often employs more complex methods like robust control, model predictive control, or optimization-based approaches, which are notably absent from the comparison. The performance metrics are also relatively basic, focusing mainly on settling time and phase margin while overlooking other important characteristics like disturbance rejection and noise sensitivity.\n- The iterative design process lacks theoretical guarantees of convergence or optimality. The paper doesn't provide an analysis of when or why the iteration process might fail, nor does it establish bounds on the number of iterations needed for convergence. \n- The framework's heavy reliance on proprietary LLMs raises questions about reproducibility and practical deployment. The authors don't thoroughly explore how the system's performance might vary with different base LLMs or how it might degrade with smaller, more practical models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As mentioned above, I suggest the authors improve their academic writing skills and design specific application scenarios, such as robotics and transportation, to verify their framework.\n\nI recommend several papers, as shown below, in which authors can learn how to improve academic writing skills and organize corresponding ideas from them.\n\n1) Yang, Q., & Parasuraman, R. Bayesian strategy networks based soft actor-critic learning. ACM Transactions on Intelligent Systems and Technology (TIST).\n\n2) H. Hamann and H. Wo ̈rn, “A framework of space–time continuous models for algorithm design in swarm robotics,” Swarm Intelligence, vol. 2, no. 2-4, pp. 209–239, 2008." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper proposes a new paradigm that automates control system design via novel integration of LLM agents and control-oriented domain expertise to bridge the the complexity and specificity in control system design." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new paradigm that automates control system design via novel integration of LLM agents and control-oriented domain expertise. However, the writing style is confusing, making it hard to follow their ideas. I suggest the authors improve their academic writing skills by making the abstract more precise and brief, adding the approach section, and reorganizing the corresponding method section. Moreover, I do not know what scenarios the authors implemented or simulated for the experiments. There is no background information or introduction. Generally, this paper needs to improve largely." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper's writing style is confusing, making it hard to follow their ideas. I suggest the authors improve their academic writing skills by making the abstract more precise and brief, adding the approach section, and reorganizing the corresponding method section. Moreover, I do not know what scenarios the authors implemented or simulated for the experiments. There is no background information or introduction. Generally, this paper needs to improve largely." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How much sampling is done of LLM-generated designs? e.g. is the budget 10 designs?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper addresses the issue of designing controllers using LLMs, in particular with specific stability, phase margin, and settling times. \n\nThe overall system runs in a loop where a the designed controller is run and the system provides feedback based on a history of designs and how well they performed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes a composite LLM-based system for control tasks which attempts to design controllers, represented as Python code, for control problems with specific requirements, namely stability, phase margin, and settling time. \n\nWhile this paper is decently presented and seems to achieve decent results, I am uncertain about recommending it for ICLR. Primarily, the paper seems highly domain-specific and engineering-focused, rather than more general cutting-edge academic research. Still, it is a good engineering system. Secondly, I am uncertain about the evaluation. \n\nThe proposed method is essentially a domain-specific application of LLM-modulo, e.g. an interative prompt with a verifier and critiques [1].\n\n[1] Kambhampati, S., Valmeekam, K., Guan, L., Verma, M., Stechly, K., Bhambri, S., ... & Murthy, A. B. Position: LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks. In Forty-first International Conference on Machine Learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It seems guarantees would be desirable when working with control systems, and I assume the problem requirements are meant to be guarantees. However, I feel the paper would be made a lot stronger by discussing guarantees at length. \n\nThe evaluation methods seem like they could be improved, in particular I would like the authors to clarify about \"a system is considered successfully designed if at least one of the multiple independent trials results in a successful design\". It seems this would greatly skew the statistics, since failures are being filtered out. I also don't see reporting of how many samples are taken to achieve the reported success rates. \n\nGiven the unpredictable and error-prone nature of LLMs, I am skeptical that the overall system can work without a human in the loop or method for filtering correct answers. Also, it seems like intermediate mistakes in generation (e.g. a hallucinated constant) would collapse the entire system, so I would expect it to be rather fragile. To the extent that the proposed method works, I am curious what the authors attribute it to?\n\nWhile the method is interesting, it seems to be an incomplete solution to a highly domain-specific problem, so I'm unsure about the larger impact of the work, e.g. the paper doesn't give much insight into designing general LLM-based systems." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024controlagent,\ntitle={ControlAgent: Automating Control System Design via Novel Integration of {LLM} Agents and Domain Expertise},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2mGFmAQWUI},\nnote={under review}\n}" }, "abstract": { "value": "Control system design is a crucial aspect of modern engineering with far-reaching applications across diverse sectors, including aerospace, automotive systems, industrial processes, power grids, and robotics. Despite advances made by Large Language Models (LLMs) in various domains, their application in control system design remains limited due to the complexity and specificity of control theory. To bridge this gap, we introduce **ControlAgent**, a new paradigm that automates control system design via novel integration of LLM agents and control-oriented domain expertise. ControlAgent encodes expert control knowledge and emulates human iterative design processes by gradually tuning controller parameters to meet user-specified requirements for stability, performance (e.g. settling time), and robustness (e.g., phase margin). Specifically, ControlAgent integrates multiple collaborative LLM agents, including a central agent responsible for task distribution and task-specific agents dedicated to detailed controller design for various types of systems and requirements. In addition to LLM agents, ControlAgent employs a Python computation agent that performs complex control gain calculations and controller evaluations based on standard design information (e.g. crossover frequency, etc) provided by task-specified LLM agents. Combined with a history and feedback module, the task-specific LLM agents iteratively refine controller parameters based on real-time feedback from prior designs. Overall, ControlAgent mimics the design processes used by (human) practicing engineers, but removes all the human efforts and can be run in a fully automated way to give end-to-end solutions for control system design with user-specified requirements. To validate ControlAgent's effectiveness, we develop **ControlEval**, an evaluation dataset that comprises 500 control tasks with various specific design goals. Comparative evaluations between LLM-based and traditional human-involved toolbox-based baselines demonstrate that ControlAgent can effectively carry out control design tasks, marking a significant step towards fully automated control engineering solutions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Automated Control System Design", "LLM Agent" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2187e1c331af5f074bbada711de5db6d110db9b7.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ControlAgent: Automating Control System Design via Novel Integration of LLM Agents and Domain Expertise" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2mbDATzUOt
Do Large Language Models have Lateral Thinking in Puzzle-Solving Games?
main
Active
Large Language Models;Lateral Thinking;Puzzle-Solving Games
datasets and benchmarks
3;3;5;6
4;3;3;4
2;1;3;3
2;2;2;3
2;3;3;3
4.25
3.5
2.25
2.25
2.75
0.19245
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "- Possibly harmful content in the part of the dataset not validated by humans \n- 3 Volunteers reviewed around 194,100 examples (30% of the total 642,700.) That's a significant time investment on the part of volunteers without compensation" }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety", "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1: What was the human score distribution for the 30% of data that was validated on those 8 metrics? \n\nQ2: Could you elaborate the choice of threshold in validating the data? \n\nQ3: What percentage of those 30% data was invalidated due to significant harmful content by humans? A similar fraction of such harmful content could still be a part of the remaining 70% remaining data. \n\nQ4: Do the mentioned LLMs perform perfectly on those 647 original chinese puzzles? If not they could be used to test the generalizability of the puzzleverse framework. \n\nQ5: Is the 30% data from test split the samples that were validated by volunteers?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "S1: The authors propose an automated synthetic data generation approach for evaluating and inculcating lateral thinking in LLMs \nS2: The generated dataset is significantly larger than previous works" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper contributes a new dataset of Lateral Thinking Puzzles for training and evaluation of the lateral thinking abilities of LLMs in the Chinese language. They further introduce the Puzzleverse framework where LLMs are instruction fine-tuned and aligned with a reward model on 70% of the dataset. Training with Puzzleverse shows improved performance in the created dataset and other reasoning tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The GPT-4 model is used to create, and evaluate the quality, consistency and correctness of most of the data limiting the upper bound of the performance of any model trained on this data to the GPT-4 model. Previous work [1] shows that even the GPT-4 model performs poorly on lateral thinking limiting the potential of this dataset. \n\nW2: There is no human verification of whether the puzzles included in the dataset created using GPT-4 can actually be solved. There is no human performance on the test set reported. \n\nW4: During inference, there's a 70:30 split of the training set. Since a large amount of data is generated using an LLM there could be significant overlap between questions across the dataset. \n\nW5: In a setting like Lateral thinking, an LLM's performance might differ a lot if evaluated multiple times on the same question. There are no variance studies or standard errors across multiple trials reported. 
\n\nW5: Only 30% of the total data was validated for correctness by humans. Within this filtered data, “Puzzles scoring below 6 are discarded, resulting in a final average score of 6.65.” The justification for this threshold is unclear, as the questions should absolutely satisfy all those conditions for the puzzle to be a lateral thinking puzzle. Furthermore, the exact distribution of the scores is missing. \n\nW6: The models with and without PuzzleVerse are not evaluated on existing lateral thinking datasets like [1]. \n\n\n[1] LatEval: An Interactive LLMs Evaluation Benchmark with Incomplete Information from Lateral Thinking Puzzles" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How do you ensure the quality of the GPT-4-generated puzzles, given that these puzzles are quite challenging for GPT-4 itself? With in-context learning, is GPT-4 able to creatively create new puzzles? Is there any evaluation of their quality?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novel Lateral Thinking Puzzles Dataset: The paper introduces the largest lateral thinking puzzle dataset. Each puzzle includes a riddle, unconventional solutions, a sequence of yes-or-no questions, answers, and clues. The dataset is carefully constructed to capture the nuances of lateral thinking and is validated through both automated and manual review processes to ensure high quality and coherence.\n2. The PuzzleVerse framework combines supervised fine-tuning with reinforcement learning, utilizing a reward model that ranks questions based on relevance and coherence with the puzzle solution.\n3. Experiments demonstrate significant performance gains, with LLMs achieving an average improvement of 101.9% after PuzzleVerse training on lateral thinking tasks. These results are benchmarked against powerful LLMs like GPT-4, providing a robust comparison." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the lateral thinking abilities of LLMs in puzzle-solving scenarios, where solutions require creative, non-linear approaches. The authors introduce the “Lateral Thinking Puzzles” dataset, which includes unconventional riddles designed to test LLMs' lateral thinking. They propose a framework, PuzzleVerse, to improve LLMs' performance on these tasks through question generation and reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. An LLM judge or automatic metrics might be helpful rather than relying only on human evaluation; BLEU/ROUGE is not useful here. \n2. The data creation part is not quite convincing, since challenging puzzles are not an easy task to generate. Some evaluation or human quality check might be needed.\n3. The language is Chinese only." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper mentions that the dataset is designed with a focus on the Chinese language. However, the inclusion of GPT-4 in the benchmark raises a question regarding its suitability. Given that GPT-4 is known for its superior performance in English, it would be beneficial for the paper to discuss the rationale behind incorporating a model that excels in a different language context. This discussion could provide insights into how the model's strengths in English might influence the results within a Chinese-centric dataset or whether there are specific reasons for expecting GPT-4 to perform well despite the language discrepancy.\n2. When assessing the performance of various LLMs on the dataset, it is crucial to consider the impact of model size and complexity. The paper compares the performance of different LLMs but does not explicitly mention the number of parameters for each model. Model performance can be significantly influenced by the number of parameters, which affects their capacity for learning and generalization. It would greatly enhance the analysis if the paper could provide details on the parameter count for each model included in the comparison." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Lateral thinking promotes creative reasoning in LLMs, helping them move beyond straightforward logical solutions and explore unconventional answers, which could be valuable for complex problem-solving. \n2. The performance of the framework is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To test and enhance lateral thinking in LLMs, the paper introduces a large dataset called Lateral Thinking Puzzles (LTP), composed of riddles with unconventional solutions. It also proposes a framework, PuzzleVerse, which guides LLMs in incrementally questioning and deducing answers through yes-or-no questions, designed to stimulate creative problem-solving strategies. In experiments, LLMs trained with PuzzleVerse demonstrate significant improvements in solving puzzles creatively, thus providing a new perspective to reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper asserts that its approach significantly enhances the creativity of LLMs by extending the scope from text-based riddles to a broader category of puzzles. However, this claim might be overstated.\n2. The dataset and framework's aim is commendable in seeking to bolster LLM creativity through lateral thinking. However, the use of clues in the SFT and RL training processes seems to contradict this goal. 
By providing clues, the framework introduces implicit guidance that may limit the LLMs' ability to explore solutions outside of the predefined parameters." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For baseline evaluations the authors choose a zero-shot setting. I am curious why experiments with few-shot settings are not done. The dataset is novel, as puzzles like these are not commonly seen, but I would assume these puzzles follow some intrinsic patterns in how the solution is derived from the question. In other words, in zero-shot settings the model might not grasp what kinds of questions are good to ask, but this problem is instantly solved in few-shot settings (similar to how a human would quickly get better at \"HaiGuiTang\").\n2. (This question might be somewhat vague, and I'm not being critical, just curious to see what the authors think.) How do the authors justify the idea of language models even being ABLE to do lateral thinking? The training objective of LMs naturally leads to models selecting the most probable outcomes, so I would be surprised to see LLMs thinking outside the box to the extreme extent shown in these puzzles." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors nicely justify the importance of lateral thinking as a crucial ability for LLM reasoning. The paper is well-written and clear.\n2. The authors carefully curated a novel, large set of lateral thinking puzzles which can effectively measure lateral thinking abilities. They also propose a comprehensive set of creativity metrics for evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors construct the largest lateral thinking puzzle dataset by far, as well as a novel set of metrics to evaluate lateral thinking ability. They also propose the PuzzleVerse framework, which consists of SFT, RM, and RL stages. Extensive experiments are conducted to evaluate performance on the LTP dataset as well as on other similar tasks like story generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The dataset is only available in Chinese due to a loss of cultural context during translation. This limits the use of this dataset for more extensive comparison of LLM reasoning capability, as cultural context is crucial for solving puzzles in this dataset (for example, models trained on English data would not understand \"square dancing\"). I would suggest the authors develop a culture-neutral subset.\n2. The choice of evaluation datasets outside of the LTP dataset seems debatable. I would not really consider story understanding or reading comprehension tasks to involve lateral thinking. 
One immediate way to improve this is simply to evaluate the framework on previous LTP datasets." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Evaluation and Enhancement of Lateral Thinking in Puzzle-Solving Games of Large Language Models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024do,\ntitle={Do Large Language Models have Lateral Thinking in Puzzle-Solving Games?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2mbDATzUOt},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) show exceptional skills in a wide range of tasks, with their ability in lateral thinking standing out as a particularly intriguing area. Lateral thinking in LLMs allows them to understand deeper or suggested meanings from the context, which is essential for making sense of complex scenarios, especially in puzzle-solving games. To delve deeper into and improve the lateral thinking capabilities of LLMs in the realm of puzzle-solving, we introduce the ``Lateral Thinking Puzzles'' and construct the accompanying dataset.\nOur novel $\\mathcal{P}$uzzle$\\mathcal{V}$erse framework aims to enhance LLMs' lateral thinking in puzzle-solving games. Complementing this, we propose a creativity metric to ensure comprehensive evaluations. \nExperiments show that the selected LLMs, after being trained with $\\mathcal{P}$uzzle$\\mathcal{V}$erse, have an average improvement of 101.9\\% across all metrics compared to their performance before $\\mathcal{P}$uzzle$\\mathcal{V}$erse training. \nWe also validate the robustness of $\\mathcal{P}$uzzle$\\mathcal{V}$erse by showing that trained LLMs perform better on other reasoning tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Lateral Thinking", "Puzzle-Solving Games" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/eafa610166559d96f4c7a221b601dfaadaf3a345.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Do Large Language Models have Lateral Thinking in Puzzle-Solving Games?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2mg5FvBz0J
Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language Models
main
Active
Graph Neural Network;Large Language Model;Continuous Prompting;Sf
learning on graphs and other geometries & topologies
3;3;5;6
4;2;4;3
2;2;2;2
1;2;2;2
2;2;3;3
4.25
3.25
2
1.75
2.5
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Regarding Weaknesses 1 & 4: Since GNP is a model for solving Multi-Choice QA tasks, a direct comparison with our work is not appropriate. However, we utilized GNP’s core module, the Cross-Modal Pooling Layer, to conduct an indirect performance comparison.\n\nAdditionally, there seems to be a misunderstanding regarding the term \"efficiency.\" The efficiency we described refers to representing only the information relevant to the query in a distributed manner (not concerning time or space complexity).\n\nWe address the comparison of efficiency with other baselines in terms of time complexity with the same response given to reviewer M81S's question.\n\nLet n denote the number of nodes, t the number of prompt text tokens, g the number of GNN layers, and k the number of LGPTs. The time complexity of our Graph Encoder, which utilizes 3 GNNs, is therefore O(3g(n+k)). Meanwhile, the time complexity of G-Retriever and GraphToken is O(gn). Considering that k << n, the time complexity of our method and that of other baseline graph encoders remain the same at O(gn).\n\nThe time complexity required for LLM computation is proportional to the square of the prompt length due to the self-attention mechanism. In our model, t+k tokens are passed in the prompt, while t+1 tokens are passed in GraphToken and G-Retriever. We set k=8 so that k << t, ensuring that the time complexity of LLM computation in our model is identical to that in other baseline models at O(t^2).\n\n----------\n\nRegarding Weakness 2: There are two main approaches to combining LLMs and GNNs. One approach uses LLMs for solving Graph Centric Tasks, such as Node Classification or Link Prediction, while the other uses Graph Encoders for solving general NLP tasks, such as QA (https://arxiv.org/pdf/2312.02783). Our study focuses on the latter.\n\nAs you suggested, investigating whether our methodology could work in the context of Graph Centric Tasks would be a valuable research direction and a worthwhile topic for future work. One of our key contributions is the development of a Graph Encoder model that can adapt to changing queries. Since Graph Centric Tasks typically involve less diverse queries than NLP tasks, further exploration is necessary to assess its applicability in that context.\n\n\n----------\n\nRegarding Weakness 3: We regret that we could not explore a wider range of settings. Due to constraints on computational resources and time, we prioritized verifying our core modules. As you pointed out, further experiments on various hyperparameters will be essential in future studies.\n\n\n----------\nRegarding Questions 1 & 2: We did not conduct an ablation study on the Text Encoder. Although, as you noted, experimentation with different Text Encoders could yield valuable insights, we did not pursue this as it does not critically impact the function of the core modules we propose.\n\n\n----------\nRegarding Question 3: We focus on information transmission at the embedding level. Thus, visualizing the amount of information contained in an embedding is very challenging. At our current knowledge level, we do not have a method to visualize and compare the degree of information loss. 
Could you suggest a visualization approach?\n\n\n----------\nRegarding Question 4: S_g is represented as a graph rather than a single matrix, so expressing it in dimensions may be challenging. We assume your question pertains to the dimensionality of the Node Embeddings in S_g. S_g consists of n nodes, each embedded as a vector of dimension d, resulting in an n × d Node Embedding matrix.\n\n\n----------\nRegarding Question 5: S_p is structured similarly to S as a graph containing Node Embeddings after Pooling. However, we only project the LGPT Tokens to the LLM, so S_p information is not utilized.\n\n\n----------\nYour insightful feedback has greatly contributed to the improvement of our research. Thank you for dedicating time to review our work. We hope that our study will be accepted, leading to further discussion. We kindly request you to reconsider your evaluation score." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Regarding Weakness 1: GNP is a model designed for addressing Multi-Choice QA tasks. Since our task is not a Multi-Choice QA problem, direct comparison with GNP is not feasible. However, we incorporated the core module of GNP, the Cross-Modality Pooling Module, to perform comparative experiments.\n\n----------\n\nRegarding Weakness 2: Let n denote the number of nodes, t the number of prompt text tokens, g the number of GNN layers, and k the number of LGPTs. The time complexity of our Graph Encoder, which utilizes 3 GNNs, is therefore O(3g(n+k)). Meanwhile, the time complexity of G-Retriever and GraphToken is O(gn). Considering that k << n, the time complexity of our method and that of other baseline graph encoders remain the same at O(gn).\n\nThe time complexity required for LLM computation is proportional to the square of the prompt length due to the self-attention mechanism. In our model, t+k tokens are passed in the prompt, while t+1 tokens are passed in GraphToken and G-Retriever. We set k=8 so that k << t, ensuring that the time complexity of LLM computation in our model is identical to that in other baseline models at O(t^2).\n\n----------\nRegarding Weakness 3: Due to page limitations, we cited the source of our dataset (https://arxiv.org/pdf/2402.07630) instead. However, this is a very relevant point, and we will address this in the Appendix.\n\n----------\n\nRegarding Weakness 4: We plan to organize the code and make it available on GitHub once the anonymous review period concludes. We apologize for the lack of a comprehensive README file due to concerns about premature exposure of our results. 
Thank you for your patience.\n\n----------\n\nRegarding the Question: It appears a typo occurred before submission. We apologize for any confusion this may have caused.\n\n----------\n\nYour insightful comments have significantly contributed to the improvement of our research. We appreciate the time you dedicated to reviewing our work, and we hope this paper will be accepted to enable further discussions. We kindly ask you to reconsider your evaluation score." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Regarding Weaknesses: Our Early-Fusion approach adopts the methodology of QA-GNN (https://arxiv.org/pdf/2104.06378), as described in the main text. Although it is not a novel concept, we believe our application of this method in conjunction with LLMs represents a significant contribution as the first instance of its kind.\n\nThe concern of potential information loss when conveying information through a single parameter has been previously discussed in studies that introduce attention mechanisms to seq2seq models (https://arxiv.org/abs/1409.0473). In our experiments, we quantitatively verified this by comparing the results with LGPT configured with 1, 8, and 32 instances. However, as you pointed out, further analytical studies are required to understand precisely why this approach is effective and what specific types of information loss it mitigates.\n\n----------\n\nRegarding Question 1: We observed that performance improvements were more pronounced when applying LGPT to larger graph samples. However, since we lack a clear quantitative method to validate this, it was not included in the main text. We plan to conduct additional experiments on large-scale graph cases as soon as sufficient computational resources are available.\n\n----------\n\nRegarding Question 2: ExplaGraph showed similar trends to SceneGraph, and for WebQSP, performance was better with 32 tokens than with 8 tokens. We attribute this to the larger graph size in WebQSP compared to the other two datasets, but as this is only an assumption, we did not mention it in the main text due to the difficulty of quantitative validation.\n\n| Prompt Token | Expla Graphs | SceneGraphs | WebQSP | Average |\n|--------------|--------------|-------------|--------|---------|\n| 1 | 88.17 | 83.51 | 70.33 | 80.67 |\n| 8 | **88.62** | **85.19** | 70.70 | 81.50 |\n| 32 | 80.32 | 84.21 | **70.86** | 78.46 |\n\n\n----------\n\nYour comments and questions align closely with the insights we gained during our research process, and we plan to investigate these areas further. 
However, we hope that this paper, as an interim work, will stimulate further discussions and contribute to the advancement of this research topic. We kindly ask you to reconsider your evaluation of our study." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Regarding Weakness 1: \n\nThank you for your valuable comments. As you rightly pointed out, the use of Graph Embedding as prompts for LLMs is indeed a well-established research topic. However, our study introduces the concepts of Early Query Fusion and Learnable Pooling specifically for LLM prompts, which we believe constitute key contributions of our research.\n\nAlthough each module was inspired by prior works, as you and the cited research have noted, our approach to integrating these modules in the context of combining LLMs and GNNs is novel, and we consider it a primary contribution. It seems that we did not sufficiently highlight these aspects in our writing, which we will address in the revised version.\n\n----------\nRegarding Weakness 2: \n\nTo ensure a fair comparison, we minimally modified the G-Retriever while adding our proposed model. We also preserved the hyperparameters from the official code of G-Retriever. Our results show an improvement beyond the standard deviation range of G-Retriever’s average performance (1.96–5.33 standard deviations, depending on random seeds). While our sample size limits statistical testing such as t-tests, given the improvement relative to standard deviations, we believe these findings are not the result of cherry-picking.\n\n----------\nRegarding the Question: \n\nWe are unsure if we have understood your question accurately. 
If our answer does not align with your intent, please feel free to ask us again.\n\nIn Figure 3, we report the performance of fine-tuning the LLM without a GNN. The addition of a GNN without fine-tuning the LLM resulted in higher performance compared to fine-tuning the LLM without a GNN. Furthermore, the combination of GNN addition and LLM fine-tuning showed superior performance over all other baselines.\n\n----------\nYour thoughtful comments have greatly contributed to enhancing our research, and we are deeply grateful for this. However, we would appreciate further feedback on the rationale behind your lower rating to better address these areas. We kindly ask you to reconsider your evaluation after reviewing our response." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How sensitive is the model's performance to the choice of text encoder in Equation 7?\n2. Have the authors experimented with different text encoders (e.g., BERT variants, RoBERTa, T5) and observed any significant variations in performance?\n3. Regarding Equation 5, how does the choice of graph encoder architecture impact the model's performance?\n4. Can the authors provide case studies or visualization analysis demonstrating how LGPT addresses information loss compared to baseline methods?\n5. In Equation 9, please clarify the definition and dimensionality of $S_g$.\n6. For Equation 10, please provide a detailed explanation of $S_p$ and its role in the architecture." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper introduces an innovative early fusion mechanism that addresses a fundamental challenge in graph-language modeling: the seamless integration of structural and textual information. The learnable pooling tokens (LGPT) provide a flexible and adaptive approach to graph representation, offering advantages over traditional static pooling methods. \n\n2. The authors conduct extensive experiments across three diverse graph QA datasets, demonstrating the robustness and generalizability of their approach. 
The method achieves competitive performance compared to state-of-the-art baselines, while potentially offering improved computational efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach for integrating graph representations with large language models (LLMs), addressing the critical challenge of efficient graph-text interaction. The primary contributions are twofold: (1) an early fusion mechanism that performs message passing between sub-graph node representations and query text embeddings, and (2) a learnable pooling strategy utilizing dedicated tokens (LGPT) that act as information aggregators within the graph structure.\nThe early fusion mechanism is particularly noteworthy as it enables direct interaction between textual and structural information at the embedding level, potentially capturing more nuanced relationships compared to traditional late fusion approaches. The authors implement this through message passing operations that allow bidirectional information flow between the sub-graph nodes and query text representations.\nThe learnable pooling strategy introduces fully-connected LGPT tokens that serve as dynamic information hubs within the graph. These tokens effectively aggregate information from all nodes through message passing, potentially creating a more comprehensive and adaptable graph representation. This approach appears to offer more flexibility than static pooling methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's scalability argument lacks sufficient comparative analysis against existing methods like G-Retriever and GraphToken. The authors do not provide a detailed complexity analysis or empirical benchmarks to substantiate their efficiency claims. While the authors assert improved efficiency compared to Tian et al. 2024 (Line 210), this claim requires further scrutiny since: (a) the dominant computational cost typically lies in LLM inference; (b) the relative improvement in message-passing efficiency may be marginal in the overall computational pipeline; (c) no concrete timing or memory usage comparisons are provided. \n2. The evaluation is primarily confined to GraphQA tasks, leaving several important questions about generalization unexplored: (a) the method's effectiveness on standard graph learning tasks (node classification, link prediction) remains unvalidated; (b) the paper lacks a theoretical or empirical bridge between GraphQA performance and the claimed improvements in node-level and graph-level information integration. A broader evaluation across diverse graph-based tasks would strengthen the paper's contributions. \n3. The hyperparameter analysis in Section 4.4 shows significant gaps in the experimental design: the LGPT token count investigation only examines extreme values (8 and 32), omitting crucial intermediate points, and the impact of other critical hyperparameters (e.g., message passing steps, fusion layer configurations) is not thoroughly explored. \n4. The paper should improve its methodological clarity with (a) a more rigorous theoretical justification for the chosen LGPT architecture and (b) a clear computational complexity analysis compared to baseline methods."
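For reference, the complexity figures that the authors state in their responses above can be summarized in their own notation (n nodes, g GNN layers, k LGPT tokens, t prompt text tokens):

```latex
\begin{align*}
\text{Baseline graph encoders (G-Retriever, GraphToken):}\quad & O(gn)\\
\text{LGPT graph encoder (three GNNs), } k \ll n:\quad & O\!\left(3g(n+k)\right) = O(gn)\\
\text{LLM self-attention over the prompt, } k \ll t:\quad & O\!\left((t+k)^2\right) = O(t^2)
\end{align*}
```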
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What's the perfomance of LGPT in figure 1 without GNN and fine-tune lanaguage model (i.e. GraphToken with LLM fine-tuning)? It would be interesting to see whether design of graph pooling is still neccessary when LLM is tunable given that GNN introduces additional parameters." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper identifies a critical disadvantage of graph pooling method; the granularity control is either graph-level or node-level. \n2. On this pain point, the proposed multiple tunrable prompt (LGPT) effecvtively imrpove the performance on benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages graph neural networks and large language models for task of knowledge graph question answering. Based on recent proposed techniques include graph soft prompt and query-aware graph prompting. The author proposed query-aware graph pooling to overcome the limitations of node-level and graph-level representations. In experiments, it shows competitive performance on recent proposed graph QA benchmarks in different domains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the paper is questionable. As the author mentioned, recent work such as G-Retriever;Graph Token and GNP (Graph Neural Prompting) has covered most of the techniques used in the paper except the graph prompt paramters. However, the learnable graph prompt is proposed in multiple related work including [1] and supernodes (connect every node to a virtual node for pooling) in graph pooling [2] literature.\n\n2. The proposed work re-uses most of the component of G-Retriever, which also causes my concern on cherry-picking hyperparameters given the performance improvements over G-retriever is subtle.\n\n\n\n\n[1] Universal prompt tuning for graph neural networks, Neurips 2023\n[2] Understanding Pooling in Graph Neural Networks," }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
“ However, the key difference from these methods is that, instead of pooling into a single graph embedding, our approach uses multiple learnable tokens for pooling, thereby reducing information loss” - Is there a pattern in the information loss? Is there a way to quantify this loss other than looking at the accuracy? What kind of data samples perform better when we increase the number of LGPT? \n\n2. How does performance vary with the number of LGPT across the different datasets?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is easy to read and understand. Extensive experiments and analyses are presented to support the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the problem of Textual-Attributed Graph QA, divided into two main steps: sub-graph retrieval and answer generation. For answer generation, their approach transforms the sub-graph into textual embeddings through a prompt, generates embeddings, and then uses a graph encoder with learnable parameters to process them. The paper highlights scalability issues in node-level prompting (where each node is treated as a separate token in the language model) and information loss in graph-level projection (where the entire graph is compressed into a single vector). To address this, the authors propose Learnable Graph Pooling Tokens (LGPT), a pooling method that introduces learnable parameters (tokens) that connect to all nodes and perform message passing. This method allows for flexible, efficient graph representation that balances fine-grained and global information, achieving improved performance on Graph QA tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The idea of “early fusion” by forming an external node and fully connecting it to the other nodes in the graph is not novel to the field. The LGPT idea that increasing the number of tokens would increase performance seems intuitive, but I would like to see more analysis here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What is the meaning of Sf in the author keywords?\n- See weaknesses and make some revisions to the paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The combination of LLM and GNN is an important research topic.\n- The design of this paper is reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This paper proposes a learnable graph pooling module to enhance LLM-based GraphQA."
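The pooling mechanism debated across these reviews, a small set of learnable tokens fully connected to every node of the sub-graph, can be illustrated with a minimal sketch. The module below is hypothetical: cross-attention stands in for the paper's GNN message passing, and all names are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnableGraphPoolingTokens(nn.Module):
    """Minimal sketch of LGPT-style pooling: k learnable tokens aggregate
    node embeddings, balancing node-level detail against a graph-level
    summary. Hypothetical module, not the authors' code."""

    def __init__(self, d_model: int, num_tokens: int = 8, num_heads: int = 4):
        super().__init__()
        # one learnable embedding per pooling token
        self.tokens = nn.Parameter(torch.randn(num_tokens, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: (batch, n_nodes, d_model) embeddings from a graph encoder
        batch = node_emb.size(0)
        queries = self.tokens.unsqueeze(0).expand(batch, -1, -1)
        # each token attends to every node (the "fully connected" aggregation)
        pooled, _ = self.attn(queries, node_emb, node_emb)
        return pooled  # (batch, num_tokens, d_model), projected as LLM prompt tokens

# usage: pool a 50-node sub-graph into 8 prompt tokens
pool = LearnableGraphPoolingTokens(d_model=256, num_tokens=8)
print(pool(torch.randn(2, 50, 256)).shape)  # torch.Size([2, 8, 256])
```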
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The novelty seems to be limited in this paper because authors only made a new incremental design in the graph encoder. The core paradigm of graph QA is preserved compared with other baselines.\n- Some important GNN+LLM baselines are missing in the experiments. For example, GNP [1].\n- The training/inference efficiency of the method should be compared with other baselines.\n- The detailed information about the graphs in each dataset is not reported.\n- The original dataset and README instructions are not provided in the code, making it difficult to reproduce the performance.\n\n\n\n\n[1] Graph neural prompting with large language models" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes a novel method using Learnable Graph Pooling Token (LGPT) and Early Query Fusion techniques to enable efficient graph representation in large language models, achieving a 4.13% performance improvement on the GraphQA benchmark." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024queryaware,\ntitle={Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2mg5FvBz0J},\nnote={under review}\n}" }, "abstract": { "value": "Graph-structured data plays a vital role in numerous domains, such as social networks, citation networks, commonsense reasoning graphs and knowledge graphs. While graph neural networks have been employed for graph processing, recent advancements have explored integrating large language models for graph-based tasks. In this paper, we propose a novel approach named Learnable Graph Pooling Token (LGPT), which addresses the limitations of the scalability issues in node-level projection and information loss in graph-level projection. LGPT enables flexible and efficient graph representation by introducing learnable parameters that act as tokens in large language models, balancing fine-grained and global graph information. Additionally, we investigate an Early Query Fusion technique, which fuses query context before constructing the graph representation, leading to more effective graph embeddings. Our method achieves a 4.13\\% performance improvement on the GraphQA benchmark without training the large language model, demonstrating significant gains in handling complex textual-attributed graph data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Neural Network", "Large Language Model", "Continuous Prompting", "Sf" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bc08711e4442572f09eb4695d652f57667d5d0d3.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1c7c3d7ac9421d0e6e88d2e60aba453c0df8bc0c.zip" }, "title": { "value": "Query-Aware Learnable Graph Pooling Tokens as Prompt for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2miMc8FR0j
SCALE: Augmenting Content Analysis via LLM Agents and AI-Human Collaboration
main
Active
Content Analysis;Large Language Model;Multiagent;Simulation;Computational Social Science;AI for Science
other topics in machine learning (i.e., none of the above)
3;3;5;5
4;5;3;5
1;2;3;2
2;1;2;3
2;2;3;2
4
4.25
2
2
2.25
-0.301511
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See \"Weaknesses\"." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "SCALE introduces a multi-agentic approach to content analysis, enhancing scalability and efficiency by reducing the human resources required to make large-scale, high-quality analysis feasible. SCALE demonstrates high flexibility, adapting across multiple datasets without modifications. The paper might have a contribution to social sciences by enabling large-scale analysis traditionally constrained by labor-intensive methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose SCALE, which is a tool to perform content analysis in social science. By automating text coding and facilitating multi-agent discussion, SCALE approximates human judgment in text annotation tasks. The framework integrates AI-human collaboration, which mitigates algorithmic bias. The paper evaluates SCALE on diverse social science datasets, showing its effectiveness in improving large-scale content analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "One big concern about the paper is that it does not provide prior benchmarks. The datasets used by the authors are not very commonly used in this literature. I recommend that the authors use at least one dataset with prior benchmarks on multi-label classification (e.g., COCO, Peskine et al. 2023) or apply prior methodologies of multi-label classification on your datasets. How does the plain-vanilla GPT-4o perform on your dataset without including multiple agents?\n\nIt is well-known that agentic approaches improve LLMs’ performance. However, the approaches typically require more computational resources and time. It would be helpful if the authors could include each ablation's cost and processing time. The authors acknowledge this issue in Section 6, but it will be helpful for the readers to see the informational gain of this framework along with computational requirements.\n\nIn Section 5.4.3, the authors might want to include some desired codebook structures in their prompt. They could add layer of agents that review the final product by including several instructions, e.g., examining whether there are overlapping categories by providing some theory backgrounds. They might even try fine-tuning the LLMs using some domain knowledge instead of using the plain-vanilla versions.\n\nMissing citations: Several works have already explored how the discussion among LLM agents can improve overall performance. For example, see Chan et al. (2023) and Kim et al. (2024). I’m surprised that the authors do not acknowledge any of these studies. 
At a high level, what this paper shows is similar to the value of a multi-agentic approach in classification problems.\n\nReferences\nPeskine, Youri, et al. \"Definitions Matter: Guiding GPT for Multi-label Classification.\" Findings of the Association for Computational Linguistics: EMNLP 2023.\nChan, Chi-Min, et al. \"ChatEval: Towards better LLM-based evaluators through multi-agent debate.\" arXiv preprint arXiv:2308.07201 (2023).\nKim, Alex, Keonwoo Kim, and Sangwon Yoon. \"DEBATE: Devil's Advocate-Based Assessment and Text Evaluation.\" arXiv preprint arXiv:2405.09935 (2024).\n\nMinor Comments\n1)\tYou list five contributions but say the contributions are fourfold on page 2.\n2)\tWhy are some datasets benefiting heavily from discussions while others are not (Figure 4)? It would be helpful to include some insights on where the discussions will likely improve the model performance more and why.\n3)\tIn Table 3, it is concerning that you achieve the highest accuracy when human intervention is frequently made, and the LLM strictly follows it. Doesn’t this suggest that human interventions are required and LLMs alone cannot perform the task reliably?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "It appears that some names used as example scenarios (see row 237) actually exist and refer to real-life situations (as confirmed by a simple web search). In my opinion, these can and should be omitted (e.g., by replacing them with placeholder nicknames)." }, "flag_for_ethics_review": { "value": [ "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do the authors ensure that SCALE is grounded in social science knowledge beyond simple prompting?\n- Similarly, the authors claim (row 233) that agents [...] do not rely on external knowledge or data beyond what is provided in the codebook [...]. How do they ensure that agents do not leverage their own knowledge/biases in conducting content analysis, going beyond the received guidelines?\n- Do the authors experiment with different initializations for the agents? That is, what is the effect of specifying agents' instruction, gender, and experience within prompts?\n- As hallucinations are likely to occur with LLMs, how do the authors handle them?\n- What is the default behavior when agents do not reach agreement within k iterations?\n- As the temperature is not enough to reduce randomness in LLMs, which values did the authors use for top_p and top_k?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The core idea of this manuscript to leverage LLMs for augmenting content analysis is interesting, as it can lead to improvements in capabilities (as the LLMs' intrinsic word model is rather rich and varied) and scalability (e.g., by alleviating the human burden in annotating large-scale content)." 
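Several of these questions probe the coding-discussion-consensus loop (e.g., what happens when agents disagree after k rounds). A minimal sketch of such a loop, using hypothetical placeholder names rather than SCALE's actual interface, is:

```python
from dataclasses import dataclass

@dataclass
class CoderAgent:
    """Stub standing in for an LLM-backed coding agent; the method
    bodies are placeholders for real prompted LLM calls."""
    name: str

    def code(self, unit: str, codebook: list) -> str:
        return codebook[0]  # placeholder: assign the first category

    def discuss(self, unit: str, codes: dict) -> str:
        return f"{self.name} saw codes {codes}"  # placeholder rationale

    def recode(self, unit: str, codebook: list, notes: list) -> str:
        return codebook[0]  # placeholder: revise after discussion

def scale_coding_round(agents, unit, codebook, max_rounds=3):
    """Code -> discuss -> re-code until consensus or a round limit;
    the False branch is the unresolved-disagreement case asked about."""
    codes = {a.name: a.code(unit, codebook) for a in agents}
    for _ in range(max_rounds):
        if len(set(codes.values())) == 1:  # consensus reached
            return codes, True
        notes = [a.discuss(unit, codes) for a in agents]  # inter-agent discussion
        codes = {a.name: a.recode(unit, codebook, notes) for a in agents}
    return codes, False

agents = [CoderAgent("A"), CoderAgent("B")]
print(scale_coding_round(agents, "sample text", ["positive", "negative"]))
```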
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes SCALE, a multi-agent framework to simulate content analysis using LLMs. The overall idea is to incorporate different phases of content analysis, such as text coding, inter-agent discussion, and codebook updating, in a comprehensive framework carried out by LLMs as a multi-step process. Additionally, the authors allow the framework for human intervention, to enhance AI-human expert collaboration. The SCALE framework was tested on five real-world datasets, spanning seven multi-class and multi-label classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite focusing on an interesting and promising idea, this manuscript presents different criticalities, as follows:\n- There is much emphasis on the capability of SCALE to incorporate domain knowledge of social science, yet this process seems limited to one (of many possible) prompting strategies, undermining the robustness and technical depth of the proposed framework. \n- The experimental setup is not appropriate, as there is no comparison with baseline models (e.g., ML-based ones for sentiment analysis). Indeed, it is just confined to testing different prompting strategies, with two commercial models (i.e., GPT-4O and 4O-mini). Similarly, some experimental choices (e.g., the very low number of agents despite the sensitivity results) are not adequately motivated.\n- The experimental results turn out to be particularly weak for 3 out of 7 tasks, with very low coding accuracies. Also, some additional quantitative measures (e.g., inter-agent agreement) would be beneficial for a better understanding of how SCALE handles the annotation processes.\n- Despite aiming at fostering better human-AI interaction in content analysis, as well as strong capabilities, there is no human qualitative evaluation of the SCALE's capabilities. This would be needed to further validate the helpfulness of the proposed framework.\n- The entire study relies solely on the GPT family of models. Experimenting with other (e.g., open) models would be beneficial for a broader applicability and adoption of the proposed framework.\n- There are no technical details on the agents deployment and interaction. This is a key aspect for multi-agent systems, and should be stated in the manuscript to also foster reproducibility. Similar considerations hold for the human-in-the-loop setting.\n- To properly validate how SCALE complements humans, there should be some more emphasis on the patterns occurring within it, and critical analysis on how the different phases differ or resemble humans. For instance, for RQ2, certain datasets see limited to no improvement after agents' discussion, why?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The process of content analysis is always subjective isn't it? 
How does the method reduce subjectivity, and is that even the goal? \n- The appendix provides an overview of the different prompts associated with the different steps. How much manual effort is involved when applying the framework? Is the codebook really updated automatically, or do the researchers have to manually extract codebook changes and copy them into their codebook? \n- The framework is designed to iteratively update a codebook and use it as the basis for coding. There are labels for each dataset. Were these labels the starting point for the coding task? How exactly were the experiments conducted? Did you only evaluate the coding step, or did the experiments include the development of a codebook for each dataset? \n- Why did you conduct the first experiments with only 2 agents? \n- Are all tasks used for the experiments multi-label tasks?\n- Does the average accuracy of 0.7 refer to all models? You could also add a new column where you could plot the average.\n- How much prompt engineering was involved in the process of building the framework? How did you come up with the different prompts? Do the results depend much on the wording of the prompts?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper has a clear structure and demonstrates strengths and limitations of the method through several experiments. The topic is very relevant for the social sciences." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a framework for automatic content analysis of unstructured data with LLMs called SCALE. The framework includes the steps of automated text coding, inter-agent discussion and dynamic codebook updates, while also allowing for human interventions. The goal is to develop a tool for social scientists that is able to support the process of content analysis at a large scale." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The central Figure (Figure 2) is not that easy to understand. It should be self-explanatory by reading the caption. It would be very helpful to know which dataset is taken as an example here. Unfortunately, the paper does not always read fluently. Sometimes articles are missing and there are some grammatical errors. Some technical details are missing, such as the implementation of the chain of thought and tree of thought baselines (or at least the references are missing - see below). Also, the formula for the used accuracy measure should be written out (in my opinion). \nThe human intervention experiment is not really explained well. How much did the humans intervene? Is there a certain number of rounds? Is it the same setup as in the previous experiments?\nOverall, the idea of the framework and the process of inter-agent discussion for automated content analysis is good, but some important details are missing. It is also not clear from the paper how much manual effort is required to apply the whole framework. What are the necessary steps (e.g. developing personas, a first version of the codebook...)? \nAs the authors note at the end, the inter-agent discussion introduces significant computational overhead. This leaves open the question of how practical the framework is. \n\nMissing references:\n- Chain of thought prompting as introduced by Wei et al. (2022): Wei, Jason, et al. 
Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022): 24824-24837.\n- Tree of thought prompting: Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T., Cao, Y., & Narasimhan, K. (2024). Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36. \n- Self-Consistency: Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., ... & Zhou, D. Self-Consistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on Learning Representations.\n\n\nSmall errors:\n* Line 088 should probably have a period at the end of \"Human Intervention\" to be consistent with the other items.\n* line 190 is missing a period before (c)\n* line 208: it should be N personas P, which ..., are derived from.. s\n* The abbreviation NES is not introduced in the text\n* line 241: \"a K-round discussion\" instead of \"an K-round..\" \n* Line 320: It would be very nice if the Hamming loss was explicitly written out here in the formula. \n* Line 461: lLM > LLM" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of using LLM agents and AI-human collaboration for content analysis is interesting.\n2. The paper is easy to follow. For example, Fig. 2 explains the overall workflow of the SCALE framework in detail.\n3. The paper could attract a large audience interested in using LLM agents to simulate social science research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents SCALE, an interesting multi-agent framework designed to simulate and augment the content analysis process using LLMs and AI-human collaboration. Automating key phases of content analysis, including text coding, inter-agent discussion, and codebook evolution, could reduce the time, human resources, and costs traditionally required for content analysis. It also incorporates human intervention to mitigate algorithmic bias and improve contextual sensitivity. The paper suggests that SCALE could transform social science research by providing an efficient tool for analyzing large volumes of data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In the introduction, the authors mention that one of the drawbacks of using human experts is the time and labor cost. The proposed framework would benefit significantly from a comparison of the time/cost of human annotation versus LLM annotation.\n2. 
I think Section 5.3 SUPERIOR PERFORMANCE OF SCALE should emphasize the overall quality of the whole framework instead of coding accuracy alone. The classification task is relatively trivial for LLMs.\n3. Human evaluation (or detailed results) might be needed to assess the overall quality of using LLM agents to simulate content analysis beyond the Codebook Update Phase." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose SCALE, a novel LLM multi-agent framework to automate content analysis, traditionally labor-intensive in social science, while integrating human oversight, enabling scalable, high-quality annotations approximating human judgment." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024scale,\ntitle={{SCALE}: Augmenting Content Analysis via {LLM} Agents and {AI}-Human Collaboration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2miMc8FR0j},\nnote={under review}\n}" }, "abstract": { "value": "Content analysis is a fundamental social science research method that breaks down complex, unstructured texts into theory-informed numerical categories. It has been widely applied across social science disciplines such as political science, media and communication, sociology, and psychology for over a century. This process often relies on multiple rounds of manual annotation and discussion. While rigorous, content analysis is domain knowledge-dependent, labor-intensive, and time-consuming, posing challenges of subjectivity and scalability. In this paper, we introduce SCALE, a transformative multi-agent framework to $\\underline{\\textbf{S}}$imulate $\\underline{\\textbf{C}}$ontent $\\underline{\\textbf{A}}$nalysis via large language model ($\\underline{\\textbf{L}}$LM) ag$\\underline{\\textbf{E}}$nts. This framework automates key phases including text coding, inter-agent discussion, and dynamic codebook updating, capturing human researchers' reflective depth and adaptive discussions. It also incorporates human intervention, enabling different modes of AI-human expert collaboration to mitigate algorithmic bias and enhance contextual sensitivity. Extensive evaluations across real-world datasets demonstrate that SCALE exhibits versatility across diverse contexts and approximates human judgment in complex annotation tasks commonly required for content analysis. Our findings have the potential to transform social science and machine learning by demonstrating how an appropriately designed multi-agent system can automate complex, domain-expert-dependent interactions and generate large-scale, quality outputs invaluable for social scientists." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Content Analysis", "Large Language Model", "Multiagent", "Simulation", "Computational Social Science", "AI for Science" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b1368c5045677d98b05b10cf240a54a95c3ee81c.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SCALE: Augmenting Content Analysis via LLM Agents and AI-Human Collaboration" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
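For reference, the Hamming loss that the first SCALE review above asks to be written out explicitly (its note on line 320 of the reviewed paper) has the standard multi-label form below; the notation ($N$ samples, $L$ labels per sample) is ours, not necessarily the paper's.

```latex
\mathrm{HammingLoss} = \frac{1}{N L} \sum_{i=1}^{N} \sum_{j=1}^{L} \mathbb{1}\left[ y_{ij} \neq \hat{y}_{ij} \right]
```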
2mqb8bPHeb
T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching
main
Active
diffusion model;transformers;model stitching
generative models
5;6;8;8
3;4;3;5
3;3;3;3
3;3;3;3
3;4;2;3
6.75
3.75
3
3
3
0.406181
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the method perform when architectural differences between small and large models are more significant? Are there specific architectural compatibility requirements?\n- The improved prompt alignment for stylized models is intriguing. Could you provide more analysis of why this occurs and how generally applicable this finding is?\n- What are the primary failure modes of T-Stitch? Are there specific scenarios where the method consistently underperforms?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "### Novel & Foundational Insight\n - Deep understanding of diffusion models' behavior across timesteps\n - Thorough empirical validation of latent space similarity between models\n - Clear frequency analysis supporting the theoretical foundation\n - Novel perspective on leveraging model size differences temporally\n\n### Practicality\n- Training-free nature enables immediate deployment\n- Compatible with existing acceleration techniques\n- Works across various architectures and model families\n- Clear implementation guidelines and deployment considerations\n\n\n### Comprehensive Empirical Validation\n- Extensive experiments across multiple architectures\n- Thorough ablation studies covering various aspects\n- Clear demonstration of speedup-quality tradeoffs\n\n\n### Broader Impact & Applications\n- Unexpected benefits in prompt alignment for stylized models\n- Natural interpolation between style and content\n- Practical applications in Stable Diffusion ecosystem\n- Potential implications for efficient model deployment" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces T-Stitch, a training-free approach to accelerate sampling in diffusion models by strategically utilizing different-sized models across the denoising trajectory. The key insight is that small and large models trained on the same data distribution learn similar encodings, particularly in early steps where low-frequency components dominate. By leveraging this property, T-Stitch uses smaller models for early steps (global structure) and larger models for later steps (fine details), achieving significant speedup without quality degradation. The method demonstrates broad applicability across various architectures (DiT, U-Net, Stable Diffusion) and shows interesting benefits for stylized models' prompt alignment. Extensive experiments validate the effectiveness across different settings, samplers, and guidance scales." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Critical Absence of Limitations Analysis\n- Paper lacks a dedicated section for discussing limitations\n- No systematic analysis of failure cases\n- Insufficient discussion of edge cases and potential risks\n- Missing critical self-reflection on method boundaries\n\n### Theoretical Gaps\n- No mathematical justification for the 40% threshold\n- Lack of theoretical guarantees for quality preservation\n- Missing analysis of optimal model size ratios\n- Incomplete understanding of feature compatibility requirements\n\n### Architectural Considerations\n- Limited analysis of cross-architecture compatibility\n- No clear guidelines for multi-model (>2) scenarios\n- Insufficient investigation of feature space alignment\n- Missing discussion of architecture-specific optimization\n\n### Practical Implementation Challenges\n- Memory overhead management not thoroughly addressed\n- Pipeline complexity implications understated\n- Limited guidance for scenarios without suitable small models\n- Deployment considerations in resource-constrained environments lacking\n\n\n### +)\n- The absence of a dedicated limitations section limits the paper's completeness" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Your idea reminds me a bit of works on early-exit diffusion models [1,2] where the depth of the denoising network is made adaptive based on the estimated difficulty of the sampling step. It could be interesting to draw further parallels between early-exit and your stitching approach.\n\n\n[1] [AdaDiff: Accelerating Diffusion Models through Step-Wise Adaptive Computation](https://arxiv.org/abs/2309.17074)\n\n[2] [DuoDiff: Accelerating Diffusion Models with a Dual-Backbone Approach](https://arxiv.org/abs/2410.09633)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Their main idea of leveraging models of various sizes throughout the diffusion sampling process is simple, yet it is shown to be effective. 
The simplicity is an added benefit in my opinion, as it makes the method more reproducible and more likely to be adopted\n- I also believe their idea to be novel (though I am not fully up to date with the diffusion literature due to its high pace)\n- The experiments are very comprehensive: they try out their trajectory-stitching approach on various backbone architectures (DiT, UNet), with various samplers, for unconditional/conditional cases, etc.\n- Also, I like how instead of proposing yet another new efficient diffusion model (and thus contributing to the model zoo), the authors find a smart way to combine/reuse the existing models via their trajectory-stitching approach" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method to accelerate sampling in diffusion models by using a sequence of two denoising networks, a smaller one followed by a larger one (instead of using the same network for all sampling steps as is traditionally done). In their experiments, they show their method can lead to meaningful computational savings at little to no cost to the quality of generated images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I think the writing can be improved. For the camera-ready version it would make sense to move the background/preliminaries to the main text and perhaps to move some of the experiments to the appendix. Also, I find Section 3 quite chaotic (it talks about too many different things, from motivation to model design and connection to other efficiency techniques like speculative decoding)\n- It is not clear how to select the switching point/threshold between the small and large model (r1). While I understand that by varying it you can get a Pareto frontier, that still requires running/evaluating a large number of candidate thresholds." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The work primarily relies on the observation of the alignment of noise predictions between models of different scales during the early stages of the denoising process (Fig. 3). While this is an intriguing phenomenon, the paper does not provide sufficient explanation for why this occurs. Furthermore, the magnitude of the latent vectors is also important. Does the $L^2$-distance exhibit a similar pattern as shown in Fig. 3?\n\nI believe that the requirement for a shared latent space is a strict condition for this method. It is unclear whether this method is also robust for models trained with different configurations, such as varying noise schedules (like variance-preserving versus cosine) and different diffusion steps (e.g., T=100 versus T=500).\n\nIs it possible that small models trained only over larger T steps (for example, $t \\sim [70, 100]$ with a total $T=100$) yield better results?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, this is a well-written paper that presents a simple but effective approach for accelerating the sampling speed of large diffusion models. The authors convey their ideas clearly and support their approach through extensive experiments. I guess the key significance of this stitching approach is that it is orthogonal to other techniques, like model distillation or improved ODE solvers, allowing it to be easily combined with other methods to further reduce inference time." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new method to speed up the sampling efficiency of diffusion models. By using smaller models for the initial denoising steps and larger models in later stages, this approach greatly boosts sampling speed while keeping quality on par. Notably, this method is distinct from existing training-based or training-free techniques aimed at improving sampling efficiency. The authors demonstrate the effectiveness of their approach through extensive experiments, highlighting improved quality-efficiency trade-offs with a clear Pareto frontier." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Even if we ignore the inference time needed for running the small models, the time to generate samples of comparable quality can still be reduced by 30-40% at most. It is hard to say this method is a groundbreaking technique for improving sampling efficiency in diffusion models. While the paper presents a comparison of the Pareto frontiers between T-stitching and M-stitching, it might be more insightful to compare it with methods like progressive distillation, which can be much faster and does not need to store multiple models.\n\nAdditionally, the approach uses models of different scales along the same denoising trajectory, which necessitates that both the small and large models predict score functions in the same latent space. This requirement may limit its applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I do not think I have any concerns here." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see the two points about limitations I have raised in my Weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Generally, I believe this is a good paper backed by solid experimentation. 
It has extensive comparative analysis involving various timesteps and samplers, and also compares itself against other methods, including those that are training-based, training-free, and search-based.\n\nThe paper is also well written and clearly motivated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a training-free acceleration technique named T-Stitch for diffusion models. The core spirit of the approach is to employ a compact model for early timesteps and a more substantial model for later stages. The authors have provided empirical evidence that model performance remains unaffected even when the lighter model is employed for 40% of the initial steps. While the proposed method is simple and efficacious, parts of its evaluation appear to rely heavily on empirical evidence, and, in my opinion, it falls into a typical trap of this type of paper: not including further limit studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In my view, theoretical insights and empirical studies hold equal value; the simplicity of an idea does not detract from its merit if it is proven effective, especially in a topic like Efficient AI. However, my primary issue with such papers lies in my preference for a clear explanation of the method's limitations, also through sets of experiments.\n\nFirst, the authors of the T-Stitch paper state that 40% is an appropriate cutoff for switching models, a decision purely grounded in empirical evidence. This raises the question of how well-founded this choice is. If I were to apply this switching method to a different pair of diffusion models, would the 40% value still be relevant? Intuitively, the cutoff point likely hinges on the performance disparity between the more and less powerful models. From that perspective, if you put the model difference (strong-model FLOPs - weak-model FLOPs) on the x-axis and the best cut-off point on the y-axis, do you simply expect a flat line at a 40% cut-off?\n\nSecond, although the authors did claim that the method can go beyond pair-wise stitching, and have demonstrated how switching (I would maybe actually call this switching rather than stitching) can happen across 3 models, the limitations of this remain unclear. Clearly, an increased number of models would complicate the decision-making on where to switch, and could turn this method into a search-based one. More importantly, this switching must have certain limitations, especially when one has limited diffusion time steps: with N models to stitch/switch across M time steps, as N becomes larger or M becomes smaller, the return of this optimization should inevitably diminish. \n\nAlso something minor: Figure 5: the bottoms of the images are cropped." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Trajectory Stitching, a simple but effective technique that leverages small pretrained diffusion models to accelerate sampling in large pretrained diffusion models without training."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024tstitch,\ntitle={T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2mqb8bPHeb},\nnote={under review}\n}" }, "abstract": { "value": "Sampling from diffusion probabilistic models (DPMs) is often expensive for high-quality image generation and typically requires many steps with a large model. In this paper, we introduce sampling Trajectory Stitching (T-Stitch), a simple yet efficient technique to improve the sampling efficiency with little or no generation degradation. Instead of solely using a large DPM for the entire sampling trajectory, T-Stitch first leverages a smaller DPM in the initial steps as a cheap drop-in replacement of the larger DPM and switches to the larger DPM at a later stage. Our key insight is that different diffusion models learn similar encodings under the same training data distribution and smaller models are capable of generating good global structures in the early steps. Extensive experiments demonstrate that T-Stitch is training-free, generally applicable to different architectures, and complements most existing fast sampling techniques with flexible speed and quality trade-offs. On DiT-XL, for example, 40% of the early timesteps can be safely replaced with a 10x faster DiT-S without a performance drop on class-conditional ImageNet generation. We further show that our method can also be used as a drop-in technique to not only accelerate the popular pretrained Stable Diffusion (SD) models but also improve the prompt alignment of stylized SD models from the public model zoo. Finally, the explicit model allocation strategy of T-Stitch significantly reduces the need for training or searching, delivering high deployment efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "diffusion model", "transformers", "model stitching" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e4a67cf762625cede46883126e136009211af00b.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
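To make the trajectory-stitching recipe described in the T-Stitch record above concrete, here is a minimal sketch of the sampling loop: run a small denoiser for the first fraction r1 of the steps, then switch to the large one. All names (`small_eps`, `large_eps`, `step_fn`) are illustrative placeholders, not the authors' implementation.

```python
def t_stitch_sample(small_eps, large_eps, step_fn, x_T, timesteps, r1=0.4):
    """Trajectory stitching: denoise with the small model for the first
    r1 fraction of the trajectory, then switch to the large model.

    small_eps / large_eps: callables (x, t) -> predicted noise.
    step_fn: one step of any fixed sampler, (x, eps, t) -> next x
             (e.g. a DDIM update; an assumed interface for this sketch).
    """
    switch_at = int(r1 * len(timesteps))  # e.g. r1=0.4 -> 40% early steps
    x = x_T
    for i, t in enumerate(timesteps):
        model = small_eps if i < switch_at else large_eps
        x = step_fn(x, model(x, t), t)
    return x
```

With r1 = 0 this degenerates to sampling with the large model only, and with r1 = 1 to the small model only; sweeping r1 over [0, 1] traces out the speed/quality Pareto frontier the reviews discuss.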
2o58Mbqkd2
The Superposition of Diffusion Models
main
Active
generative modelling;protein generation;image generation;diffusion models
generative models
5;6;8
2;3;5
3;2;3
2;3;4
2;4;2
6.333333
3.333333
2.666667
3
2.666667
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "**Major Questions**\n1. I am curious why text-based evaluation metrics such as CLIP Score were not used. It seems like an obvious choice.\n2. In Section 2.1, how were the mixing coefficients $w_j$ actually set? Is the model capable of adjusting the weights for mixing? I am also curious about how $N$ for the individual forward process was actually set.\n3. The method overview on page 5 mentions that pre-trained diffusion models can be used, but I am curious if the only dataset actually used is CIFAR-10, as shown in Table 1 (the experiment providing the models with CIFAR-10 with the labels divided into two sets of five). I think the paper would be stronger if the authors provided results on various datasets.\n\n**Minor Questions**\n1. I think there should be punctuation after *\"...a superposition of elementary vector fields\"* on page 3, lines 140 and 141.\n2. I think the introduction is too long. It could be shortened, since it occupies 1/3 of the entire paper.\n3. It would have been interesting to see a comparison according to the distance between the disjoint sets." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and easy to understand. There are almost no grammatical errors. By developing the idea of superposition and using theoretical principles, the authors prove the idea's potential and present a reasonable result.\n2. They apply their work to two individual tasks, which could be divisive among readers, but I found it interesting.\n3. Also, it is interesting that the authors discover their model follows traditional operators such as logical OR and logical AND, making it intuitive. Similarly, the background explaining how superposition emerges from diffusion models via vector fields and propositions is interesting.\n4. They use nine propositions, two theorems, and one lemma to support their idea, which helps readers understand why their algorithms work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a method to combine multiple pre-trained diffusion models at inference time without retraining the models. They develop theoretical principles using the continuity equation to show how diffusion models can be viewed as a superposition of elementary vector fields. Here, they implement two algorithms to combine pre-trained diffusion models. One is a mixture of densities (sampling from one model OR another), and the other is equal densities (samples that are likely to belong to one model AND another). They also overcome the challenges with existing diffusion models, such as (1) 
differences in the marginal superpositional vector field between different models and (2) the divergence operation's time complexity, by introducing their density estimator. They apply their approach in various ways, such as combining models trained on disjoint datasets, concept interpolation in image generation, and improving the structure of protein design." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**(Main) Qualitative and Quantitative Results with Figures**\n1. Figure 1 is too weak to verify the novelty of the model. I also think the generated images in the appendix, as well as the qualitative results, are mediocre.\n2. The authors only use the AND operation (sampling equal densities) for the qualitative results, and the OR operation for the quantitative results. I believe that including the results for the OR operation in the qualitative results and the AND operation in the quantitative results would strengthen the paper. This would provide a more comprehensive view of the statement on line 104 on page 2: \"improvements in designability and novelty generation\".\n3. Figure 2 does not show how the generated images are actually arranged. It is necessary to verify whether the same results occur when directly arranging the generated images with the trained datasets.\n\n**Evaluation metrics and ablation study**\n1. The comparison group for the paper's qualitative results is insufficient. Comparisons with other recent models that produce dual images, such as factorized diffusion or visual anagrams (Geng et al., 2024), could be added, since it is clear that the latent diffusion result from just adding the prompt 'that looks like' would indeed be worse than the proposed method.\n2. Similarly, in the process of making baselines for concept interpolation, I wonder if the value of the ablation study would have increased if the directions A->B and B->A were swapped and the comparison group were chosen using the better result.\n3. The execution times for the experiments were not provided. The authors claim to have solved the computational expense issue, but no results support this claim.\n\n**Clarity of the paper**\n1. Proposition 8 appears to be quite important but confusing because it is cut off by the page break. Listing the individual terms of $A\\kappa = b + o(\\Delta t)$ on the same page would improve comprehension.\n2. The related work section appears almost at the end of the paper (page 10), and I think it should come earlier. It appears so out of the blue that it somewhat interferes with understanding.\n3. The protein generation part is not clearly introduced. The authors compare Designability, Novelty, and Diversity, but there is no separate explanation of how these metrics are meaningful in protein generation. I didn't feel the logic was connected smoothly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Why are there no quantitative results on SD, and no detailed discussion of the other very relevant methods referenced earlier?\n* FID statistics on CIFAR-10 are computed on the whole dataset. Is it fair to evaluate models trained on a partial dataset using such statistics, especially when the two partitions are generated by splitting the classes?\n* What are the practical implications of the OR operator, especially in the field of image generation?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The theoretical framework is solid.\n* The method is well-motivated and supported by the theory.\n* The method is training-free, and could be applied to diffusion models with different architectures.\n* The results of protein generation outperform other baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel algorithm for combining multiple pre-trained diffusion models at inference time, by the principle of superposition of vector fields. The method demonstrates more diverse generation results, better prompt following on image data, and improved structure design of proteins as well." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The practical implications of the AND and OR operators are not explained clearly in either the image or the protein generation setting. What effect will the OR operator create on images, compared to the AND operator?\n* Lacks quantitative results on SD. Could have used metrics such as TIFA Score [1] and Image Reward [2]. I wonder whether there is any reason that no such metric was used. \n* Lacks comparison against other relevant methods [3-6]. In particular, [3,4,6] are all inference-time methods that sample from some sort of mixture of scores and demonstrate multiple practical uses, such as composing objects, styles, scenes, or improving text-image alignment. More discussion is needed on the capabilities of the proposed method versus others: besides the different theoretical perspectives, how SuperDiff performs differently, and the strengths and weaknesses of SuperDiff relative to the other methods. If experiments are not possible, please include a more detailed discussion. The comparison could help readers understand the proposed method in a broader context. \n\n[1] Hu, Y., Liu, B., Kasai, J., Wang, Y., Ostendorf, M., Krishna, R., Smith, N.A.: TIFA: Accurate and interpretable text-to-image faithfulness evaluation with question answering. arXiv preprint arXiv:2303.11897 (2023)\n\n[2] Xu, J., Liu, X., Wu, Y., Tong, Y., Li, Q., Ding, M., Tang, J., Dong, Y.: ImageReward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems 36 (2024)\n\n[3] Du, Y., Durkan, C., Strudel, R., Tenenbaum, J.B., Dieleman, S., Fergus, R., Sohl-Dickstein, J., Doucet, A., Grathwohl, W.S.: Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and MCMC. In: International Conference on Machine Learning. pp. 8489–8510. 
PMLR (2023)\n\n[4] Golatkar, A., Achille, A., Swaminathan, A., Soatto, S.: Training data protection with compositional diffusion models. arXiv preprint arXiv:2308.01937 (2023)\n\n[5] Biggs, Benjamin, et al. \"Diffusion Soup: Model Merging for Text-to-Image Diffusion Models.\" arXiv preprint arXiv:2406.08431 (2024).\n\n[6] Liu, Nan, et al. \"Compositional visual generation with composable diffusion models.\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do the authors explain the source of their numerical improvements using SuperDiff OR?\n- What density is being sampled from when using SuperDiff AND?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The main strength of the paper is the important observation that the probability density function of generated images can be efficiently evaluated without the need for computing the divergence of the score. It is leveraged to sample from mixtures of densities, where the weights can be defined implicitly and adaptively (in the case of the logical AND operator as defined here). The experimental results convincingly demonstrate the effectiveness of the resulting approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel, principled, and efficient way to combine diffusion models trained on different datasets (or conditioned on different prompts) to generate images from the mixture and the \"intersection\" of the corresponding distributions. It is based on a clever way to evaluate the densities $\\log p^i_t(x_t)$ of the current iterate $x_t$ under each (noisy) distribution $q^i_t$ during synthesis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In my opinion, the main weakness of the paper is in the clarity of the presentation of the central theoretical result (culminating in Proposition 7) and the motivation for the approach. I believe it can be significantly improved, which could enhance the impact of the paper. \n- I found Section 2.1 to be unnecessarily complicated and rather irrelevant for the rest of the exposition. To my understanding, the main ideas are (1) that SDEs define linear equations on the densities, so that a mixture of clean distributions $\\sum w_i p^i$ leads to a mixture of noisy distributions $\\sum w_i p_t^i$ and (2) the relationship $\\nabla \\log (\\sum w_i p^i_t) = \\sum w_i p^i_t \\nabla \\log p^i_t / \\sum w_i p^i_t$. These motivate the need for evaluating $p^i_t$ to combine scores in the correct way to sample from mixtures.\n- The equations are obscured by the use of general schedules with arbitrary $\\alpha_t$ and $\\sigma^2_t$. 
I encourage the authors to state the results in the main text with e.g. $\\alpha_t = 1$ and $\\sigma^2_t = t$ (known as the variance-exploding SDE) to simplify the exposition and relegate the general case to the appendix. \n- Some results are also less intuitive (in my opinion) due to the choice to work in discrete time. For example, Proposition 6 and Theorem 1 are nothing but approximating the kernels $k_{\\Delta t}$ and $r_{\\Delta t}$ with Euler-Maruyama discretizations of the corresponding forward or backward SDEs (and analyzing the discretization error in Theorem 2). Similarly, Proposition 7 can be obtained in continuous time first (and then discretized) by applying Itô's formula to $\\log q_t(x_t)$ where $x_t$ is a solution of the backward SDE (and using the fact that $q_t$ solves a Fokker-Planck equation). As an example, in the variance-exploding case, one obtains that $\\mathrm{d} \\log q_t(x_t) = \\frac{\\mathrm{d}t}{2} ||\\nabla \\log q_t(x_t)||^2 + \\langle \\mathrm{d}x_t, \\nabla \\log q_t(x_t)\\rangle$, which is the $\\Delta t \\to 0$ limit of Proposition 7 with $\\alpha_t = 1$ and $\\sigma^2_t = t$. I believe this result is of independent interest, and it would thus benefit from being highlighted and stated as simply as possible.\n\nAnother issue I have is regarding the logical OR and AND operators as defined in this paper.\n- The logical OR operator corresponds to a fixed-weight mixture of distributions, and it is thus trivial to sample from. One can simply select one diffusion model with probability corresponding to the mixture weight, and then use exclusively the score of the chosen diffusion model during generation. Using SuperDiff should be equivalent to this algorithm. So either the improved results in Section 4 can also be achieved with this simple baseline, in which case the theoretical results are not needed, or the baseline underperforms, in which case the improvements come from unknown implementation choices which are completely orthogonal to the theoretical analysis. In both cases, this raises questions.\n- The real strength of the approach, I think, is when the mixture weights are adaptive (i.e., they are allowed to depend on the current iterate $x_t$). In that case, however, it is not clear what density we are ultimately sampling from. If I understand correctly, here the logical AND operator is defined implicitly, and produces samples $x$ such that $q^1(x) = q^2(x)$. A perhaps more usual definition is that one would aim to sample from the normalized product $q^1(x)q^2(x)/Z$ (or geometric mean $\\sqrt{q^1(x)q^2(x)}/Z$), but this seems difficult to achieve with the formalism of this paper. It could be beneficial to include a short discussion of this matter in the paper.\n\nFinally, I could not see where the parameters $\\omega$ and $T$ in Table 2 were explained." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "the principled way to combine the outputs of several diffusion models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024the,\ntitle={The Superposition of Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2o58Mbqkd2},\nnote={under review}\n}" }, "abstract": { "value": "The undeniable success of deep generative models for learning complex and high-dimensional data distributions has led to the proliferation of large-scale diffusion models across the entire machine-learning application spectrum. 
This Cambrian explosion of easily accessible pre-trained models, including open-source models fine-tuned on user-specific data, suggests a demand for methods that combine multiple different pre-trained models without incurring the significant computational burden of re-training a larger combined model. In this paper, we cast the problem of combining multiple pre-trained diffusion models at the generation stage under a novel proposed framework termed superposition. Theoretically, we derive superposition from rigorous first principles stemming from the celebrated continuity equation and design two novel algorithms tailor-made for combining diffusion models in SuperDiff. We demonstrate that SuperDiff is scalable to large pre-trained diffusion models as superposition is performed *solely through composition during inference*, and also enjoys painless implementation as it combines different pre-trained vector fields through an automated re-weighting scheme. Notably, we show that SuperDiff is efficient at inference time, and mimics traditional composition operators such as the logical $\\texttt{OR}$ and the logical $\\texttt{AND}$. We empirically demonstrate the utility of using SuperDiff for generating more diverse images on CIFAR-10, more faithful prompt-conditioned image editing using Stable Diffusion, and improved unconditional *de novo* structure design of proteins." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generative modelling", "protein generation", "image generation", "diffusion models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/595e9c223fb038f65e634c9f7c0329a7b1c715a7.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "The Superposition of Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
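The last Superposition review above argues that sampling from the fixed-weight mixture (the logical OR) is trivial without any special machinery: pick one model per sample with probability equal to its mixture weight, then generate with that model's score exclusively. A minimal sketch of that baseline follows; `run_sampler` and its signature are illustrative assumptions, not the paper's API.

```python
import random

def sample_mixture_or(score_fns, weights, x_T, run_sampler):
    """Sample from the mixture sum_j w_j * p^j by ancestral choice:
    draw a model index j ~ Categorical(weights) once per sample, then
    generate with the chosen model's score only.

    run_sampler: callable (score_fn, x_T) -> sample, i.e. any standard
    reverse-SDE/ODE sampler (an assumed interface for this sketch).
    """
    j = random.choices(range(len(score_fns)), weights=weights, k=1)[0]
    return run_sampler(score_fns[j], x_T)
```

This shortcut is exact only for fixed weights; the harder case the review points to is when the weights depend on the current iterate, where no such per-sample model choice exists.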
2o7wxbKEQY
TGTOD: A Global Temporal Graph Transformer for Outlier Detection at Scale
main
Active
Graph Outlier Detection;Temporal Graph Learning;Graph Transformers
learning on graphs and other geometries & topologies
3;3;3;5
5;5;5;5
1;2;3;2
2;2;2;2
2;3;3;2
3.5
5
2
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Q1 Could the authors provide some insights on the choice of clustering algorithm and patching interval? Specifically, the choice to use METIS for clustering is not directly tied to empirical or theoretical benefits specific to TGTOD’s design.\n\n- Q2 How does the partitioning of the temporal graph affect spatio-temporal correlation?\n\n- Q3 Have the authors tried directly using an efficient Transformer (e.g., Nodeformer) with a single global attention but no patching? \n\n- Q4 Could the authors provide a clearer comparison between TGTOD and Nodeformer, since they share the same kernelized message passing with a GNN embedded? Does FiGraph, which used C=1 cluster (Table 6), correspond to this case?\n\n- Q5 How does TGTOD’s scalability compare to non-Transformer-based methods, such as GNNs?\n\nWu, Qitian, et al. \"Nodeformer: A scalable graph structure learning transformer for node classification.\" NeurIPS'22" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The hierarchical Transformer structure, combined with spatiotemporal patching, is a promising approach to improving scalability.\n\n- TGTOD performs well in the evaluation, validating the feasibility of using patch-based methods for financial and fraud detection in temporal graphs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies how to use Transformers for outlier detection in temporal graphs at scale. A temporal graph transformer with a hierarchical architecture is proposed to handle partitioned temporal graph patches with improved scalability. The proposed TGTOD is evaluated on three datasets and outperforms standard Transformer baselines in both performance and computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Partitioning large graphs into clusters is a well-established technique for dealing with scalability issues, e.g., ClusterGCN, GraphSAINT.\n- Current model designs (e.g., choice of clustering algorithm, patch size, and hierarchy) lack clear, evidence-based justification.\n- Results appear to be highly tailored to specific datasets for outlier detection, while the broader applicability of TGTOD to other temporal graph domains or to general-purpose spatio-temporal graph learning remains uncertain.\n\nChiang, Wei-Lin, et al. \"Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks.\" KDD'19. \nZeng, Hanqing, et al. 
\"Graphsaint: Graph sampling based inductive learning method.\" ICLR'20" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Q1**: How does the graph partitioning approach in TGTOD differ from that used in ClusterGCN? Additionally, how does TGTOD's focus on temporal graphs influence its partitioning strategy compared to the static graph approach of ClusterGCN?\n\n**Q2**: Can existing scalable node-level anomaly detection methods, such as XGBGraph [R4], be directly applied to address the challenges of temporal outlier detection? If not, what specific modifications or adaptations are necessary to ensure these methods effectively handle the dynamic nature of temporal graphs? If they can be applied directly, how does TGTOD compare with XGBGraph in terms of effectiveness and efficiency when dealing with temporal outlier detection?\n\n**Q3**: It appears that the authors may have omitted necessary parentheses in the loss function presented in Equations 2 and 3.\n\n**Q4**: To provide a comprehensive efficiency analysis of TGTOD, it would be helpful to report the results of other baseline models.\n\n---\n\n[R4] J. Tang, , F. Hua, Z. Gao, P. Zhao and J. Li. GADBench: Revisiting and Benchmarking Supervised Graph Anomaly Detection. 2023. NeurIPS(36): 29628-29653." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**S1**: The paper studies the problem of graph outlier detection by focusing on temporal graphs. This problem is important and has many practical applications in real-world scenarios.\n\n**S2**: The authors conduct extensive and thorough experiments to demonstrate the effectiveness of their proposed transformer framework across three real-world datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the challenge of graph outlier detection in temporal graphs. The authors argue that existing transformer-based models are inadequate for temporal graphs due to their quadratic computational cost and suboptimal generalization capabilities. To overcome these limitations, they propose partitioning the given graph into multiple subgraphs and applying hierarchical transformers to these subgraphs. Their method, TGTOD, integrates both graph neural networks and transformers to effectively capture structural and temporal dependencies within the temporal graph. Experimental results demonstrate the superior performance of TGTOD on three real-world temporal graphs, outperforming general graph neural networks, graph transformers, and graph outlier detectors." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1**: The primary concern regarding this work centers on its substantial lack of novel insights and originality in the proposed framework. The core components of the proposed framework appear to be largely derivative of existing approaches, with minimal innovative additions. Firstly, the idea of graph partitioning as a strategy for reducing computational complexity, while effective, cannot be considered a novel contribution, as this approach has been extensively explored and implemented in existing models like ClusterGCN [R1]. Secondly, both the temporal transformer and cluster transformer essentially replicate the vanilla transformer architecture without substantial modifications or improvements tailored to graph-specific challenges. Similarly, the patch transformer component appears to be a direct adaptation of NodeFormer [R2]. Thirdly, integrating different components through weighted summation of GNN and transformer outputs has been previously introduced in SGFormer [R3].\n\n**W2**: The time complexity analysis is cursory and lacks rigor. It omits crucial considerations regarding the complexity of the METIS clustering algorithm, and the presentation lacks formal asymptotic notation. Additionally, the numerical examples provided are overly simplified, neglecting critical constant terms that could significantly impact real-world performance, such as the number of clusters, hidden dimensions, and attention head counts. A more rigorous analysis should encompass these factors and present complexity bounds with appropriate asymptotic notation. \n\n**W3**: The efficiency analysis is insufficient. The authors only compare their proposed TGTOD with DyGFormer, which does not offer a comprehensive assessment of its efficiency. It is imperative to include comparisons against a wider array of state-of-the-art methods and other baseline models for a more thorough evaluation. \n\n**W4**: The authors claim that existing transformer-based models suffer from restricted receptive fields. However, transformers are renowned for their ability to leverage a global receptive field, which is a significant advantage over traditional graph neural networks. As such, transformers can effectively address the constraints imposed by graph structures and capture long-range dependencies. This statement requires further justification and clarification to be convincing.\n\n---\n\n[R1] W. Chiang, X. Liu, S. Si, Y. Li, S. Bengio and C. Hsieh. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. 2019. SIGKDD: 257–266. \n\n[R2] Q. Wu, W. Zhao, Z. Li, D. Wipf and J. Yan. NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification. 2022. NeurIPS(35): 27387-27401.\n\n[R3] Q. Wu, W. Zhao, C. Yang, H. Zhang, F. Nie, H. Jiang, Y. Bian and J. Yan. SGFormer: Simplifying and Empowering Transformers for Large-Graph Representations. 2023. NeurIPS(36): 64753-64773." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. Outlier detection in dynamic graphs is an important problem.\n2. Given the limited number of existing models for outlier detection in dynamic graphs, this paper makes a valuable contribution by focusing on this direction and proposing a new method specifically for outlier detection in dynamic graphs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TGTOD, a new end-to-end Temporal Graph Transformer designed for Outlier Detection in dynamic graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Unclear Motivation: The motivation behind this work is not well-founded. For example, the authors mention \"Limited receptive field\" as a motivation; however, neither DyGFormer nor SimpleDyG was specifically designed for outlier detection. The use of first-order neighbors is a deliberate design choice to avoid aggregating irrelevant information, which has proven effective in link prediction tasks. Thus, this choice is not inherently a limitation of the receptive field. Additionally, the concept of \"task misalignment\" seems misplaced since previous models were not intended for outlier detection, making \"pretraining\" irrelevant in this context.\n\n2. Poor Organization: The paper dedicates substantial space to background knowledge and related works, yet fails to incorporate these works in the experimental comparisons. This organizational choice limits the paper’s coherence and weakens its argument for contribution.\n\n3. Limited Experiments: The experimental section is insufficient to convincingly demonstrate the model’s efficacy. Although several related works (e.g., NodeFormer, DIFFormer, SGFormer, CoBFormer) are discussed, none are included in the experimental comparisons. Furthermore, the baselines used (e.g., GCN, GraphSage) are basic, while more advanced temporal models like CAWN and TCL would be more appropriate. The limited metrics (AP and AUC) are inadequate for evaluating performance on an imbalanced dataset with a low anomaly rate; metrics such as F1-score would provide a more complete evaluation. The absence of ablation studies and hyperparameter analysis further detracts from the experimental rigor.\n\n4. Limited Novelty: The novelty of the model is minimal, as it merely combines three existing transformer architectures without any modification, contributing little innovation in terms of model design." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "+ Why is SimpleDyG mentioned in the related work but missing from the comparative analysis?\n\n+ Since the primary focus is on outlier detection, I suggest including some static outlier detection methods for comparison, instead of relying solely on common GNNs like GCN and SGC.\n\n+ The use of only three datasets is insufficient. Common benchmarks for temporal outlier detection, such as Wikipedia, Reddit, and Mooc[1], are notably missing from the experiments.\n\n+ The definitions in lines 192-193 are inaccurate. Generally, node labels are dynamically changing and are usually defined with a timestamp $t$.\n\n+ Some state-of-the-art baselines are missing, such as SAD[2] and SLADE[3].\n\n+ The claim that “existing Transformers are pretrained on link prediction…” is not entirely correct. Many temporal Transformers (e.g., TGAT, DyGFormer, SimpleDyG) are trained in an end-to-end manner for node- or link-level tasks.\n\n+ In Table 4, TGTOD shows good efficiency over DyGFormer. However, DyGFormer was not designed to be an efficient method for temporal graph learning. The authors should include more relevant baselines like SimpleDyG and TGAT for a comprehensive comparison.\n\n+ Ablation studies on varying time slots and the number of clusters are missing.\n\n+ In Table 6, the time slot is set to 1 for most datasets, which is a common setting in temporal graph learning. What is the necessity of the “patching” step in this context?\n\n\n\n[1] JODIE: Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks. KDD 2019.\n\n[2] SAD: Semi-Supervised Anomaly Detection on Dynamic Graphs. IJCAI 2023\n\n[3] SLADE: Detecting Dynamic Anomalies in Edge Streams without Labels via Self-Supervised Learning. KDD 2024." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The proposed method is simple, effective, and scalable.\n\n+ The experimental results show overall improvement over baselines.\n\n+ Code for reproducing the experiments is provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors address the problem of anomaly detection over temporal graphs, a relatively less explored area compared to anomaly detection on static graphs. They highlight limitations in learning temporal signals using Transformers for this task.\n\nBased on these limitations, the authors propose an end-to-end Temporal Graph Transformer for Outlier Detection (TGTOD). TGTOD improves scalability by dividing large temporal graphs into spatiotemporal patches, followed by three Transformer networks to model both structural and temporal dependencies in temporal graphs. The experimental results demonstrate the effectiveness of TGTOD against leading baselines in outlier detection tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "+ The focus on Transformers for temporal graph learning raises concerns about novelty, as similar approaches have been extensively explored.\n\n+ The experiments are not fully convincing. 
Important datasets, baselines, and ablation studies are missing (see detailed comments below).\n\n+ Some claims and illustrations are vague and require more clarity (see detailed comments below)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose TGTOD, an end-to-end temporal graph Transformer for outlier detection, conducting global spatiotemporal attention at scale." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tgtod,\ntitle={{TGTOD}: A Global Temporal Graph Transformer for Outlier Detection at Scale},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2o7wxbKEQY},\nnote={under review}\n}" }, "abstract": { "value": "Graph outlier detection aims to identify anomalous substructures in graphs that deviate significantly from normal patterns. Traditional methods primarily focus on static graphs, overlooking the dynamic nature of real-world networks and ignoring valuable temporal signals crucial for outlier detection. While Transformers have revolutionized machine learning on time-series data, existing Transformers for temporal graphs face limitations in (1) restricted receptive fields, (2) overhead of subgraph extraction, and (3) suboptimal generalization capability beyond link prediction. In this paper, we propose TGTOD, a novel end-to-end Temporal Graph Transformer for Outlier Detection. TGTOD employs global attention to model both structural and temporal dependencies within temporal graphs. To tackle scalability, our approach divides large temporal graphs into spatiotemporal patches, which are then processed by a hierarchical Transformer architecture comprising Patch Transformer, Cluster Transformer, and Temporal Transformer. We evaluate TGTOD on three public datasets under two settings, comparing with a wide range of baselines. Our experimental results demonstrate the effectiveness of TGTOD, achieving an AP improvement of 61% on the Elliptic dataset. Furthermore, our efficiency evaluation shows that TGTOD reduces training time by 44× compared to existing Transformers for temporal graphs. To foster reproducibility, we make our implementation publicly available at https://anonymous.4open.science/r/tgtod." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Outlier Detection", "Temporal Graph Learning", "Graph Transformers" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4c46588159d3bffc4113cc331539fdfbedcae0ce.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers.
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "TGTOD: A Global Temporal Graph Transformer for Outlier Detection at Scale" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2oKkQTyfz7
General Scene Adaptation for Vision-and-Language Navigation
main
Active
vision-and-language navigation; scene adaptation; multi-modal learning
datasets and benchmarks
3;5;5;6;8
3;4;4;4;4
2;2;3;3;3
2;2;2;2;4
2;3;3;3;3
5.4
3.8
2.6
2.4
2.8
0.738549
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Memory Mechanism Scalability:\nWhile the memory-based approach in GR-DUET performs well in your experiments, how does this method scale to larger or more complex environments? As the environment size or the number of instructions increases, the memory bank may become too large to manage efficiently. Could you provide further analysis or experiments that demonstrate how the method performs with continuous accumulation of data in larger datasets or more complex environments?\n\n2. the paper lacks a detailed discussion on how the memory is utilized, including how similar tasks are stored, how memory is retrieved and assessed for relevance and validity, and how prior knowledge is leveraged. Is the memory bank pre-set or updated dynamically? If it is updated dynamically, how is the correctness of the stored memories ensured, especially when handling diverse memories? How are the initial model parameters (L194, L198) initialized to ensure sufficient generalization? Please provide more details\n\n3. Furthermore, other memory-based VLN methods, such as SG-Nav [1], provide more detailed storage and query mechanisms based on topological graphs and memory updates. Could you compare your approach with SG-Nav in terms of performance or highlight any differences and advantages?\n\n4. Adaptation to Instruction Styles:\nYou mention using GPT-4 and a three-stage process to generate different instruction styles, but it remains unclear how the agent adapts to these varying styles over time. Could you provide more quantitative and qualitative results on how GR-DUET handles changes in style, particularly in OOD environments? A deeper analysis of how different speaking styles affect agent performance and adaptability would offer valuable insights into the robustness of your method in real-world scenarios, where user communication patterns may vary significantly.\n\n5. Unsupervised Learning and Adaptation Efficiency:\nThe paper suggests that agents in GSA-VLN can improve their performance over time using unsupervised learning techniques. Could you clarify how quickly the agents adapt in different environments? Are there any cases where adaptation is less effective or slower? Are there specific environments where the memory mechanism struggles to adapt? A more detailed breakdown of adaptation speed and efficiency across different environment types would help clarify the limitations of your approach and guide future improvements.\n\n6. Practical Deployment and Real-World Use Cases:\nThe GSA-VLN task is well-motivated by real-world scenarios, but the paper does not provide a detailed discussion on how the proposed method could be deployed in practical systems. Could you elaborate on the computational and memory overhead of your approach in real-time systems, such as those used in robotics or autonomous agents?\n\nReference:\n[1] Yin, Hang, et al. 
\"SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation.\" arXiv preprint arXiv:2410.08189 (2024)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Novelty:\nThe paper introduces a new task, GSA-VLN, which focuses on the long-term adaptation of agents within specific environments, a capability with significant potential for real-world applications.\n\nDataset Contribution:\nThe authors present the GSA-R2R dataset, which extends the existing R2R dataset by using GPT-4 and a three-stage method to generate instructions in various speaking styles. The dataset is divided into residential and non-residential environments, serving as in-distribution (ID) and out-of-distribution (OOD) data, respectively.\n\nMethod Design:\nThe GR-DUET method integrates topological graphs with memory mechanisms, effectively preserving historical information and updating it continuously during navigation. This approach demonstrates notable improvements in performance, particularly in OOD (non-residential) scenarios.\n\nExperimental Results:\nThe paper compares GR-DUET with optimization-based and memory-based methods across different environment and speaking style splits. The experiments highlight the feasibility of the GSA-VLN task and the effectiveness of the GR-DUET method in various settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes GSA-VLN (General Scene Adaptation for Vision-and-Language Navigation), a task designed to enhance the performance of navigation agents by enabling them to adapt to specific environments, particularly when exploring in the same environment over an extended period. The authors also introduce GSA-R2R, an expanded version of the HM3D and MP3D dataset, offering richer environments and more diverse instructions. Additionally, they present a novel method, GR-DUET, which improves navigation performance by utilizing memory mechanisms and updating graph structures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please see the Questions section for detailed improvement suggestions and questions.\nI look forward to the authors' responses to these questions, as addressing these points could significantly clarify some of the paper's contributions and limitations. I am open to adjusting my score if the authors provide further insights or resolve the concerns raised above." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
This paper introduces the novel General Scene Adaptation for Vision-and-Language Navigation (GSA-VLN) task, filling a critical gap in VLN research by focusing on adaptation in persistent environments. Rather than assuming agents will encounter only unseen environments, GSA-VLN models a more realistic scenario where agents learn and improve over time within a familiar setting. This shift in task formulation is both timely and innovative, especially as VLN moves toward practical applications.\n2. The paper demonstrates rigorous methodology in creating the GSA-R2R dataset, expanding on the Room-to-Room (R2R) dataset with a variety of environments, instruction styles, and out-of-distribution examples to thoroughly test agent adaptability. The proposed Graph-Retained DUET (GR-DUET) model is well-designed, combining memory-based navigation graphs with a scene-specific training strategy, and shows significant performance improvements across metrics. \n3. The paper is clearly organized and effectively conveys the importance of long-term scene adaptation in VLN." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents General Scene Adaptation for Vision-and-Language Navigation (GSA-VLN), a new VLN task where agents adapt to and improve in a specific environment over time, making it closer to real-world applications. To support this, the authors introduce GSA-R2R, a dataset that expands on Room-to-Room (R2R) by adding more diverse environments and instruction styles, including out-of-distribution examples. They also propose Graph-Retained DUET (GR-DUET), a method that uses memory-based navigation graphs and scene-specific training to help agents learn and retain scene-specific information, achieving strong results on the GSA-R2R benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The GR-DUET method involves a memory bank and a global graph that retains historical information across episodes. As the memory and graph size increase, the model’s computational requirements may grow significantly, particularly for long-term navigation in large environments. While the paper includes an environment-specific training strategy to limit graph expansion, providing an analysis of computational costs and potential trade-offs between memory retention and scalability would strengthen the model's practicality for deployment on resource-constrained systems.\n2. While the GSA-R2R dataset is a notable improvement over existing datasets for testing scene-specific adaptation, it may still fall short in representing the full diversity of real-world environments and interaction styles. The dataset includes a mix of residential and non-residential scenes, but further validation with a broader set of real-world environments could strengthen the model's applicability. Including additional scene types, such as commercial or outdoor spaces, or testing in dynamic environments where the layout changes over time, would push the dataset closer to real-world settings.\n3. Although the paper’s three-stage instruction generation pipeline enhances instruction diversity, more detailed analysis on how different instruction styles (e.g., Basic, User, Scene) impact agent performance would be valuable. For instance, specific ablation studies on each instruction type could clarify how robust the GR-DUET model is to variations in language, phrasing, and style.
Additionally, investigating how the model generalizes across speakers with different dialects or levels of detail in instructions could provide actionable insights into improving instruction handling." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Lines 283-286, how is the navigation model used here implemented? How can we ensure that it is a good instruction discriminator? I am concerned that if the navigation model is not trained well, it will not be sufficient to assess the quality of the instructions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is written in detail and with clear presentation, making it easy to follow.\n\n2. The authors approach the VLN problem from a new perspective and divide the scenarios into Residential and Non-Residential." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a task that requires the VLN agent to execute VLN tasks in only one environment while storing its historical information at the same time. To make the initial parameters of the agent more general, the authors generate more environments and more instructions by using an LLM. Finally, the paper also provides experimental results based on their proposed metrics to further highlight the efficacy of the proposed methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe novelty of this paper is limited. The GSA-VLN TASK proposed in the paper is still a standard VLN task. The so-called “standard VLN task” mentioned by the paper also includes fine-tuning based on historical information and trained models, which are claimed as the novelty of GSA-VLN in Section 3.2. \n\n2.\tFollowing the previous comment, the GSA-R2R DATASET proposed in the paper uses more environments (HM3D) and then uses tools such as LLMs to refine the dataset's quality, which has been a common practice in VLN. Also, the authors should not ignore that existing works (e.g., HM3D-AutoVLN[1], Scale VLN[2], YouTube-VLN[3]) have also expanded and refined VLN datasets when making comparisons (Table 1). I recommend the authors compare against the datasets mentioned above and include them in the main manuscript (e.g., in Table I).\n\n[1] Learning from Unlabeled 3D Environments for Vision-and-Language Navigation\n\n[2] Scaling Data Generation in Vision-and-Language Navigation\n\n[3] Learning Vision-and-Language Navigation from YouTube Videos\n\n3. The comparison metrics in the experimental section are all newly proposed by the authors, which cannot properly reflect the effectiveness of the proposed method.
I suggest that the authors conduct experimental comparisons in standard VLN using common VLN metrics, and compare them on other VLN datasets besides R2R, such as REVERIE, RxR, CVDN and SOON." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In section 3.3.4, the authors invite 15 participants to evaluate the instructions. Are the backgrounds (e.g., ages, etc.) of these 15 participants sufficiently homogeneous to demonstrate the refinement of the assessment? Also, I recommend disclosing the questionnaire they used for the test." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed GSA-VLN task and GSA-R2R dataset, which consider real-world robot adaptation in persistent environments, represent an interesting research direction.\n2. Overall, the writing is fluent and the figures convey the character of the task well.\n3. The proposed GR-DUET method outperforms the baselines, demonstrating its effectiveness in helping agents adapt to specific environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new task, named GSA-VLN, which requires agents to execute navigation instructions within a specific scene and simultaneously adapt to it for improved performance over time. This paper also proposes a new dataset, GSA-R2R, which significantly expands the diversity and quantity of environments and instructions for the Room-to-Room (R2R) dataset to evaluate agent adaptability in both ID and OOD contexts. The biggest difference between the proposed task and dataset and previous work is the diversity of instructions, i.e., different individual features and linguistic conventions are taken into account." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some baselines are missing. I suggest adding some LLM-based baseline methods, especially zero-shot VLN methods, for example, InstructNav[1] and NavCoT[2]. The reason for this suggestion is that LLM reasoning is now so powerful that it may be able to adapt well to different personal characteristics and language styles without the need for an additional adaptation process, i.e., in a zero-shot manner. Also, these different styles are essentially generated by an LLM, so I am concerned that understanding these different styles may be a very easy and undifferentiated task for an LLM. \n2. In this paper, the authors generated instructions for only five different character styles. However, real life can be much richer in terms of characters. The paper's contribution would have been greatly enhanced if the authors could propose a method for generating instructions in a nearly unlimited variety of character styles.\n3.
The authors propose GR-DUET, but there are few details about it. For a reader who does not know DUET well, this may cause some difficulty in reading, so I suggest that the authors add some descriptions and details in the appendix.\n\n\n\n[1] InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment.\n\n[2] NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is generally well motivated and the proposed task makes sense. For a pre-trained VLN agent, it is important to leverage the information and instructions in the new environment to further enhance its knowledge and adapt to the new environment and users.\n\n- A new dataset, GSA-R2R, based on the HM3D dataset is introduced with new instruction data collected to support the VLN task. The dataset can potentially be useful for the community.\n\n- Extensive evaluation of current VLN methods on the new dataset is provided, and different adaptation methods are benchmarked. The proposed GR-DUET method demonstrates competitive performance compared to prior work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel task, GSA-VLN (General Scene Adaptation for Vision-and-Language Navigation), which trains agents to follow navigation instructions within a specific scene while adapting to it for enhanced performance over time. A new dataset, derived from the HM3D dataset, is introduced to support this task. Additionally, the authors propose a new method that serves as a baseline and achieves state-of-the-art performance in this domain." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The concept of adapting a model to a test scene is not entirely new, as numerous prior methods have explored unsupervised exploration or adaptation within embodied environments.\n\n- The Related Work section could be more comprehensive. For instance, some discussions are postponed to Section 4, but it's crucial to review prior work that also employs adaptation methods in VLN, particularly those utilizing memory-based approaches, and also highlight the main differences and contributions of the proposed method.\n\n- Additionally, beyond the VLN literature, how does the proposed method relate to Lifelong Learning and Test-time Adaptation?\n\n- Table 3 presents the navigation performance of various VLN models on the new dataset. Is the performance of these methods consistent with results on other benchmark datasets?"
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a new dataset, GSA-R2R, which significantly expands the diversity and quantity of environments and instructions for the Room-to-Room (R2R) dataset to evaluate agent adaptability in both ID and OOD contexts." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024general,\ntitle={General Scene Adaptation for Vision-and-Language Navigation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2oKkQTyfz7},\nnote={under review}\n}" }, "abstract": { "value": "Vision-and-Language Navigation (VLN) tasks mainly evaluate agents based on one-time execution of individual instructions across multiple environments, aiming to develop agents capable of functioning in any environment in a zero-shot manner. However, real-world navigation robots often operate in persistent environments with relatively consistent physical layouts, visual observations, and language styles from instructors. Such a gap in the task setting presents an opportunity to improve VLN agents by incorporating continuous adaptation to specific environments. To better reflect these real-world conditions, we introduce GSA-VLN (General Scene Adaptation for VLN), a novel task requiring agents to execute navigation instructions within a specific scene and simultaneously adapt to it for improved performance over time. To evaluate the proposed task, one has to address two challenges in existing VLN datasets: the lack of out-of-distribution (OOD) data, and the limited number and style diversity of instructions for each scene. Therefore, we propose a new dataset, GSA-R2R, which significantly expands the diversity and quantity of environments and instructions for the Room-to-Room (R2R) dataset to evaluate agent adaptability in both ID and OOD contexts. Furthermore, we design a three-stage instruction orchestration pipeline that leverages large language models (LLMs) to refine speaker-generated instructions and apply role-playing techniques to rephrase instructions into different speaking styles. This is motivated by the observation that each individual user often has consistent signatures or preferences in their instructions, taking the use case of home robotic assistants as an example. We conducted extensive experiments on GSA-R2R to thoroughly evaluate our dataset and benchmark various methods, revealing key factors enabling agents to adapt to specific environments. Based on our findings, we propose a novel method, Graph-Retained DUET (GR-DUET), which incorporates memory-based navigation graphs with an environment-specific training strategy, achieving state-of-the-art results on all GSA-R2R splits." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vision-and-language navigation; scene adaptation; multi-modal learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e41d197ecb143849b8edd1e0630e5d8bb035b465.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6da67309fa3a6bf5493cd70f73e535394be1c171.zip" }, "title": { "value": "General Scene Adaptation for Vision-and-Language Navigation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ofVtMvRil
Learning grid cells by predictive coding
main
Active
grid cells;predictive coding;computational neuroscience
applications to neuroscience & cognitive science
3;3;5;6
3;3;4;4
2;2;3;4
1;2;1;3
2;3;3;4
4.25
3.5
2.75
1.75
3
0.96225
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The soundness of the paper is high, but my primary concerns center around the novelty of the algorithm beyond tPCN itself. Simply applying a non-negative constraint and applying to a new task does not seem like a sufficiently novel contribution for ICLR. It is unclear what enhancements of the algorithm could be necessary in the context of spatial navigation." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThe general finding of the tPCN is encouraging, and the generalization to a different task than the Millidge 2024 paper is promising. \n-\tThe robustness experiments (4.4) show that the emergent grid-like activity is robust to model architectures. This is encouraging, since many experimental neuroscience manipulations show grid cells to be robust to manipulations of the environment of neural activity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors investigate how a temporally-dependent version of predictive coding can extract compact latent spaces in the form of periodic grid activity from temporally structured place cell input. The findings are of general interest to theories of learning in biological settings, and replicate many previous results with a more biologically plausible learning mechanism." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tOverall, the study seems like an incremental follow on of the tPCN paper applied to a new domain, but which does not require fundamental changes to the original algorithm. \n-\tThe path integrating tPCN assumes input in the form of place cell activity, but does not account for how place cells and grid cells form from the combination of visual and self-motion information. Combined with the lack of anatomical constraints of direction of connectivity, the study is more about the formation of compressed latent spaces than the medial temporal lobe. Several existing studies, largely cited in the paper, already investigate the formation of such successor representations by predictive coding.\n-\tThe authors dismiss previous examples of learned grid cells (Dordek, Stachenfeld, Schaffer, etc) on the basis that these are not biologically plausible learning methods, but then move to use real-valued activation functions. There is no evidence from the methods presented in this paper that a spike-based temporal predictive coding network would converge." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have two questions:\n\n1. In both model architectures presented, grid cell activity depends on input from place cells. However, in biological systems, place cell activity varies significantly across different environments, showing a phenomenon known as global remapping, whereas grid cells maintain a stable 2D toroidal manifold across environments. How does this model account for this discrepancy? If place cell activity, the input source for grid cells, changes substantially across environments, how does the model explain the stability of grid cell activity?\n\n2. In the medial entorhinal cortex (MEC), grid cells are organized into modules with distinct spacings. In the model proposed in this paper, do the network’s grid cells display discrete spacing distributions, and are there any indications of modular independence in their connectivity?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "To the best of my knowledge, this paper is the first to suggest that a predictive coding network can serve as a biologically plausible model for learning grid cells and perform simulations to validate this hypothesis. Additionally, the paper extends the application of PCN’s locally-based learning method to approximate backpropagation (BP) in temporally processing networks, using tPCN. While not formally proven, the authors draw comparisons between tPCN and 1-step BPTT, indicating that with multi-step inferences, tPCN’s performance could approach that of BPTT." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes that the mechanism by which grid cells are learned in biological systems may involve predictive coding. To test this hypothesis, the authors trained both a predictive coding network (PCN) and a temporal predictive coding network (tPCN) on path integration and non-path integration tasks. They observed that hexagonal firing patterns, characteristic of grid cells, emerged in both paradigms. Since PCN and tPCN introduce error cells that enable learning with spatially and temporally local rules, this discovery suggests a biologically plausible mechanism for grid cell formation. The authors also analyze the learning process in tPCN, comparing it analytically with 1-step backpropagation through time (BPTT), to explain the emergence of grid cells. Finally, they assess the robustness of grid cell emergence in their model by testing various activation functions, environments, and network sizes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main limitation lies in novelty. 
First, previous studies have already shown that grid cells can be learned either through non-negative PCA or via a single-layer BP-based network from place cell activity. Likewise, RNNs trained via BPTT for path integration to predict place cell activity have also been reported (see Sorscher et al., 2022). Additionally, the ability of PCN to approximate BP using local learning rules has been demonstrated previously (see Song et al., 2020), and the tPCN structure’s capacity to approximate BPTT is a straightforward extension of prior work (Millidge et al., 2024). The robustness analysis in this paper largely follows procedures established in earlier RNN studies and does not report new phenomena (Schaeffer et al., 2022). Other biologically plausible learning algorithms, such as those using Oja’s rule, have also achieved grid cell-like activity, suggesting that this paper’s algorithm is not unique in this regard. Overall, the contribution seems to synthesize existing ideas without introducing significant innovation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is clearly written, and the question is well-defined." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study demonstrates that predictive coding can effectively train neural networks to develop hexagonal grid representations from spatial inputs, providing a biologically plausible explanation for the emergence of grid cells in the medial entorhinal cortex. By analytically comparing predictive coding with existing models, the authors offer new insights into the learning mechanisms of grid cells and extend predictive coding theory to the hippocampal formation, suggesting a unified learning algorithm for various cortical representations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is that the work may lack novelty.\n\n1. The use of non-negative and sparse network designs to produce grid cell-like patterns has been extensively discussed. For example, [1] reported that non-negative and sparse properties can generate grid cell-like patterns and theoretically demonstrated why non-negativity, rather than sparsity, is the main driver of grid cell formation (which the authors' paper does not address). Similar findings were also reported in [2]. Earlier, [3] proved that a nonnegativity constraint on firing rates induces a symmetry-breaking mechanism which favors hexagonal firing fields. [4] further explored, through extensive experiments, the conditions necessary for generating grid cells.\n\n2.
Prediction tasks, including path integration, that produce grid cell-like patterns have also been widely reported, especially when the input data takes a place cell-like form. For instance, [5] also used place cell-like input and path integration tasks to train networks and generate grid cells, while [6] theoretically analyzed the role of predictive learning in forming low-dimensional representations.\n\n3. In my understanding, tPCN is very similar to a one-step RNN (apart from the difference in local learning rules), so the fact that its training process resembles that of one-step tBPTT is not surprising. As previously noted, the key to forming grid cells lies in the predictive task, not the RNN network itself. Therefore, the similarity between tPCN and RNN does not offer significant insight into the generation of grid cells.\n\nFor the reasons above, I believe this paper does not offer substantial novelty or make a clear contribution to the field.\n\n\n\n[1] Whittington, James CR, et al. \"Disentanglement with biological constraints: A theory of functional cell types.\" *arXiv preprint arXiv:2210.01768* (2022).\n\n[2] Dorrell, William, et al. \"Actionable neural representations: Grid cells from minimal constraints.\" *arXiv preprint arXiv:2209.15563* (2022).\n\n[3] Sorscher, Ben, et al. \"A unified theory for the origin of grid cells through the lens of pattern formation.\" *Advances in neural information processing systems* 32 (2019).\n\n[4] Schaeffer, Rylan, Mikail Khona, and Ila Fiete. \"No free lunch from deep learning in neuroscience: A case study through models of the entorhinal-hippocampal circuit.\" *Advances in neural information processing systems* 35 (2022): 16052-16067.\n\n[5] Whittington, James CR, et al. \"The Tolman-Eichenbaum machine: unifying space and relational memory through generalization in the hippocampal formation.\" *Cell* 183.5 (2020): 1249-1263.\n\n[6] Recanatesi, Stefano, et al. \"Predictive learning as a network mechanism for extracting low-dimensional latent space representations.\" *Nature communications* 12.1 (2021): 1417." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The authors find that grid cells emerge under various configurations and constraints, even in the absence of velocity input. Could they expand on the implications of this finding for the role of predictive coding in spatial learning?\n\nYou claim that tPCN approximates tBPTT; however, the RMSE indicates that when the inference has fully converged, the tPCN outperforms tBPTT. Path integration is a Markov process, and it therefore makes sense that tBPTT should work. However, as you show, having the extra inference steps helps. Is it then tPCN that approximates tBPTT, or the other way around (tBPTT approximates tPCN)?\n\nMoreover, this begs the question: what is the difference between $g_{t-1}$ from RNNs and $\hat{g}_{t-1}$ from tPCNs that gives this performance boost?
\n\nIs there a qualitative difference in grid cells between the models, or are there other cell types that make $\hat{g}_{t-1}$ \"better\"? One way to hint at this would be to ablate neurons in $g$ and rank them according to their effect on the loss. Are there any differences between these two populations? Another way would be to perform a detailed analysis of the predictive power of $g$ cells in the two models, for example, according to Ouchi et al.\n\nRelated works, such as the work from Giocomo in 2011, are outdated. The question of whether oscillatory dynamics are important for grid cells started, as you point out, with the work by [Burgess](https://pmc.ncbi.nlm.nih.gov/articles/PMC2678278/) and Hasselmo, but oscillations were later included in CANNs by [Bush and Burgess](https://pubmed.ncbi.nlm.nih.gov/24695724/). The importance of oscillations in grid cells has been tested experimentally by [Lepperød et al.](https://www.science.org/doi/full/10.1126/sciadv.abd5684), [Schmidt-Hieber et al.](https://www.nature.com/articles/nn.3340), and [Robinson et al.](https://www.sciencedirect.com/science/article/pii/S2211124724009197).\n\n**Minor**\n\n - $\hat{g}$ is used but not introduced as inferred before line 392; it would be nice to point this out earlier.\n - Whether grid cells are learned or present from birth is disputed; I would present this in less certain terms." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "**Originality:**\nThis paper provides a new perspective on grid cell formation by applying predictive coding. While previous work has used RNNs trained with BPTT to simulate grid cells, this study introduces predictive coding networks (PCNs) and temporal PCNs (tPCNs) as biologically plausible alternatives. While predictive coding has been addressed in the hippocampal formation previously (Stachenfeld et al., among others), the proposed learning rules are novel in this context. \n\n**Quality:**\nThe authors demonstrate grid cell emergence in PCNs and perform a comparative analysis with existing RNN models. By analytically showing that tPCNs approximate truncated BPTT, they provide a solid theoretical grounding for their approach. Further, the robustness analysis—exploring different model architectures, non-linearities, and environments—addresses shortcomings raised in recent work (Sorscher vs. Schaeffer). The theoretical and empirical sections are well-integrated.\n\n**Clarity:**\nThe authors use clear visual representations of presented ideas, making interpretation intuitive. The derivations are well-presented, especially in demonstrating the correspondence between tPCNs and truncated BPTT. However, some technical details on the inference dynamics of tPCNs might benefit from additional clarity or simplification, especially for readers less familiar with predictive coding. \n\n**Significance:**\nThe findings are interesting for neuroscience and machine learning. They suggest that predictive coding may underpin not only perceptual but also spatial and navigational representations. For neuroscience, predictive coding may unify perspectives across cortical functions. For machine learning, it offers an alternative to backpropagation-based learning in dynamic systems."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the emergence of grid cells, known for their hexagonal firing patterns in spatial navigation, using predictive coding—a biologically plausible learning rule. The authors propose that grid cells can be learned by neural networks through predictive coding, which aligns well with the principles of local computations and Hebbian plasticity.\n\nThe key contributions are:\n\n - Demonstrating that predictive coding networks (PCNs) can naturally develop grid cell representations with sparse, non-negative constraints, and a temporal extension (tPCN) achieves similar results in dynamic tasks like path integration.\n - Establishing that tPCNs approximate the truncated backpropagation through time (BPTT), highlighting a biologically plausible alternative to BPTT for learning grid cells.\n - Analyzing the robustness of grid cell emergence in PCNs and tPCNs across varied architectural and environmental conditions, showing grid cells can still form even without velocity input." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although it is nice to see grid cells emerge in the proposed setup, it is not that surprising given the setup with static place cell readout. The comparison between BPTT and tPCNs is more interesting, in my opinion, than the grid cell results and can have broader implications beyond this particular setting; I would present this as the main result and, therefore, consider moving this result to an earlier stage and presenting the grid cell stuff as a test case.\n\nThe model operates under certain assumptions (e.g., reliance on sparsity, non-negative constraints, simplified path integration tasks, and place cell readout) that may not generalize well across different types of neuronal representations or tasks. However, the discussion lacks a critical assessment of these assumptions, specifically regarding where the predictive coding model might fall short compared to other frameworks for grid cells, such as the recent development of self-supervised learning for grid cells ([Schaeffer et al.](https://arxiv.org/abs/2311.02316)), conformal isometry, or distance preservation ([Xu et al.](https://arxiv.org/abs/2210.02684), [Dorell et al.](https://arxiv.org/abs/2209.15563)). For example, the choice of static read-out place cells limits studies of remapping (but can be done; see [Schøyen et al.](https://www.sciencedirect.com/science/article/pii/S258900422302179X), different geometries [Krupic et al.](https://www.nature.com/articles/nature14153) etc.\n\nThe proposed predictive coding model successfully generates grid cells, but the mechanistic explanation for how and why grid cells emerge under predictive coding is lacking. Moreover, the field suffers from challenges in comparing representations across studies, barring visual inspection. Grid scores are used to assess grid cell likeness; however, these give little insight beyond 60-degree symmetry. I suggest you use something else to assess the function of the networks, such as ablation studies and studying the full representational setting of the network. For example, do you see border cells, band cells, etc? 
At least provide examples, preferably representations from the full network, in the supplementary.\n\nAll in all, since the title and introduction of the paper highlight grid cells, I would expect more analysis of this finding and a broader comparison with the existing literature. However, I think the more interesting finding is the comparison between BPTT and tPCNs. Therefore, I would recommend lifting this part of the paper and proposing the grid cell story as a potential application motivating further studies on this line of work, although I do see your point on extended analysis on this being out of scope." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We show that grid cells can be learned in neural networks via predictive coding, a biologically plausible learning rule." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning grid cells by predictive coding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ofVtMvRil},\nnote={under review}\n}" }, "abstract": { "value": "Grid cells in the medial entorhinal cortex (MEC) of the mammalian brain exhibit a strikingly regular hexagonal firing field over space. These cells are learned after birth and are thought to support spatial navigation but also more abstract computations. Although various computational models, including those based on artificial neural networks, have been proposed to explain the formation of grid cells, the process through which the MEC circuit ${\\it learns}$ to develop grid cells remains unclear. In this study, we argue that predictive coding, a biologically plausible plasticity rule known to learn visual representations, can also train neural networks to develop hexagonal grid representations from spatial inputs. We demonstrate that grid cells emerge robustly through predictive coding in both static and dynamic environments, and we develop an understanding of this grid cell learning capability by analytically comparing predictive coding with existing models. Our work therefore offers a novel and biologically plausible perspective on the learning mechanisms underlying grid cells. Moreover, it extends the predictive coding theory to the hippocampal formation, suggesting a unified learning algorithm for diverse cortical representations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "grid cells", "predictive coding", "computational neuroscience" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/df75e1a04211e59ed875105274fb984d58338ff0.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f95a6409e1a1770ccbd3bcf75fe0f183e1377605.zip" }, "title": { "value": "Learning grid cells by predictive coding" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2og3oWsC5n
TaKF$^{+}$: A versatile and parameter-efficient tuning for EEG foundation model
main
Active
EEG;Foundation model;Parameter-efficient fine-tuning;Additive fine-tuning
foundation or frontier models, including LLMs
3;3;3;5;6
3;4;5;4;4
2;3;2;2;3
2;2;2;2;2
2;2;3;3;3
4
4
2.4
2
2.6
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Why not evaluate on a more commonly used benchmark such as dataset B from 2008 BCI competition? https://www.doi.org/10.1109/TNSRE.2007.906956\n- Line 322: The term “fune-tuned” is confusing in this context, it suggests that the supervised methods are pre-trained. Is it the case?\n- Could you include the training times of the different methods?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "As pointed out by the authors, fine-tuning strategies are relatively underexplored with EEG foundation models. They start to fill this gap by proposing a novel fine-tuning algorithm.\nThe paper is well-structured and easy to follow, with good-quality figures. The diagrams are clear, and the use of pictograms makes their understanding intuitive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this article, the authors investigate fine-tuning techniques for pre-trained models in the context of EEG-based BCI. They present a method which combines adding adapter layers to the transformer (adapter form approach) and learning additional vectors which are concatenated to the key and value vectors in the transformer (prefix-finetuning). Both approaches are used in order to reduce the number of parameters to finetune." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- On the DREAMER and motor imagery (MI) datasets, the method proposed by the authors consistently produces relatively low results, underperforming compared to the supervised baseline. Lines 370-373, the authors suggest that this comparison is not fair and only compare the adaptive methods within themselves. However, I respectfully disagree and maintain that it is indeed appropriate and relevant. Indeed, all models had access to the same quantity of data from the target distribution. The fact that the pre-trained models perform poorly means that they are not able to correctly use the available target data. This issue is called “negative transfer” and it needs to be tackled, not ignored. \n- The models based on BIOT systematically perform around chance-level (50±2%) on the MI and DREAMER datasets. This raises questions about the statistical significance of these results. At the moment, this issue is not discussed or even mentioned by the authors. For transparency, I would suggest that the authors include the theoretical chance level in all tables and figures.\n- The MI dataset used is relatively unknown (only cited once on Google Scholar), which does not make it a good benchmark. 
As this is the only MI dataset used, I believe it is necessary to conduct additional experiments on another, more common, MI benchmark.\n- Line 124, the authors point out that few discussions were made on how to fine-tune models to downstream tasks in the BCI literature. While it is true that there are few, they are not nonexistent. As this is the main topic of the article, the few works that were done in that direction should at least be reported, if not compared to. The following two studies compared different downstream architectures combined with different fine-tuning regimes. In particular, they both explored additive fine-tuning algorithms, which is in contradiction with the statement line 145.\n - Kostas et al. (2021) https://doi.org/10.3389/fnhum.2021.653659\n - Guetschel et al. (2024) https://doi.org/10.3217/978-3-99161-014-4-003\n- The method proposed by the authors can only be applied to transformer-based pre-trained models and requires doing “surgical” modifications to the architecture. This is not easy to implement compared to simple finetuning.\n- The appendix is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What’s N_d in paragraph 3.2?\n- What is a “self-supervised modeling method”?\n- What is “SMM SOTA”? is it a neural network trained from scratch?\n\nTypo:\n- Eq (3): wrong matrix-vector shapes for “xW_q”" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Comprehensive related work situates TaKF+ well in EEG model adaptation literature.\n- Tackling parameter-efficient tuning for EEG is timely and could make a significant impact if successful.\n- TaKF+ is tested on 4 datasets and 2 recent pre-trained models.\n- There is no overlap between the evaluation datasets and the training datasets used in the pre-trained models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TaKF+, a parameter-efficient tuning method for adapting EEG foundation models to a variety of downstream tasks. TaKF+ combines a Task-Adaptive Key-Feature Extractor (TaKF) with adapter modules to extract and refine task-specific features while keeping the foundation model's parameters largely unchanged, thus minimizing computational costs. Through experiments on diverse EEG tasks like motor imagery and seizure detection, TaKF+ shows superior adaptability and stability compared to existing tuning methods, particularly in data-scarce settings. The study highlights TaKF+ as a versatile and efficient tool for EEG-based applications, addressing critical challenges in EEG model adaptation." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I understand you want to show that TaKF+ is more robust than the baseline but the tables are hard to read: sometimes TaKF+ is better sometimes not. To show this, you could use normalized plots as in [1, 2].\n- Heavy use of acronyms impacts readability: SMM, FT, LP, PT.\n- An analysis of the performance with respect to the number of training samples would be interesting.\n- A comparison of computational time with other methods would also be interesting.\n\n[1] Mellot, A., Collas, A., Chevallier, S., Gramfort, A., & Engemann, D. A. (2024). Geodesic Optimization for Predictive Shift Adaptation on EEG data. arXiv preprint arXiv:2407.03878.\n\n[2] Kim, M. J., Grinsztajn, L., & Varoquaux, G. (2024). CARTE: pretraining and transfer for tabular learning. ICML 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tHow does TaKF+ handle cases where downstream tasks have significantly different label distributions from the pre-trained EEG foundation model?\n2.\tCould the authors clarify how TaKF+ performs on larger, more heterogeneous EEG datasets that may have different sampling rates or noise levels?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tTaKF+’s integration of the Task-Adaptive Key-Feature Extractor (TaKF) and adapter modules is novel and effective in tuning EEG foundation models with minimal parameter updates.\n2.\tThe method is designed to work efficiently in low-data settings, demonstrating strong performance in few-shot learning scenarios.\n3.\tTaKF+ supports a broad range of downstream tasks, making it highly adaptable and suitable for diverse EEG-based applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The \"TaKF+\" paper presents a parameter-efficient tuning method aimed at enhancing EEG foundation models for diverse downstream tasks, such as seizure detection, emotion recognition, and motor imagery. The method, TaKF+, introduces a Task-Adaptive Key-Feature Extractor (TaKF) and adapter modules to adapt EEG foundation models in a task-agnostic manner, maintaining generalization and minimizing computational cost. Through evaluations on multiple datasets, the authors demonstrate TaKF+’s superior performance in few-shot scenarios and its adaptability across various EEG-based applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile TaKF+ introduces Task-Adaptive Key-Feature Extractor and adapter modules, the motivation behind this specific design choice seems not clear. 
The paper lacks a more detailed comparison of how TaKF+ improves upon existing methods, both in terms of unique technical contributions and in addressing specific limitations of previous EEG foundation models.\n2.\tThe novelty of TaKF+ could be strengthened by discussing how it differs fundamentally from other parameter-efficient fine-tuning approaches beyond its application to EEG. \n3.\tAlthough the empirical results are promising, the paper needs a deeper theoretical rationale supporting the choice of parameter-efficient tuning for EEG foundation models. Specifically, a clearer explanation of why the TaKF+ structure is particularly suited for EEG data, as opposed to alternative architectures, would strengthen the paper’s foundation.\n4.\tAlthough TaKF+ shows improvement over some baselines, the paper should include more comparisons with recent advancements in EEG model tuning or transfer learning.\nI will reconsider my assessment after checking the authors' response." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Section 6.1, the paper mentions that \"Although the Adapter performed more stably than other baselines, it did not achieve the versatility of TaKF+.\" What is meant by the versatility of TaKF+ in this context, and how is it quantitatively or qualitatively better than the Adapter in terms of versatility? More clarification is needed to justify this claim." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method maintains competitive performance while reducing computational cost by tuning only a small fraction of parameters, making it resource-efficient for real-world applications in EEG-based tasks.\n2. The few-shot learning experiments demonstrate that TaKF+ approaches or even surpasses the performance of fully fine-tuned models in some datasets, which is a highly promising result." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents TaKF+, a new approach for parameter-efficient fine-tuning of EEG foundation models. The Task-Adaptive Key-Feature Extractor (TaKF) combined with adapter modules enables efficient extraction of task-relevant features with minimal computational overhead, while maintaining or exceeding the performance of fully fine-tuned models in few-shot learning scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper states that 3% of parameters are tunable in additive fine-tuning methods, the exact tunable parameter ratio for TaKF+ is not provided. This lack of explicit comparison may lead to an unfair assessment of baseline methods. Clearly stating the tunable parameters for TaKF+ would provide a more transparent comparison.\n2. 
The core idea of TaKF+—combining the well-established Adapter technique with a Q-former-like cross-attention mechanism—might be seen as a simple extension of known methods, limiting the novelty of the contribution.\n3. The results indicate that TaKF+ does not consistently outperform all additive fine-tuning baselines across datasets. This inconsistency raises concerns about its general robustness and effectiveness.\n4. Some widely used baselines, such as LoRA, Adaptformer, and UniPELT, are absent from the experimental comparison, limiting the comprehensiveness of the evaluation. \n5. In Table 3, the performance of the proposed method's variants fluctuates significantly across different datasets, which casts doubt on the consistent effectiveness of individual components." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How much smaller is the amount of parameters added during the fine-tuning phase compared to the original model? Is it worth the potential reduction in effectiveness?\n2. Both proposed modules increase trainable parameters to aid the fine-tuning process. Could they be demonstrated through interpretable methods, such as visualization, to substantiate the different effects described in the text?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.Innovation: Pioneering attention to the issue of high parameter counts during fine-tuning in large EEG models. \n2.Significance: Fine-tuning usually requires adjusting all parameters, which can be computationally and temporally expensive. If the hypothesis holds, the corresponding optimizations could facilitate the widespread application of large models. \n3.Clarity of writing: The descriptions of the proposed TAKF method and Adapter model are highlighted effectively. \n4.Rich experimentation: A broad range of baseline comparisons including supervised and self-supervised learning SOTA methods were selected, and different approaches to fine-tuning with additional parameters were compared. \n5.Reproducibility: The paper provides extensive code, and the reported results seem reproducible based on the documentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on reducing computational demands during the fine-tuning phase of large EEG models by training only newly added parameters. It introduces the TAKF method to enhance the model's expressiveness by extracting task-specific features, and incorporates an Adapter module to transfer foundational knowledge of the base model to specific tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Innovation: In terms of methodology, only the TAKF module is newly introduced, while the Adapter model merely combines existing methods. \n2.Notably unimpressive experimental results: As shown in Table 1, the performance on most datasets using LaBraM as the base model is significantly lower than LaBraM's fine-tuning results; intuitively, the lower computational cost may lead to a substantial decrease in effectiveness; on the TUEV dataset, it performs worse than the Adapter-only approach, which requires further analysis and explanation; according to Tables 1 and 2, it underperforms the MAM Adapter method in 3/8 of the metrics, showing no significant advantage. \n3.Significant errors in tables: In the Appendix, Tables 7 and 8 present the same series of methods across four different datasets, yet the data for methods from LaBraM-LP to (Ours) LaBraM-TaKF+ are identical in both tables; there are also significant errors in table titles, e.g., Table 7 includes data for LeftRight Hand, which does not belong in the emotion recognition category. The authors are advised to carefully proofread the content." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024takf,\ntitle={Ta{KF}\\${\\textasciicircum}\\{+\\}\\$: A versatile and parameter-efficient tuning for {EEG} foundation model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2og3oWsC5n},\nnote={under review}\n}" }, "abstract": { "value": "Electroencephalogram (EEG) data, widely used in brain-computer interfaces (BCIs), pose challenges for reusing deep learning models trained on specific datasets due to variations in recording configurations and domain gaps. While foundation models pre-trained on large-scale EEG datasets have emerged as a promising solution, the challenge of effectively adapting them to downstream tasks has yet to be fully explored. To address this, we propose a novel tuning method, TaKF$^{+}$, which consists of the Task-Adaptive Key-Feature Extractor (TaKF) and adapter modules. TaKF$^{+}$ is designed to efficiently extract task-relevant features from EEG foundation models for downstream tasks while preserving the model’s parameters and significantly reducing computational overhead. We evaluate TaKF$^{+}$ across a diverse range of tasks, including motor imagery, emotion recognition, and seizure detection, and demonstrate its superior performance and adaptability compared to existing methods over publicly available datasets. Our research paves the way for more efficient and versatile applications of EEG foundation models across various domains." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "EEG", "Foundation model", "Parameter-efficient fine-tuning", "Additive fine-tuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ba6947c0e5f83258d252d273b17b600d99b195dc.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1768c026fad123e2ec45d0361f6452430dc4b929.zip" }, "title": { "value": "TaKF$^{+}$: A versatile and parameter-efficient tuning for EEG foundation model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
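Several of the TaKF+ reviews above describe prefix-tuning: learning additional vectors that are concatenated to the key and value vectors of a frozen transformer, alongside the $xW_q$ query projection discussed around Eq. (3). The sketch below illustrates that generic mechanism in a single-head form. It is not the authors' TaKF+ implementation; the function name, the prefix length, and the single-head simplification are all illustrative assumptions.

```python
import torch

def prefix_attention(x, W_q, W_k, W_v, P_k, P_v):
    # x:   (batch, seq, d)  token features
    # W_*: (d, d)           frozen projections of the pre-trained model
    # P_*: (n_prefix, d)    learned prefix vectors -- the only trained parameters
    B = x.shape[0]
    q = x @ W_q
    # Prepend the learned prefixes to the frozen keys and values.
    k = torch.cat([P_k.unsqueeze(0).expand(B, -1, -1), x @ W_k], dim=1)
    v = torch.cat([P_v.unsqueeze(0).expand(B, -1, -1), x @ W_v], dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # (batch, seq, d)
```

Because only P_k and P_v receive gradients, the tunable parameter count is 2 * n_prefix * d per attention layer, which is the kind of explicit ratio the reviewers ask the authors to report.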
2ogxyVlHmi
Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution
main
Active
One-Step Diffusion;Image Super-Resolution;Distillation-Free;Diffusion Models
applications to computer vision, audio, language, and other modalities
3;5;5;6
5;4;5;4
2;2;3;3
2;2;3;2
2;3;3;3
4.75
4.5
2.5
2.25
2.75
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does this method compare to traditional GAN methods (Real-ESRGAN, BSRGAN) in terms of running costs?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of using the pretrained diffusion model to train a Real-SR GAN is new. The introduction of the Noise-Aware Discriminator (NAD) and the edge-aware DISTS (EA-DISTS) perceptual loss seems novel and effective. \n2. Comprehensive experimental results on three real-world datasets show the proposed DFOSD achieves competitive or superior performance in both no-reference (NR) and full-reference (FR) image quality metrics.\n3. The overall writing is good and the paper is easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new GAN-based real-world image super-resolution (Real-ISR) method using pretrained diffusion models. The work introduces a Noise-Aware Discriminator (NAD) and an edge-aware perceptual loss function (EA-DISTS) for the GAN training. The paper presents extensive experimental results demonstrating that the proposed method achieves superior performance in both quantitative metrics and visual quality compared to state-of-the-art diffusion-based and GAN-based methods for Real-ISR." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the authors claim that the proposed method, DFOSD (Distillation-Free One-Step Diffusion), is a diffusion SR model, it is essentially a GAN-based method. The model only uses the parameters trained by a diffusion model, but there is no Markov process. This term may cause some misunderstanding of the method.\n2. While the paper emphasizes the reduction in training overhead and computational complexity relative to distillation-based methods, the overall framework still relies on heavy pre-trained models (e.g., Stable Diffusion UNet). The method may not be as lightweight as simpler GAN-based approaches, which could limit its adoption in resource-constrained environments. A more explicit comparison with simpler non-diffusion-based methods in terms of memory and computational requirements would provide a clearer picture.\n3. Although the authors report visual comparisons and use several no-reference and full-reference metrics, the paper would benefit from subjective user studies to evaluate the perceived quality of the generated high-resolution images. \n4. The paper does not provide an analysis of how sensitive DFOSD is to hyperparameter choices, such as the weights of the loss function components."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The paper presents DFOSD, a novel model that significantly advances real-world image super-resolution by offering a distillation-free one-step diffusion approach, which is highly innovative in the field.\n\n+ Two standout contributions are the noise-aware discriminator (NAD) and the edge-aware DISTS (EA-DISTS) loss. The NAD leverages prior knowledge from pre-trained models to enhance realism, while EA-DISTS improves texture detail restoration.\n\n+ The writing is clear and methodical, and the experimental section is robust, providing not only quantitative metrics but also qualitative assessments that demonstrate DFOSD's superior performance and efficiency in image super-resolution tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a model named DFOSD, addressing the problem of real-world image super-resolution. The authors propose a noise-aware discriminator (NAD) and an edge-aware DISTS (EA-DISTS) loss to optimize the model, resulting in superior performance on quantitative metrics and qualitative assessments. DFOSD achieves remarkable results on tasks such as image restoration, demonstrating significant improvements in realism and detail generation across various real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Usually, the training cost is not very large for diffusion-based SR methods compared to text-to-image tasks, so I think the distillation-free optimization is not strictly necessary. Besides, we could also pre-compute the output of teacher models with multi-step predictions before starting the complete training. Can you elaborate further on the advantages of non-distillation training?\n\n- The DFOSD proposed in this paper is just a marginal optimization based on OSEDiff[1] and other adversarial training-based methods[2,3,4].\n\n- The proposed EA-DISTS loss lacks novelty; it is just an experimental trick.\n\n- The noise-aware discriminator is not new; the same ideas are shown in SD3-turbo[2] and TAD-SR[3]. Although the NAD seems simpler and effective, it is not a very innovative method.\n\n- The experimental setting is not rigorous and is unfair; will you release the 200K high-quality images to the public?\n\n\n[1] One-Step Effective Diffusion Network for Real-World Image Super-Resolution, 2024.\n\n[2] Adversarial Diffusion Distillation, 2023.\n\n[3] Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation, 2024.\n\n[4] One Step Diffusion-based Super-Resolution with Time-Aware Distillation, 2024."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. DFOSD uses learnable text embedding without DAPE and the text encoder, reducing inference time compared to OSEDiff. However, it's unclear if this fully accounts for the 0.24s speedup. The authors should provide a breakdown of inference times for each major component (e.g., text embedding, main network, etc.) for DFOSD and OSEDiff on the same device. This would help clarify where the speedup is coming from.\n\n2. In Table 4, DFOSD's performance with the LSDIR+10K FFHQ training dataset is worse than OSEDiff with the same training dataset in no-reference metrics (MUSIQ, ManIQA, ClipIQA). It would be useful to clarify if these improvements in no-reference metrics are primarily due to the high-quality training dataset. A more detailed analysis in Sec. 4.3 would be helpful.\nTo avoid the influence of input resolution, I suggest the authors evaluate DFOSD's performance with different training datasets on the pre-cropped test dataset (https://huggingface.co/datasets/Iceclear/StableSR-TestSets) from StableSR [1]. \n\n[1] Wang, Jianyi, et al. \"Exploiting diffusion prior for real-world image super-resolution.\" International Journal of Computer Vision (2024): 1-21.\n\nI will consider raising my score if my primary concerns are addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes a noise-aware discriminator, leveraging the prior knowledge from the pre-trained SD UNet. This enhances the realism and details of the reconstructed images without much additional memory usage or training time.\n\n2. The proposed EA-DISTS can enhance texture detail restoration.\n\n3. The writing is good, and the idea is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DFOSD, a Distillation-Free One-Step Diffusion SR model that enhances image detail and visual quality. Key contributions include a Noise-Aware Discriminator (NAD), which improves realism through adversarial training, and Edge-Aware DISTS (EA-DISTS) loss, which leverages image edges to enhance the authenticity of reconstructed details." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper introduces learnable text embedding to replace the text extractor, significantly reducing inference time. More explanation of the learnable text embedding, including how it is implemented and trained, is needed for clarity. \n \n2. This paper evaluates image quality without cropping (Sec. 4.1, Lines 362-364), which is unusual for comparing SD-based SR methods, as they are sensitive to input resolution. 
I suggest evaluating the methods on the pre-cropped test dataset from StableSR [1] (https://huggingface.co/datasets/Iceclear/StableSR-TestSets), which has a fixed resolution of $512\\times512$, avoiding random cropping and non-reproducible results. This test dataset is widely used in various SD-based SR methods, ensuring a more standardized and fair comparison while addressing the authors' concerns.\n\n[1] Wang, Jianyi, et al. \"Exploiting diffusion prior for real-world image super-resolution.\" International Journal of Computer Vision (2024): 1-21.\n\n3. The idea of NAD is similar to UFOGen [2] and LADD [3]. Relevant references and comparisons should be provided.\n\n[2] Xu, Yanwu, et al. \"Ufogen: You forward once large scale text-to-image generation via diffusion gans.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[3] Sauer, Axel, et al. \"Fast high-resolution image synthesis with latent adversarial diffusion distillation.\" arXiv preprint arXiv:2403.12015 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. While NAD operates in the noisy latent domain, an alternative approach would involve operating on decoded images. The reviewer acknowledges that the VAE decoder has a large parameter count, yet it would be insightful to see experimental results in the image domain.\n\n2. As in Weakness 4, could the authors provide details about the collected dataset, specifically regarding its scale, resolution, and diversity?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Quantitative and qualitative analyses clearly demonstrate the effectiveness of the proposed method. Specifically, Figure 3 illustrates how DFOSD successfully aligns mid-level features with real image distributions. DFOSD achieves significant improvements in both distortion-based metrics (PSNR and SSIM) and perceptual metrics, which is interesting. Additionally, computational costs are significantly reduced, as shown in Table 3." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces DFOSD, a one-step diffusion model for real-world image super-resolution that bypasses multi-step diffusion processes and teacher models, reducing training and inference time. It integrates a noise-aware discriminator (NAD) within an adversarial framework to boost perceptual SR quality and employs an EA-DISTS loss to further enhance perceptual performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The relationship between the proposed NAD and EA-DISTS remains somewhat unclear. 
Both components aim to enhance perceptual performance, but it would be beneficial for the reviewer if their complementary relationship, if any, were explicitly clarified.\n\n2. Although Table 5 provides ablation studies on different loss functions, other perceptual losses should be included for a more comprehensive comparison. The table currently highlights the superiority of DISTS over LPIPS, but this might be due to the larger number of parameters used in DISTS. It would be useful to include additional perceptual losses, such as NIQE, MUSIQ, ManiQA, and ClipIQA, in both their original and EA-enhanced versions.\n\n3. What distinguishes NAD from *? What specific advantages does NAD offer over these approaches?\n\n*A. Sauer, Adversarial diffusion distillation\n*A. Sauer, Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation\n\n4. Since this paper follows the Real-ESRGAN degradation pipeline, it can use any high-quality images for training, as shown in Table 4. However, as this is not a unique contribution of the paper, it would be helpful to include detailed information, if available, on \"our dataset.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024distillationfree,\ntitle={Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ogxyVlHmi},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have been achieving excellent performance for real-world image super-resolution (Real-ISR) with considerable computational costs. Current approaches are trying to derive one-step diffusion models from multi-step counterparts through knowledge distillation. However, these methods incur substantial training costs and may constrain the performance of the student model by the teacher's limitations. To tackle these issues, we propose DFOSD, a Distillation-Free One-Step Diffusion model. Specifically, we propose a noise-aware discriminator (NAD) to participate in adversarial training, further enhancing the authenticity of the generated content. Additionally, we improve the perceptual loss with edge-aware DISTS (EA-DISTS) to enhance the model's ability to generate fine details. Our experiments demonstrate that, compared with previous diffusion-based methods requiring dozens or even hundreds of steps, our DFOSD achieves comparable or even superior results in both objective metrics and subjective evaluations. Our DFOSD also obtains higher performance and efficiency compared with other one-step diffusion methods. We will release code and models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "One-Step Diffusion", "Image Super-Resolution", "Distillation-Free", "Diffusion Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c61ee9811492740d26c79cc3f8a24400782ed2cf.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fd901f94e9844c76224f2236ec3bc5619ed5f3b3.pdf" }, "title": { "value": "Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
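The DFOSD reviews above repeatedly discuss the edge-aware DISTS (EA-DISTS) loss, which the abstract describes only as an edge-aware improvement of the perceptual loss for generating fine details. One plausible reading, offered purely as a sketch under that assumption and not as the authors' actual formulation, is to evaluate a base perceptual distance (e.g., a pretrained DISTS module) on Sobel edge maps in addition to the raw images. The helper names, the Sobel extraction, and the equal weighting of the two terms are all assumptions.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # Per-channel Sobel gradient magnitude; img has shape (B, C, H, W).
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device, dtype=img.dtype)
    ky = kx.t().contiguous()
    C = img.shape[1]
    kx = kx.reshape(1, 1, 3, 3).repeat(C, 1, 1, 1)  # one depthwise kernel per channel
    ky = ky.reshape(1, 1, 3, 3).repeat(C, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=C)
    gy = F.conv2d(img, ky, padding=1, groups=C)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # eps keeps sqrt differentiable at 0

def edge_aware_loss(base_loss, sr, hr):
    # base_loss: any perceptual distance callable (a DISTS module, LPIPS, etc.).
    # Penalizes discrepancies both in the images and in their edge maps.
    return base_loss(sr, hr) + base_loss(sobel_edges(sr), sobel_edges(hr))
```

Under this reading, the edge term explicitly re-weights high-frequency structure, which would explain the reviewers' observation that EA-DISTS improves texture detail restoration relative to plain DISTS or LPIPS.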
2orBSi7pvi
STDM: Spatio-Temporal Diffusion Models for Time Series Analysis
main
Active
Diffusion Models;Time Series Analysis;Anomaly Detection;Forecasting
generative models
3;3;3;3
4;4;5;4
1;2;1;2
2;2;1;2
2;2;1;1
3
4.25
1.5
1.75
1.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Is the convolution kernel $H$ trainable? What is the extra training cost of this design in the diffusion forward process?\n- How does the convolution kernel capture spatio-temporal correlations? In my opinion, kernel $H$ seems to be able to capture only **temporal** pattern correlations within a series, but the author claims that STDM captures **spatial** correlations (for example, in the contribution section). The method cannot capture spatio-temporal correlations of multivariate series anyway, so I don't understand why the author named it a **Spatio-Temporal** diffusion model." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Novel research perspective. As far as the reviewer is aware, this is the first paper to improve the performance on time series analysis tasks by redesigning the diffusion forward process.\n- The proposed method is flexible and extensible, and can be seamlessly integrated into time series diffusion models to improve performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the Spatio-Temporal Diffusion Model (STDM), which redesigns the diffusion forward process for capturing correlations in time series data and can be seamlessly integrated into current diffusion models to improve their performance in time series analysis tasks. Experiments explore the performance of STDM in time series anomaly detection and forecasting tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation is not clear and does not tell a coherent story. I do not understand the motivation and significance of paying attention to the temporal patterns in the noise-adding diffusion forward process. It appears that the temporal correlations introduced by the noise in the forward process may enable the model to effectively consider and learn these correlations for denoising in the reverse process. However, the writing of the paper does not clearly explain this.\n- In Section 3, the author mentioned that \"our methodology innovatively manipulates the forward process. This adjustment facilitates faster convergence during training and enhances robustness during inference\". Nevertheless, the mechanisms by which STDM accelerates training and improves inference robustness are not sufficiently explained, and the paper lacks both theoretical analysis and empirical evidence to support this assertion.\n- The experimental results only evaluate the DiffusionAE and TimeGrad models, which are not enough to support the effectiveness of the proposed method. 
Additionally, there is a notable absence of baselines for time series forecasting and anomaly detection, which limits the comprehensiveness of the evaluation.\n- The writing and charts are extremely crude and rudimentary." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The motivation of producing entirely new samples via diffusion models seems to be interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Spatio-Temporal Diffusion Models for generating entire samples of time series. Experiments are carried out on synthetic and real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Figure 1 is not clear enough to illustrate the strength of STDM in comparison with vanilla DDPM.\n2. What does ``Spatio-Temporal'' mean, and how is it related to the proposed approach?\n3. More relevant works need to be discussed and compared, including CSDI: Conditional score-based diffusion models for probabilistic time series imputation (NIPS 2021); Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask (KDD 2024).\n4. The contribution and novelty are unclear. What is the superiority of STDM in comparison with current time series SSL methods? \n5. Vital baselines are missing, e.g., SimMTM (NIPS 2023), TS-TCC (TPAMI 2024), TS2Vec (AAAI 2022), etc.\n6. More datasets should be analyzed, e.g., ETTh1/h2/m1/m2 for time series forecasting, and SMD/SWaT for anomaly detection." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Motivation issue. See weaknesses. Does using convolution-based noise addition/removal versus Gaussian-based noise addition/removal have a substantial impact on sample generation? Can this be theoretically proven?\n \n2. Eq (16). If I understand correctly, $x_0$ should be $x_{k-1}$.\n \n3. Eq (14). As $k \\to \\infty$, will this distribution converge to $N(0, I)$? This is relevant because in the experiments, you directly sample $x_K \\sim N(0, I)$.\n \n4. The experiments are too simplistic. 
I recommend adding more baselines to compare with diffusion models that use different noise addition processes." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Overall, the writing is fluent and easy to follow. The key details are well-explained. The paper replaces the traditional linear transformation in the noise addition process of diffusion models with convolution operations, which, to my knowledge, has not been seen in other work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a novel approach to enhance denoising diffusion models for time series tasks, addressing the challenge of conditioning for accurate reconstruction and sampling based on past time steps. Unlike existing methods, STDM guides the diffusion model's forward process by leveraging the high correlation between neighboring time steps. This is achieved through a diffusion step-dependent convolutional kernel and correlated noise to capture spatial relations and refine the degradation of inputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The motivation of the paper is not very clear. From the perspective of guided diffusion, the conditioning approach used here doesn’t seem different from existing works. In my view, the main contribution of this paper lies in the use of convolution operations in the noise addition process, which introduces a smoothing effect on the signal distinct from traditional diffusion models. However, this smoothing approach doesn’t appear particularly meaningful, as in diffusion models we generally don’t focus much on the intermediate states in the noise/denoising process but rather only on the final generated samples. Additionally, the experiments are weak, as the paper only compares against the original DDPM and overlooks recent work from the past few years." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Could the authors provide a step-by-step derivation of equation (16), highlighting how it differs from the traditional diffusion process derivation?\n- Could the authors provide a clear definition of $c$ for each of the evaluated tasks?\n- How is the proposed method applied to both autoregressive and non-autoregressive generation processes? In particular, how does it work with TimeGrad?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The idea of incorporating explicit capture of temporal patterns within the time series during the forward process is inspiring. 
The step-dependent convolutional operator for executing the forward process is novel and reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author proposed Spatio-Temporal Diffusion Models (STDM), introducing a new forward process for time series diffusion models. The new forward process uses step-dependent convolutional kernels to capture spatial relations and a combined, correlated noise to degenerate the input. The method can be integrated seamlessly into existing time series models like DiffusionAE and TimeGrad, replacing the original forward process. Experimental results show the effectiveness of the proposed method on two tasks: time series anomaly detection and forecasting, with one baseline model examined for each task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Eq. (16) needs to be validated; since the forward process is modified, the differences in the derivation should be noted. \n- The method section seems incomplete; for example, the definition of $c$ is not clearly stated. \n- The experiments are only on one baseline method for each task, which seems inadequate. The content of Table 2 is not as described in the caption (MG-TSD is mentioned in the caption but not shown in the table content).\n- In TimeGrad, the multi-variate time series are generated autoregressively, which seems to contradict the proposed method, where $x^0$ denotes a multi-step series. It's not clear to me how the convolution kernel is applied to cross-sectional data (containing only one time step). Please correct me if I misunderstood some steps here." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel approach for manipulating the forward process of time series diffusion models to benefit from temporal correlations" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024stdm,\ntitle={{STDM}: Spatio-Temporal Diffusion Models for Time Series Analysis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2orBSi7pvi},\nnote={under review}\n}" }, "abstract": { "value": "Denoising diffusion models have emerged as a formidable method, consistently surpassing previous state-of-the-art benchmarks. However, a notable challenge in time series-related tasks like anomaly detection and forecasting is the conditioning for models to reconstruct inputs accurately or generate samples based on past time steps rather than producing entirely new samples. To address this, we introduce a novel technique that enhances the sampling capabilities of denoising diffusion models for time series analysis, namely Spatio-Temporal Diffusion Models (STDM). While recent methods fall short of mapping contextual neighborhood dependencies directly into the sampling of a noisy sample, we focus on guiding the forward process of the diffusion model. The degeneration of a sample is based on the idea that values of neighboring time steps are highly correlated. We benefit from this assumption by presenting a diffusion step-dependent convolutional kernel to capture spatial relations and a combined, correlated noise to degenerate the input. Our method can be integrated seamlessly into various existing time series diffusion models. We compare the results of anomaly detection and forecasting when using the traditional and our novel forward process. 
In our experiments on synthetic and real-world datasets, we show that an adaption of the forward process can be beneficial, as our approach outperforms diffusion models with the ordinary forward process in task-specific metrics, underscoring the efficacy of our strategy in enhancing time series analysis through advanced diffusion techniques." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models", "Time Series Analysis", "Anomaly Detection", "Forecasting" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8cc3958ec00a76d15fad5fe9c6fc82625cb639b8.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "STDM: Spatio-Temporal Diffusion Models for Time Series Analysis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
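Several STDM reviewers anchor their questions (the derivation of Eq. (16), whether Eq. (14) converges to $N(0, I)$) in the standard DDPM forward process, so it is worth stating that baseline explicitly. In a vanilla DDPM, a standard fact independent of this submission, the closed-form marginal of the forward (noising) process is

```latex
q(x_k \mid x_0) = \mathcal{N}\!\left(x_k;\ \sqrt{\bar{\alpha}_k}\, x_0,\ (1 - \bar{\alpha}_k)\, I\right),
\qquad \bar{\alpha}_k = \prod_{s=1}^{k} \alpha_s, \quad \alpha_s = 1 - \beta_s .
```

Since $\bar{\alpha}_k \to 0$ as $k$ grows for any schedule with $\beta_s > 0$, this marginal converges to $\mathcal{N}(0, I)$; that is precisely the property the reviewers ask the modified, convolution-based forward process to preserve, given that inference starts by sampling $x_K \sim N(0, I)$.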
2ozEpaU02q
Enhancing Adversarial Transferability Through Exploiting Multiple Randomized Trajectories for Better Global Guidance
main
Active
adversarial transferability
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
4;3;4;4
1;2;3;2
3;2;3;2
2;1;3;3
4
3.75
2
2.5
2.25
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) The paper is well-structured.\n\n(2) The research topic is significant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of enhancing adversarial transferability in deep neural networks (DNNs) by proposing new strategies to avoid local optima during the generation of adversarial examples." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The novelty is limited. I believe that the essence of the proposed RGI and dual examples (DE) is equivalent to reducing the variance of attack directions, since RGI and DE accumulate/average multiple attack directions for each perturbation update. However, accumulating multiple attack directions to stabilize each perturbation update has already been proposed in [1][2].\n\n(2) Insufficient Evaluation: The evaluations presented are not robust enough. Given the similarity of the proposed approach to methods in [1][2], it is crucial to include these as baseline comparisons. Moreover, widely recognized transferable attacks such as DIM [3] and TIM [4] should also be included as baselines.\n\n(3) The attack success rates reported in Table 3 against defense methods like NRP, RS, HGD, and AT are notably low. In contrast, prior methods like DIM and TIM have achieved higher success rates against these defenses, raising concerns about the fairness and validity of the evaluation.\n\n(4) Since RGI and DE introduce additional steps in generating perturbations, it is unfair to compare the proposed methods and baselines with differing numbers of optimization steps.\n\n\n(5) Typos and format errors:\n (1) In the abstract, line 4, \"samplesoften\"; (2) in Section 2.2, the reference format is not correct. \n\n\n[1] Wu, Lei, Zhanxing Zhu, and Cheng Tai. \"Understanding and enhancing the transferability of adversarial examples.\" arXiv preprint arXiv:1802.09707 (2018).\n\n[2] Huang, Tianjin, et al. \"Direction-aggregated attack for transferable adversarial examples.\" ACM Journal on Emerging Technologies in Computing Systems (JETC) 18.3 (2022): 1-22.\n\n[3] Xie, Cihang, et al. \"Improving transferability of adversarial examples with input diversity.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.\n\n[4] Dong, Yinpeng, et al. \"Evading defenses to transferable adversarial examples by translation-invariant attacks.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "**1.** This paper has well orgnized visualization, which effectively helps theoretical derivation. For example, Figure 2 clearly explains the different paths of the FGSM during the iterative process.\n\n**2.** The novel method achieves SOTA results in the major experiments, which aligns with those inferences." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduced new optimization strategies for adversarial attacks called Randomized Global Initialization and Dual Example. These methods trade-off computational cost for improved transferability by exploring more of the loss landscape. The authors demonstrated through extensive experiments that Randomized Global Initialization and Dual Example significantly boost the performance of gradient-based attack methods by leveraging broader optimization paths." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1.** The authors claim that their methods enhances the adversarial transferability of attacks, but does not conduct enough evaluation under defences to prove this.\nFor example, some novel adversarial defence methods claim that they can defend the attackers by reducing adversarial transferability [1,2,3,4]. If these strong adversarial defence algorithms could be used as a benchmark and given the success rate of the attack, it would better demonstrate the validity of the advantage in adversarial transferability.\n\n**2.** The authors don't seem to mention the limitations of their paper.\n\n[1] G. Carbone et al., “Robustness and interpretability of neural networks’ predictions under adversarial attacks,” 2023.\n\n[2] Y. Ma, M. Dong, and C. Xu, “Adversarial robustness through random weight sampling,” in Advances in Neural Information Processing Systems, A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, Eds., vol. 36. Curran Associates, Inc., 2023, pp. 37 657–37 669.\n\n[3] M. Dong, X. Chen, Y. Wang, and C. Xu, “Random normalization aggregation for adversarial defense,” Advances in Neural Information Processing Systems, vol. 35, pp. 33 676–33 688, 2022.\n\n[4] B. Li, C. Chen, W. Wang, and L. Carin, “Certified adversarial robustness with additive noise,” Advances in neural information processing systems, vol. 32, 2019." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the **Weakness section**. I might raise my score if the authors address my concerns, especially regarding the computational overhead of the algorithm." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. It is necessary to dive into transferable attacks to discover hidden defects of neural networks, especially for realistic black-box environments.\n\n2. The authors conducted extensive experiments, reporting performance metrics across a diverse set of models (ResNet-18, DenseNet-121, ViT, etc.) and testing both single-model and ensemble settings. The results consistently show improvement in attack success rates, particularly for transformer-based models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces two key strategies—randomized global initialization (RGI) and dual example generation (DE)—which leverage multiple optimization trajectories to explore the loss landscape more comprehensively. This is a novel addition to adversarial attack literature, aiming to improve transferability by avoiding local optima." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Main concerns:**\n\n**1. Concerns about innovation:**\n\nThe generation method of Dual Examples is still unclear. In line 293, the author claims to use random Gaussian noise to generate N initialization samples. Are the sampling methods for random initialization of momentum and dual samples consistent? If so, there may be some overlap in the sampling methods. The random perturbation sampling in momentum initialization (such as randomly initializing multiple perturbations) is basically consistent with the generation of Dual Examples (sampling multiple trajectories from the neighborhood), both of which are sampled in the neighborhood and optimized on their own independent trajectories. This means that Dual Examples actually overlaps with the strategy of momentum initialization, does not really provide new information or optimization paths, and only increases the complexity of the calculation. Could I ask the authors to provide a detailed comparison of the sampling methods used for random initialization of momentum and dual samples?\n\nIn addition, since there are already numerous works **[1] [2] [3]** that combine the gradient information of neighborhood samples to improve transferability, could I think that the core of this paper is essentially combine neighborhood exploration (through random initialization and Dual Examples) with the pre-convergence momentum strategy of GIMI-FGSM? 
The pre-convergence momentum strategy is already reflected in GIMI-FGSM, and more gradient information is introduced by increased neighborhood exploration (random initialization and Dual Examples) to calculate the average momentum, mainly by sampling multiple examples in the neighborhood. Could I ask the authors to provide a more detailed comparison of their method with existing works, particularly focusing on how their approach differs from or improves upon the combination of neighborhood exploration and pre-convergence momentum strategies?\n\n**2. Randomized Initialization Without Sufficient Parameter Analysis:**\n\nThe paper proposes randomized global initialization but does not provide a systematic study on how different levels of randomness affect convergence and transferability. Specifically, there is no ablation to explore the sensitivity of RGI to the number of random initializations or perturbation magnitude.\n\nRGI uses a predefined number of samples, yet the impact of this parameter **N** remains unclear. Testing different sample sizes or introducing an analysis of the trade-offs between computation cost and performance gain would make the method more practical and understandable.\n\n**3. Vagueness on Empirical Validation:**\n\nWhile the experimental results are promising, the paper’s reliance on empirical data without deeper technical analysis limits the work’s robustness. For instance, t-SNE visualizations show trajectories across random initializations but fail to address how these trajectories relate to transferable gradient directions in high-dimensional space. \n\nThe contribution of Figure 2 is ambiguous. In lines 214-215, the author says \"running GIMI-FGSM from different random starting points often causes adversarial examples to converge to distinct local optima\", but Figure 2 is only a visualization of adversarial sample updates and does not reflect the concept of \"local optimum\". In addition, the author claims in lines 197-198 that \"even with the same step size and number of optimization steps, each attack pushes the adversarial example to different distances from the benign sample.\" Obviously, when the input perturbations are inconsistent, the update directions of the adversarial samples generated by random initialization are different. This phenomenon does not explain the contribution of random initialization to transferability. I suggest modifying Figure 2 to more clearly reflect the motivation.\n\n**4. Computational overhead of multiple trajectories:**\n\nThe core method of this paper relies on multi-trajectory optimization of adversarial examples, including random initialization and Dual Examples, which means that each update requires separately calculating gradients on multiple trajectories. This process significantly increases the computational cost because each trajectory needs to be forward and backward propagated independently, and then the gradient information of different trajectories is integrated for the update. These multiple optimization trajectories increase the demand for computing resources and memory to a certain extent. Especially on large-scale models or datasets (such as ImageNet), such consumption may not be negligible. Comparing the running time of the proposed method with other baselines would effectively evaluate the efficiency of the algorithm.\n\n**References:**\n\n**[1]** Wang, X., & He, K. (2021). Enhancing the transferability of adversarial attacks through variance tuning. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1924-1933).\n\n**[2]** Zhu, H., Ren, Y., Sui, X., Yang, L., & Jiang, W. (2023). Boosting adversarial transferability via gradient relevance attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4741-4750).\n\n**[3]** Wang, X., Jin, Z., Zhu, Z., Zhang, J., & Chen, H. (2024, October). Improving Adversarial Transferability via Frequency-Guided Sample Relevance Attack. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2410-2419).\n\n\n**Minor concerns:**\n\n**1. Ambiguity of pseudocode parameters:**\n\nThe value of the **T'** parameter in line 5 of the pseudocode is not specified. Since its function is similar to that of the parameter **P** in GIMI-FGSM, can it be assumed that it is set to 5 with reference to the parameter selection of GIMI-FGSM? Could I ask the authors to clarify the value of **T'** and explain its relationship to the **P** parameter in GIMI-FGSM?\n\n**2. Possible typos in pseudocode:**\n\nIn line 16 of the pseudocode, should **$\frac{1}{N} \sum_{n=1}^{N}$** be **$\frac{1}{K} \sum_{k=1}^{K}$**? I'm not sure. **$g_{k,t}$** is based on the gradients of the **K** Dual Examples, so **1/K** should be used instead of **1/N** when averaging **$g_{k,t}$** (here **N** is the number of samples used to calculate the randomly initialized momentum, whose loop ends at line 9).\n\n**3. Reproducibility Concerns:**\n\nGiven the complexity of the proposed strategies and the lack of specific initialization parameters, reproducibility may be challenging for future researchers. If possible, open-sourcing the code would help improve transparency, allowing the community to validate and build upon the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tThe introduction of the randomized global initialization and dual example strategies increases the computational overhead. Does the paper quantitatively evaluate the time complexity and computational resource requirements of these strategies? How computationally efficient is this approach in practical applications?\n2.\tThe method is sensitive to some hyperparameters in different models; have the authors evaluated the optimal values for these parameters on different models? How are these parameters chosen in practical applications? \n3.\tThe paper primarily compares its method with gradient-based approaches but does not address non-gradient-based adversarial attack methods. What are the advantages and disadvantages of this method compared to non-gradient-based approaches?\n4.\tIs there an issue with adversarial example stability due to certain random initializations when using randomized global initialization? Have the authors evaluated the variance in attack success rates across different random initializations?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "First, this paper provides a novel approach with meaningful technical contributions and well-supported experimental validations. The introduction of Randomized Global Initialization (RGI) and Dual Examples demonstrates a unique perspective in enhancing adversarial transferability, which could inspire further research in adversarial robustness. Additionally, the paper is well-organized and readable, with detailed descriptions that make complex technical methods accessible. Extensive experimental results on multiple models and datasets, including both CNNs and vision transformers, reinforce the method’s general applicability and strengths in adversarial attack scenarios. The main technical contributions of this paper include the following:\n1.\tThe RGI technique formalizes an approach to initialize adversarial examples across multiple random perturbations, capturing a more representative global momentum for better generalization and reduced local optima entrapment.\n2.\tThe Dual Example Strategy enhances the transferability of adversarial examples by generating parallel trajectories, effectively exploring a larger portion of the loss landscape. This broad approach ensures more robust adversarial optimization across different models. \n3.\tThe proposed RGI and Dual Examples are seamlessly integrated with existing gradient-based methods, highlighting the flexibility and adaptability of the proposed approach across various adversarial attack frameworks.\n4.\tExtensive experiments on the ImageNet-1K dataset demonstrate that the method outperforms other adversarial transfer techniques. The paper provides theoretical insights that underscore the importance of initialization and trajectory exploration in adversarial attacks, contributing to the broader understanding of optimization in high-dimensional spaces." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an innovative approach to enhance adversarial transferability by introducing two primary strategies: Randomized Global Initialization (RGI) and Dual Examples. The RGI strategy leverages multiple random perturbations around an initial sample to create a more representative global momentum, thus broadening the optimization landscape and reducing the likelihood of adversarial samples being trapped in local optima. Meanwhile, the Dual Examples strategy generates parallel trajectories for adversarial optimization, effectively exploring a larger portion of the loss landscape and further enhancing transferability. Experimental results on the ImageNet-1K dataset demonstrate that this approach significantly improves attack success rates across various models, including CNNs and vision transformers, underscoring the proposed method's efficacy in increasing adversarial transferability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe Randomized Global Initialization and Dual Example strategies introduce significant computational requirements, especially when optimizing multiple trajectories simultaneously. This could limit the method's practicality in resource-constrained environments. 
\n2.\tThe approach relies on several hyperparameters (e.g., number of random initializations, step size sequence), which may require fine-tuning for different models. This sensitivity could hinder straightforward application and scalability across diverse model architectures.\n3.\tThis article may lack a theoretical proof for the validity of the global initialization. In addition, there is a lack of experimental proof of the optimal settings for the number of samples to compute the global momentum and dual examples." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Adversarial Transferability Through Exploiting Multiple Randomized Trajectories for Better Global Guidance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ozEpaU02q},\nnote={under review}\n}" }, "abstract": { "value": "Deep neural networks are well-known for their vulnerability to adversarial examples, particularly demonstrating poor performance in white-box attack settings. However, most white-box attack methods heavily depend on the target model and often get trapped in local optima, leading to limited adversarial transferability. Techniques such as momentum, variance reduction, and gradient penalty mitigate overfitting by combining historical information with local regions around adversarial examples, but exploration of the global loss landscape remains constrained, hindering further performance improvements.\n\nIn this work, we find that initialization influences the optimization of adversarial examples, often guiding them toward multiple local optima, providing an opportunity to explore the loss landscape more effectively. Based on this insight, we propose two strategies: randomized global initialization and dual examples. These strategies utilize multiple trajectories from benign samples to capture global optimization directions, enhancing adversarial transferability. Our approach integrates seamlessly with existing adversarial attack methods and significantly improves transferability, as demonstrated by empirical evaluations on the standard ImageNet dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "adversarial transferability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/03368974831e5a9913d88a74dc94fe2b60b5926d.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Enhancing Adversarial Transferability Through Exploiting Multiple Randomized Trajectories for Better Global Guidance" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
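The reviews of this submission keep returning to two technical points: whether averaging gradients over N random initializations (RGI) is distinct from averaging over K dual-example trajectories, and whether line 16 of the pseudocode should normalize by 1/N or 1/K. A concrete reference makes both easier to discuss. The sketch below is not the submission's actual algorithm; it is a minimal, hypothetical PyTorch rendering of the two-phase idea the reviews describe, with invented names (`rgi_mifgsm`, `n_init`) and the simplifying assumptions of 4-D image batches in [0, 1], an L-infinity budget, and per-sample L1 gradient normalization as in MI-FGSM.

```python
import torch

def rgi_mifgsm(model, loss_fn, x, y, eps=8/255, alpha=2/255,
               steps=10, n_init=5, decay=1.0):
    """Hypothetical sketch of MI-FGSM with randomized global
    initialization: average gradients over n_init random starts to
    seed a 'global' momentum, then run standard momentum iterations."""
    # Phase 1: one gradient per random start inside the eps-ball,
    # averaged over n_init starts (the 1/N term discussed above).
    g_global = torch.zeros_like(x)
    for _ in range(n_init):
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        g_global += grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
    g_global /= n_init

    # Phase 2: standard MI-FGSM ascent seeded with the global momentum.
    momentum, adv = g_global, x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        momentum = decay * momentum + grad / grad.abs().mean(
            dim=(1, 2, 3), keepdim=True)
        adv = adv.detach() + alpha * momentum.sign()
        # Project back into the eps-ball and the valid pixel range.
        adv = (x + (adv - x).clamp(-eps, eps)).clamp(0, 1)
    return adv.detach()
```

In this rendering, a separate per-step average over K dual-example trajectories would carry its own 1/K factor, consistent with the reviewer's reading of line 16; the two averages only coincide if the dual examples are resampled in exactly the same way as the phase-one starts.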
2p03KljxE9
Language-Assisted Feature Transformation for Anomaly Detection
main
Active
anomaly detection;feature transformation;vision-language model;language guidance
applications to computer vision, audio, language, and other modalities
3;5;6;8
5;5;5;3
2;3;3;3
2;3;3;3
1;3;3;2
5.5
4.5
2.75
2.75
2.25
-0.800641
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What does $c_i$ represent in Equations 5 and 6?\n2. For zero-shot anomaly detection, can the transformed image features still match the text features effectively?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The authors explore a valuable research topic that contributes to the current body of knowledge—how to adjust decision boundaries using language to enhance CLIP’s anomaly detection performance. \n- The proposed method stands out due to its training-free nature, which provides flexibility in application across various tasks with limited data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a feature transformation method aimed at focusing on specific image attributes guided by language. The approach, termed Language-Assisted Feature Transformation (LAFT), leverages the shared embedding space of vision-language models (specifically CLIP) to modify image features according to user-defined concepts expressed in natural language, enabling enhanced anomaly detection capabilities without additional training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper uses the vector difference between two textual descriptions to represent a single attribute and maps this attribute directly to image feature transformation. However, this simplification raises at least three issues:\n - The properties of objects cannot be adequately represented by the difference between two concepts.\n - Real-world attributes are often complex and may involve different colors or textures across various parts of an object.\n - The text embedding space and the image embedding space in CLIP are not perfectly aligned; therefore, vectors derived from the text space may not be directly applicable to the image space.\n\n- To validate the effectiveness of feature transformation, using a CLIP-based classification task would be more suitable than anomaly detection.\n\n- The paper lacks results on anomaly localization, which is crucial for industrial applications.\n\n- The language throughout the paper could be clearer. It is recommended to refer to previous works using proper method names and provide concise descriptions of these methods.\n\n- The axis labels in Figure 3 are inconsistent. How were the attributes 'Number' and 'Color' derived?\n\n- The dataset chosen for experiments, SEMANTIC ANOMALY DETECTION, focuses on distinguishing simple concepts. Why not test the method on widely recognized OOD datasets such as ImageNet-1k and OpenOOD? Industrial anomaly detection would benefit from validation on datasets like VisA and Real-IAD as well.\n\n- The comparison methods included are relatively weak. 
Why not compare with more recent OOD detection approaches such as NegLabel [1] and ClipN [2]?\n---\n- \\[1] X. Jiang, F. Liu, Z. Fang, H. Chen, T. Liu, F. Zheng, and B. Han, “Negative label guided OOD detection with pretrained vision-language models,” in The Twelfth International Conference on Learning Representations, 2024.\n- \\[2] Hualiang Wang, Yi Li, Huifeng Yao, and Xiaomeng Li. ClipN for zero-shot OOD detection: Teaching CLIP to say no. ICCV, 2023.\n---\nIf the author can address my concerns, I will consider increasing the score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 8, the header refers to \"bird,\" which is inconsistent with the title of the Colored MNIST dataset mentioned (maybe a typo). Could the authors clarify this discrepancy?\n2. What are the sizes of the training sets for each dataset used in the experiments? Given that these samples serve as candidates for kNN search, how might the number of training samples affect the final performance of the model?\n3. The experimental results on the MVTec AD dataset in Table 3 suggest that InCTRL might outperform WinCLIP+LAFT when considering deviation, especially when the number of shots exceeds 2. Could the authors provide detailed experimental results for each of the five different reference sample sets?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. LAFT bridges a gap in anomaly detection by allowing users to express preferences using natural language, providing more control over what is considered \"normal.\"\n2. Unlike other feature transformation models, LAFT does not require additional training, making it efficient for settings with scarce data.\n3. The experimental results demonstrate that LAFT outperforms state-of-the-art methods, particularly in semantic anomaly detection tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Language-Assisted Feature Transformation (LAFT), a novel framework that leverages vision-language models (like CLIP) to enhance anomaly detection. Traditional anomaly detection methods often struggle to capture user-defined nuances of normality, particularly when attributes are entangled or datasets are incomplete. LAFT tackles this by enabling feature transformations guided by natural language prompts. These prompts align visual features with user intent by projecting image features onto specific concept subspaces within a shared embedding space. The paper also proposes LAFT AD, a k-nearest-neighbor (kNN)-based method combining LAFT with anomaly detection, and extends this work into WinCLIP+LAFT, designed for industrial applications. 
The effectiveness of LAFT is demonstrated across datasets like Colored MNIST, Waterbirds, CelebA, and MVTec AD, showing superior performance in both semantic and industrial anomaly detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While LAFT demonstrates significant improvements in controlled environments, such as the Colored MNIST dataset, its performance gains appear less pronounced when applied to complex real-world datasets. This discrepancy suggests that the model may struggle to maintain robustness across multiple intricate attributes, highlighting the need for further refinement in handling multi-attribute scenarios.\n2. The experimental setup lacks comprehensive comparisons, particularly between language-assisted and vision-assisted approaches. For instance, incorporating image guidance by utilizing related reference normal images (e.g., normal digits in various colors) or color augmentation for the kNN baseline could provide valuable insights. A thorough examination of both language-based and vision-based assistance would strengthen the evaluation of LAFT's efficacy.\n3. The impact of the number of PCA components, which is the sole hyperparameter in LAFT, is not adequately investigated. Given that this parameter influences the model's performance, it is crucial to explore its effect across different datasets. Specifically, an analysis of whether a larger number of components may be beneficial for more complex datasets would provide valuable insights into optimizing the model’s performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please address the points raised in the Weakness section. Also:\n\n1. What is the purpose of including Aux. prompts? \n\n2. How do different CLIP architectures and different VLMs affect performance?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The methodology is interesting and a solid contribution to this direction of research in vision-language modelling for anomaly detection.\n\nThe results appear to be promising in the experiments presented, although a wider range of experimental setups would be more convincing (see Weaknesses).\n\nThe ablation study is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a feature transformation methodology using concept axes, which are the principal components of the difference vectors between text embeddings of prompts specially designed to ignore nuisance attributes or highlight important attributes for anomaly detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Figure 1 is not particularly intuitive or clear, and it is not explained in the text. \n\n2. As the exact formulation of prompts is absolutely critical for this methodology, it should receive a more dedicated explanation in the main text of the paper rather than being relegated almost entirely to the appendix. \n\n3. There are not many baselines, and it would have been more convincing if more baselines were compared with and without LAFT transformations. \n\n4. The range of experiments presented is quite restricted. For example, with Coloured MNIST, it appears that only one number-colour combination was tried as the normal set. It would be more rigorous to conduct multiple experiments with different combinations of attributes and show the average result. The same can be said for the other datasets." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper has a clear motivation.\n2. This paper is well-organized and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Building on existing anomaly detection methods based on vision-language alignment, this paper proposes using task-related language for task-oriented feature screening and transformation to improve the model's anomaly detection capability. Experiments were conducted on multiple datasets and demonstrated better performance compared with existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The criteria for selecting text prompts are ambiguous. Some datasets utilize the category names of the samples, while others employ diverse descriptions. These approaches rest on the critical assumption that anomalies are distinctly defined, exemplified by MNIST, where anomalies arise from differences in numerals rather than variations in handwriting styles or colors. Should the actual anomalies diverge from these presuppositions, might the proposed model's performance diminish relative to methods devoid of textual guidance? In other words, could the model forfeit its capacity to detect all possible anomalies?\n\n2. In the MVTec dataset experiment, the author opted not to employ the concise anomaly descriptions provided by the dataset itself for text prompts, instead relying solely on item categories, mirroring the approach of WinCLIP. What rationale informed this decision?\n\n3. The proposed model is an extension of WinCLIP, yet it appears to forgo the anomaly segmentation functionality inherent to WinCLIP. Is this omission attributable to certain design elements that potentially diminish the model's anomaly localization capabilities?\n\n4. 
Experiments have been conducted on synthetic datasets like MNIST and CelebA by altering the original datasets. While I acknowledge the challenge of selecting appropriate text prompts for real-world datasets such as MVTec, the author should endeavor to incorporate more authentic datasets into their study, such as the VisA dataset utilized in WinCLIP or the medical AD benchmark employed in MVFA [a].\n\n[a] Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images. CVPR 2024." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present Language-Assisted Feature Transformation (LAFT) to guide normality boundaries in image anomaly detection using natural language." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024languageassisted,\ntitle={Language-Assisted Feature Transformation for Anomaly Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2p03KljxE9},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces LAFT, a novel feature transformation method designed to incorporate user knowledge and preferences into anomaly detection using natural language. Accurately modeling the boundary of normality is crucial for distinguishing abnormal data, but this is often challenging due to limited data or the presence of nuisance attributes. While unsupervised methods that rely solely on data without user guidance are common, they may fail to detect anomalies of specific interest. To address this limitation, we propose Language-Assisted Feature Transformation (LAFT), which leverages the shared image-text embedding space of vision-language models to transform visual features according to user-defined requirements. Combined with anomaly detection methods, LAFT effectively aligns visual features with user preferences, allowing anomalies of interest to be detected. Extensive experiments on both toy and real-world datasets validate the effectiveness of our method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "anomaly detection", "feature transformation", "vision-language model", "language guidance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/603120385c6a245c85cd75ff7e558343c6556c26.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a5fff50c521dcc95360f3c415581f7517ff8c882.zip" }, "title": { "value": "Language-Assisted Feature Transformation for Anomaly Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2pEqXce0um
Root Cause Analysis of Failure with Observational Causal Discovery
main
Active
causal discovery;root cause analysis
causal reasoning
3;3;3;5
4;4;4;4
2;3;2;2
2;2;2;2
2;2;2;2
3.5
4
2.25
2
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Consider a causal model with three variables $(X_1, X_2, X_3)$ with edges $X_1 \\to X_2 \\to X_3$ and $X_1 \\to X_3$. Following the definition of possible parent in Definition A.4, $X_2$, which is an actual parent of $X_3$, is not a possible parent of $X_3$. Is this correct?\n\n2. Given that the output of Algorithm 2 is a partially oriented DAG, is the definition of possible parent set the same as in Definition A.4?\n\n3. Should the faithfulness assumptions be defined on the augmented graph?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The organization of the paper is easy to follow, although some technical parts are not clear (see Weakness 3).\n2. The authors provide detailed simulation results to demonstrate the performance of the proposed algorithm. In particular, the authors present multiple variants of the proposed RCG algorithm with different graph inputs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors provide a method for identifying the root cause based on marginal invariance tests. The proposed method includes two steps: first, recovering the skeleton of the underlying causal model using normal (pre-failure) data, and second, constructing an augmented graph using invariance tests to identify the root cause by computing conditional mutual information. The authors demonstrate that, if the underlying causal model is known, the root cause can be identified using $O(\\log(n))$ marginal invariance tests, where $n$ is the number of observed variables. Additionally, given observational data, the root cause can be recovered using $O(n)$ invariance tests according to the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main theoretical results in Sections 4 and 5 are based on the implicit assumption of atomic intervention (i.e., only one variable is affected by the failure mode). This is a very strong assumption, and existing methods such as RCD do not rely on this assumption. For example, in Figure 5, given that $F$ is not independent of $X_2$, it might be the case that both $X_2$ and $X_3$ are directly affected by $F$.\n\n2. Section 5 lacks a theoretical guarantee of the recovery output, which may make the comparison with RCD unfair. It has been shown in RCD that the true root cause can be uniquely identified given infinite data (without knowing the graph structure or the number of root causes), although it may require an exponential number of invariance tests. The authors claim that only $O(n)$ invariance tests are needed in the RCG algorithm. 
However, there is no guarantee of recovery accuracy; that is, it is unclear under what conditions the true root cause is the only variable adjacent to $F$, as stated in Lemma 5.3.\n\n3. Some technical details are either missing or provided in the appendix; including them in the main text would improve the presentation. For example, Lemma 5.3 relies on the possible parent set $PossPa(X)$, which is defined in Definition A.4 without explanation. Further, it appears that not all actual parents are possible parents (see Q1 below), which may lead to incorrect theoretical results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Would you be able to try various other methods besides C-PC for the essential graph search, trying to find the best performers in the literature?\n\nWould you be able to go through the literature more thoroughly and give a more substantive literature review?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "An attempt is made here to give an algorithm for root cause analysis that considers some of the literature. Preliminary results are promising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Using causal analysis, the authors provide and review methods to determine the root cause(s) of failure for up to 100 nodes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper needs more work. There are claims throughout that seem like they're not quite thought through. I will list a few, but generally, the paper needs to focus more on comparing the best available methods in the literature and be rewritten with this in mind. That requires a bit more work than was taken up here.\n\nFor example, the reference to Chickering 2004 as evidence that causal search algorithms are slow is a bit forced since there are implementations of Chickering 2002 that are quite fast (e.g., FGES). The Lam et al. 2022 paper referenced is pretty slow for 100 nodes, but a follow-up paper to this in Neurips 2023 is quite fast for 100 nodes and scales to 1000 nodes with near-perfect accuracy. Also, whether a causal search algorithm is slow largely depends on the model class. For the linear Gaussian or multinomial cases, algorithms can be quite fast, but general distributions can become very slow, as general tests or scores need to be employed. The speed and accuracy also depend on the density of the graph. FGES (above) is very accurate for sparse graphs (sparsity = 1 - avg degree / # nodes, so for 100 nodes, average degree 4 might be considered sparse). But for dense graphs, its accuracy falls off quickly. PC (and derivatives) tend to have decent adjacency precision, but adjacency recall can be low, and orientation accuracy can be low. 
The devil is in the details. So those comments were a little too hand-wavy. For the version of PC you're using, you need to cite accuracy statistics not just for adjacency but also for orientation, as you are making claims about whether ancestral paths exist. This is completely missing from the draft.\n\nAs a general strategy, one should compare one's novel methods to _the best-performing alternative methods in the literature_, not just a few chosen methods one has on hand. As for the methods compared, these don't seem like the best methods that could be devised in the literature, so more work needs to be done to find what those methods might be (or devise them) and compare them. The PC version you're using should be compared to other plausible alternatives, such as the one mentioned above, or to the R package BiDAG, which is also quite good. Again, for timing results, just give the _actual timing results_ for the various methods and let the reader decide. If the C-PC method turns out to be the winner, this should be evident from one's figures.\n\nIn addition, there are more papers on root cause analysis than are given in the literature review; this could well be expanded.\n\nSome minor comments.\n\n1. The definition of Markov given is for DAGs in particular, not for arbitrary graphs. It doesn't even work for what you're calling \"essential graphs.\"\n\n2. There is a little confusion about soft intervention. If you do a \"soft intervention\" on a variable X, X can still have parents. The case where it cannot have parents is where you have a \"hard intervention,\" in which case you replace its distribution with a parent-free distribution of your choice. This is a terminological problem that can be fixed.\n\nThere is a little confusion between the lemmas given on p. 4 and the algorithms later in the paper. On p. 4, you claim that \"The following two lemmas use the fact that there is only one single root cause,\" leading me to think that you are only considering the case where there is a single root cause of failure in the system. It's a strong assumption, but fair enough. But later, in the algorithms, you say you list the top l root causes. I could not discern any transition between these two ideas.\n\nTypos p. 5 \"Algorithm the only\" ==> \"Algorithm that only\"; \"C avid\" ==> \"C to an avid\"\n\nYou say proofs are to be left to the avid reader, but for an ICLR paper, you should supply the proofs of your claims.\n\nIn Algorithm 2, circle endpoints suddenly appear out of nowhere. What are these? Are you dealing with PAGs here?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": { "value": null }, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The work certainly has some novel and great insights. However, I am concerned about the (more high-level) novelty here. The related work that identifies the node with a mechanism shift without assuming a graph naturally needs to run such tests on a potentially exponential number of combinations. 
Their main claim is precisely that they do not need the graph structure, since exploiting a known graph is an obvious way to reduce the required number of independence tests one needs to perform. Running causal discovery on the normal operation period of a system is a logical first step if a method requires a causal graph or, as in your case, to reduce the search space. In that sense, I am concerned about the novelty claim that one needs fewer tests if the graph is given, as this is obvious. I might be missing a crucial part in this, admittedly, over-simplified account of the idea and hope the authors can comment on this.\n\nSome further remarks:\n- The related work focuses on certain types of work in the domain of root cause analysis but lacks discussion about other types of work that utilize a causal graph directly, such as:\n\n\"Identifying patient-specific root causes with the heteroscedastic noise model\" by Strobl et al. \n\"Causal structure-based root cause analysis of outliers\" by Budhathoki et al. \n\"Counterfactual formulation of patient-specific root causes of disease\" by Strobl et al. \n\"Root Cause Analysis of Outliers with Missing Structural Knowledge\" by Okati et al. \n\n- The difference between the complexities mentioned on lines 101 and 103 is not clear, and further clarification would be helpful.\n- In Definition 2.1, the formal definition of the graph is lacking, which you then use later in Assumption 2.3. You could move this to Definition 2.1 already.\n- The notation Z(X, Y ∉ Z) in line 130 is confusing; can you clarify this?\n- As mentioned before, the need to introduce SCMs is unclear as a mechanism shift can also be purely introduced using the Bayesian network formulation.\n- A clear statement of the assumption that there is a single root cause is lacking.\n- The faithfulness assumption is important for causal discovery via CIs, but that connection could be emphasized more clearly.\n- Applying causal discovery on the 'normal operation' period alone implies the assumption that the causal structure has changed in the anomalous regime. While this is a valid assumption, it is also only made implicitly. In a general setting, anomalous data can even be particularly helpful in identifying cause-effect relationships.\n- The notation introduced for having multiple metrics per node over time does not seem to be used afterward. While I am not very familiar with the C-PC algorithm, it is unclear how one would perform causal discovery in such a setting with high-dimensional nodes and temporal dependencies without employing more time-based causal discovery approaches. Does the data you used in the experiments reflect this setting?\n- A drawback of your approach that lacks further discussion is the requirement of a \"sufficiently\" large sample size of the anomalous population. Since you argue that the root cause needs to be identified in a timely manner, this would only work in a system that produces a lot of data. If, for example, the system only produces a metric observation every few minutes, you would not have enough samples. This aspect could be discussed further, as the works mentioned in the first point work on single observations." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ Insightful analysis and fair discussion of causal discovery in a large-scale setting\n+ Good introduction to the problem\n+ Extensive additional information in appendix for certain details" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an approach for identifying the root cause in a system where we observe anomalies. For this, the authors propose to utilize data from the normal operating regime to infer a (partial) causal graph. The graphical information is then used to reduce the number of independence tests to identify the root cause node based on the assumption of a shift in the underlying data generating mechanism. The approach has been evaluated using artificial and real-world data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the related work section has a fair discussion about different works, it also lacks work involving the direct use of graph structure (see Questions section for more details).\n- The proposed method certainly has its novelty, but it seems rather limited as it boils down to the idea of first reconstructing the causal graph using causal discovery and by this, naturally, reducing the search space when running independence tests. The arguments for papers that address the problem without graph knowledge explicitly avoid a causal discovery step.\n- Some definitions (like SCMs) are introduced but then not really needed. A shift in the mechanisms can be defined without this.\n- The formulation of some of the definitions could be improved (e.g., when introducing a causal graph). However, these are minor issues that could be easily fixed in a revision.\n- Some assumptions are not clearly stated and implied. For instance, the assumption that there can only be one root cause.\n\nFor more details, see the Questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The authors claim that RCD performs an exponentially large number of CI tests, but I'm not sure this is correct for Hierarchical Learning in (Ikram et al., 2022).\n\n2. Lemma 4.1 and 4.2 are both based on the fact there is only one single root cause, but it is possible that there are multiple root causes in real-world scenarios, limiting applicability of this work.\n\n3. There is too much space between references in page 14." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation of this work makes sense. I totally agree that RCA is time-sensitive only after the failure occurs. 
We can use the time before a failure to learn the causal graph, which can help us to reduce the number of conditional independence tests during RCA.\n\n2. The authors provide extensive experimental results and detailed discussion. In particular, I very much appreciate Appendix I." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This research links root cause analysis in network failures to Interactive Graph Search (IGS), establishing a mathematical lower bound for the number of tests needed. With a fully known causal graph, the authors then propose an optimal algorithm that achieves this bound. With a partially known causal graph, they can identify the root cause with a linear number of invariance tests." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This work relies heavily on previous works. First and foremost, it borrows the idea of modeling a failure as an intervention and transforming RCA into a problem of finding adjacency of the F-NODE from (Ikram et al., 2022). Also, it directly uses the theoretical results in (Shangqi et al., 2023) and C-PC in (Lee et al., 2024). More specifically, (Ikram et al., 2022) has already linked RCA to causal discovery and most causal discovery techniques used in this paper are also proposed by previous works. In my opinion, the major contribution in causal discovery of this paper lies in Lemmas 4.1, 4.2, and 5.2. Considering that the authors list \"causal discovery\" as the first keyword, I think their contribution in this aspect is limited.\n\n2. The organization of this paper should be improved. The discussion on related works is spread across many sections. The authors can use a dedicated section to introduce existing techniques used in this paper and the detailed differences between this work and previous works, rather than giving too many details in Sec. 1, 4, 5, which makes it harder for readers to grasp their contributions. Besides, I strongly suggest the authors move Appendix I to the main text.\n\n3. Some minor concerns are detailed in Questions.\n\nIf the authors can address my concerns, I would like to raise my score." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We show how to use the causal graph for identifying failures in software systems." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024root,\ntitle={Root Cause Analysis of Failure with Observational Causal Discovery},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2pEqXce0um},\nnote={under review}\n}" }, "abstract": { "value": "Finding the root cause of failures is a prominent problem in many complex networks. Causal inference provides us with tools to address this problem algorithmically to automate this process and solve it efficiently. The existing methods either use a known causal structure to identify the root cause by backtracking the changes, or ignore the causal structure but rely on invariance tests to identify the changing causal mechanisms after the failure. We first establish a connection between root cause analysis and the \\textit{Interactive Graph Search (IGS)} problem. 
This mapping highlights the importance of causal knowledge: we demonstrate that any algorithm relying solely on marginal invariance tests to identify root causes must perform at least $\\Omega(\\log_{2}(n) + d\\log_{1+d}n)$ many tests, where $n$ represents the number of components and $d$ denotes the maximum out-degree of the graph. We then present an optimal algorithm that achieves this bound by reducing the root cause identification problem to an instance of IGS. Moreover, we show that even if the causal graph is partially known in the form of a Markov equivalence class, we can identify the root cause with a linear number of invariance tests. Our experiments on a production-level application demonstrate that, even in the absence of complete causal information, our approach accurately identifies the root cause of failures." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "causal discovery", "root cause analysis" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/09edf32d7432b49ae89cc030f58002444c89aaa9.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Root Cause Analysis of Failure with Observational Causal Discovery" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
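A quick worked instance of the lower bound quoted in the record above may help fix the scaling; the numbers are our own illustrative arithmetic, not taken from the paper. With $n = 1024$ components and maximum out-degree $d = 3$:

```latex
% Illustrative instance of the abstract's lower bound (our arithmetic):
\Omega\left(\log_{2}(n) + d\log_{1+d} n\right)
  = \Omega\left(\log_{2} 1024 + 3\log_{4} 1024\right)
  = \Omega(10 + 3 \cdot 5)
  = \Omega(25)
```

That is, at least on the order of 25 invariance tests are unavoidable for any algorithm relying solely on marginal invariance tests, which is still far fewer than the $n = 1024$ marginal tests a graph-agnostic procedure might spend.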
2pJpFtdVNe
Preference Elicitation for Offline Reinforcement Learning
main
Active
Reinforcement Learning;Offline Reinforcement Learning;Preference-based Reinforcement Learning
reinforcement learning
5;6;6;6;8
3;4;3;2;2
3;3;3;3;3
3;3;3;2;2
4;3;3;3;3
6.2
2.8
3
2.6
3.2
-0.49099
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Based on your theoretical analysis, could you discuss how you expect the performance will change on dataset of varying optimality?\n2. Could you present experiment results on other dataset for the Cheetah environment, such as medium, medium-expert and expert, to support your discussion?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea of using simulated rollouts in preference queries is a natural but unexplored idea in the literature of PbRL. One strength of this paper is that, the authors show the effectiveness in terms of sample complexity both theoretically and empirically." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies preference-based reinforcement learning (PbRL) in offline setting, in which the agent utilizes a fixed trajectory dataset for policy learning and can query humans for preference feedback. In particular, the authors propose to sample preference queries by rolling out trajectory data using learned models of MDPs. The authors provides theoretical guarantees for the sample complexity of their proposed strategy and verify it on simple control tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My concern is about the quality of learned policies. While I agree with the optimality criterion mentioned in 3.2, I think to ensure the practical value of the proposed strategy, it is important to include evaluations for offline dataset of varying optimality. This is because for high-dimensional tasks, under a fixed budget of offline trajectories, the coverage over state-action space and the optimality of the behavior policy, can be conflicting objectives. The state-action space is less covered by good behavior policies, yet this reduced coverage can raise concerns on learned transition model. See detailed question below." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strengths:\n1. This paper provides a good theoretical analysis of preference elicitation with the offline datasets. It bounds the value difference between the optimal policy under the estimated transition model and the true optimal policy. Such bounds are achieved by decomposing the loss from the model estimation and the reward estimation.\n2. Experiments show the proposed methods outperform other algorithms in several environments.\n3. This paper conducted an ablation study to show the importance of pessimistic with respect to the transition model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper uses the offline dataset to learn the environment model. They do not assume they have access to the reward in the offline data set. Such offline datasets contribute to the overall learning by providing an estimation of the transition probability. This paper provides a theoretical analysis of reinforcement learning with offline datasets to achieve preference elicitation. The experiments show their algorithms outperform other algorithms in several environments. They also conducted an ablation test to show the importance of pessimistic with respect to the transition model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n\n1. The experiment environments are relatively simple. The grid world is quite small. It is interesting to try to extend this to more challenging reinforcement learning benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- I do not quite understand “An advantage of sampling from the offline buffer, however, is that it is not sensitive to the quality of the model” in L346. What does “the model” refer to?\n- Should $N_T$ in the second equation in L369 be $N_R$?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper focuses on the preference elicitation problem on offline RL, which attracts wide attention recently from many fields (such as RLHF for LLMs).\n- This paper has theoretical results on the proposed algorithm with some high-level insights (e.g., pessimism for dynamics and optimism for reward modeling).\n- This paper has practical algorithm designs and good empirical results." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Sim-OPRL, an offline preference-based reinforcement learning algorithm that addresses the challenge of acquiring preference feedback in a fully offline setup. It leverages a learned environment model to elicit preference feedback on simulated rollouts, balancing conservatism and exploration. The main idea is to employ a pessimistic estimation for the transition dynamics (based on the offline dataset) for the OOD issue, and use an optimistic estimation for the reward model (based on the preference elicitation data). The benefit of using simulated rollouts is to avoid wasting preference budget on trajectories with low rewards. The authors provide theoretical guarantees on sample complexity and demonstrate the empirical performance of a practical version of Sim-OPRL across various environments, showing its superiority over previous baseline methods (OPRL and PbOP)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Complexity of Implementation:** The algorithm's reliance on learning several accurate dynamics model might be challenging in practice, especially if the model fails to capture the true dynamics. Moreover, Sim-OPRL requires the trajectory rollouts using the dynamics model and the error may accumulate, which poses higher requirements for the dynamics model. Do the authors have any idea on how to design practical algorithms with less computational overhead (e.g., estimating multiple models) and on more complex environments (e.g., when it is hard to learn an accurate dynamics model).\n- **Lack of study on the dependence on quality of offline data and feedback:** The performance of Sim-OPRL may be heavily dependent on the quality and coverage of the offline dataset. For the experiments in on the tasks listed in Table 2, how are the offline datasets are collected? Are they expert datasets (so the concentrability coefficients are small)? How the feedback is generated in the experiments? How would the algorithm perform when we vary the feedback quality?\n- Minor: What is ``\\hat{R}_\\text{inf}``? I can guess it is pessimistic reward, but ``\\hat{R}_\\text{inf}`` and ``\\hat{T}_\\text{inf}`` are not formally defined." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. How does the complexity of the reward function impact the performance of Sim-OPRL? Have you (or do you plan to) test the algorithm with environments that are characterized by more complex, multi-objective, or non-linear reward functions? If the method is agnostic to the reward function (aside from sparsity) it would help to show that as well.\n2. Can you provide more details on the sensitivity of Sim-OPRL to its hyperparameters, such as the pessimism and optimism parameters? How do you recommend tuning these parameters in practice? 
It may be insightful to include ablation testing in the appendix that demonstrates the sensitivity (or robustness) to hyperparameter selection, especially as this could drastically affect the real-world viability of the algorithm. \n3. Are there any other algorithms that would serve as an effective and informative baseline for Sim-OPRL? If not, would it be possible to run experiments that demonstrate the learning performance of naive methods?\n4. Could you please clarify the rationale behind limiting the experiments to the selected datasets and environments? Are there specific challenges that restrict the application of the method to a broader range of environments and dataset combinations? If there are no such constraints, additional experimental results would be valuable. Conversely, if limitations do exist, it would be beneficial to outline what they are, the reasons for their existence, and why they do not compromise the method's overall effectiveness and practical utility.\n5. Generally speaking, could the authors please explain the motivations for the setting further? Specifically, would it be practical to compare the results of Sim-OPRL to running standard offline RL algorithms (CQL, IQL, TD3_BC etc.) on the offline dataset directly? If not, why not?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors provide strong theoretical guarantees on the sample complexity of their approach, ensuring that the algorithm is both efficient and reliable. Additionally, the empirical results across various environments demonstrate the practical effectiveness and scalability of Sim-OPRL, showing it outperforms existing methods in terms of sample efficiency and policy performance.\n2. Sim-OPRL incorporates a pessimistic approach to handle out-of-distribution data, ensuring robustness to model uncertainty. This is particularly important in offline settings where the data may not cover the entire state-action space. Being robust to OOD data makes the algorithm far more applicable to ‘real-world’ problems/settings.\n3. The paper makes a compelling case due to its incorporation of theoretical and empirical evidence. To back up their theoretical insights, the authors conduct extensive experiments across two different environments. This empirical evidence confirms the practical applicability and robustness of Sim-OPRL, illustrating its effectiveness in scenarios where direct environment interaction is not feasible.\n4. The attached code is well-written and largely self-documenting, with a clear and logical internal structure. This design not only facilitates ease of use for other users looking to implement the Sim-OPRL algorithm but also made the process of reviewing the practical implementation and validating the experiments straightforward and efficient. This made the review process much easier." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of applying RL to real-world scenarios where direct interaction with the environment is impractical or unsafe. Traditional learning methods require environment interactions, which can be risky in certain fields (like healthcare applications). The paper proposes Sim-OPRL, an offline PbRL algorithm that learns from preferences without needing online interaction. 
This algorithm uses a learned environment model to simulate rollouts and gather preference feedback, balancing pessimism for out-of-distribution data and optimism for acquiring informative preferences. The paper formalizes the problem of preference elicitation in offline RL, proposes a novel algorithm, and provides theoretical guarantees on its sample complexity.\n\nThe paper also demonstrates the effectiveness of Sim-OPRL through empirical validation in various environments, including a gridworld and a sepsis simulation. The results show that Sim-OPRL outperforms an existing baseline algorithm (OPRL) in terms of sample efficiency and policy performance. The paper shows that by leveraging simulated rollouts, the algorithm efficiently learns the optimal policy while minimizing the number of human queries required." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper’s empirical section does not consider enough baseline algorithms for comparison. The only algorithm that the authors use as a baseline is OPRL. This severely limits the ability to fully assess the relative performance and advantages of Sim-OPRL. To rectify this, the authors should consider including a wider array of offline PbRL algorithms/frameworks in their experiments.\n2. The paper demonstrates promising results in the demonstrated environments, but it lacks validation in more complex and realistic settings. To strengthen the evidence of the algorithm’s practical applicability, the authors should evaluate Sim-OPRL on several different datasets. One example could be MuJoCo-style datasets. Other relevant papers in the field construct preference datasets from the D4RL offline benchmark. These datasets provide a more challenging and ‘closer to real world’ testbed. Evaluation on such environments (in conjunction with adding more baseline algorithms) could result in a better assessment of the algorithm’s robustness, scalability, and generalizability.\n3. The paper demonstrates the algorithm’s performance in relatively small-scale environments. Empirically, it does not seem to address scalability to larger, more complex environments. Due to the smaller-scale test environments (Gridworld & Sepsis), the actual scalability of the algorithm (particularly in real-world deployments outside of benchmarks) remains unclear. \n4. As the authors state, for the sepsis environment, the requisite number of preference samples is rather large, due to the sparse reward function. This seems like an inherent limitation, which they posit could be solved by warm-starting the reward model. It would be interesting to see this data and how it affects performance. If a sparse reward function is a true limitation of the Sim-OPRL method, the authors should show more experiments demonstrating that this can be 'worked around' by performing warm starts. This could also help to further justify the real-world applicability of the algorithm." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "/" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "/" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. They delve into very interesting setups: offline RL with preference feedback.\n\n2. Their theoretical results are solid and show he advantage of their proposed preference elicitation algorithm over prior methods.\n\n3. They propose a practical algorithm for implementation and extensive experiments show that their method outperform prior methods in several environment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper delves into offline reinforcement learning from preference feedback and proposes an offline preference elicitation method to simulate trajectories from the learned environment model instead of sampling trajectories directly from the offline dataset. They provide theoretical justification for the previous RL with preference feedback method and show that their proposed method can effectively reduce the sample complexity upper bound. They also propose an empirical algorithm and show it can outperform prior methods and achieve SOTA on offline PbRL setups without access to the ground truth rewarded. They finally iid ablation studies to show the importance of incorporating the principle of pessimism." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not see any big issues." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We explore efficient methods for acquiring preference feedback for RL in a fully offline setup." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024preference,\ntitle={Preference Elicitation for Offline Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2pJpFtdVNe},\nnote={under review}\n}" }, "abstract": { "value": "Applying reinforcement learning (RL) to real-world problems is often made challenging by the inability to interact with the environment and the difficulty of designing reward functions. Offline RL addresses the first challenge by considering access to an offline dataset of environment interactions labeled by the reward function. In contrast, Preference-based RL does not assume access to the reward function and learns it from preferences, but typically requires an online interaction with the environment. We bridge the gap between these frameworks by exploring efficient methods for acquiring preference feedback in a fully offline setup. We propose Sim-OPRL, an offline preference-based reinforcement learning algorithm, which leverages a learned environment model to elicit preference feedback on simulated rollouts. Drawing on insights from both the offline RL and the preference-based RL literature, our algorithm employs a pessimistic approach for out-of-distribution data, and an optimistic approach for acquiring informative preferences about the optimal policy. We provide theoretical guarantees regarding the sample complexity of our approach, dependent on how well the offline data covers the optimal policy. 
Finally, we demonstrate the empirical performance of Sim-OPRL in various environments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Offline Reinforcement Learning", "Preference-based Reinforcement Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6348e617abb8ef086fb1720d532fbe0628cf43fb.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/58a28c1b452bafa3ed24c5a1c06f22e50f9d6e5d.zip" }, "title": { "value": "Preference Elicitation for Offline Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
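The Sim-OPRL record above repeatedly refers to learning a reward model from trajectory preference feedback. As a minimal sketch of the Bradley-Terry preference objective that is standard in preference-based RL (an illustration of the general technique under our own simplifying assumptions, not the paper's implementation; `reward_fn` and the summed-return comparison are hypothetical):

```python
import numpy as np

def bradley_terry_nll(reward_fn, traj_a, traj_b, pref):
    """Negative log-likelihood of one preference label under the
    Bradley-Terry model, P(a preferred over b) = sigmoid(R(a) - R(b)),
    where R(.) is the estimated return of a trajectory."""
    r_a = sum(reward_fn(s, act) for s, act in traj_a)  # estimated return of trajectory a
    r_b = sum(reward_fn(s, act) for s, act in traj_b)  # estimated return of trajectory b
    p_a = 1.0 / (1.0 + np.exp(-(r_a - r_b)))           # preference probability for a
    eps = 1e-12                                        # avoid log(0)
    return -(pref * np.log(p_a + eps) + (1.0 - pref) * np.log(1.0 - p_a + eps))
```

Minimizing this loss over elicited preference pairs fits the reward model; whether those pairs are drawn from the offline buffer or from model-based simulated rollouts is precisely the design choice the reviews above debate.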
2pNLknCTvG
uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs
main
Active
Heavy Tailed;Multi-Armed Bandits;Parameter-Free;Best-of-Both-Worlds
learning theory
6;6;6;6
4;4;4;4
3;3;3;3
3;3;3;3
2;3;3;3
6
4
3
3
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "When $\\alpha, \\sigma$ known, it is trivial to use regular FTRL based algorithm to achieves (nearly) optimal worst-case regret for adversarial bandit problems with potentially heavy-tailed losses (fix the clipping bound $[-r,r]$ with $r=\\sigma T^{1/\\alpha}K^{-1/\\alpha}$ and use Theorem 4 in [6]). When $\\alpha, \\sigma$ are unknown, intuitively, it suffices to use the adaptive clipping bound according to the empirical estimation of $\\alpha, \\sigma$ (Line 6 of ALG 1). Is the high-level idea of the algorithm in this paper the one I described?\n\n\nReferences: \n[1] Putta, Sudeep Raja, and Shipra Agrawal. \"Scale-free adversarial multi armed bandits.\" International Conference on Algorithmic Learning Theory. PMLR, 2022.\n\n[2] Chen, Mingyu, and Xuezhou Zhang. \"Scale-free Adversarial Reinforcement Learning.\" arXiv preprint arXiv:2403.00930 (2024).\n\n[3] Chen, Mingyu, and Xuezhou Zhang. \"Improved Algorithms for Adversarial Bandits with Unbounded Losses.\" arXiv preprint arXiv:2310.01756 (2023).\n\n[4] Huang, Jiatai, Yan Dai, and Longbo Huang. \"Banker online mirror descent: A universal approach for delayed online bandit learning.\" International Conference on Machine Learning. PMLR, 2023.\n\n[5] Hadiji, Hédi, and Gilles Stoltz. \"Adaptation to the range in k-armed bandits.\" Journal of Machine Learning Research 24.13 (2023): 1-33.\n\n[6] Wei, Chen-Yu, and Haipeng Luo. \"More adaptive algorithms for adversarial bandits.\" Conference On Learning Theory. PMLR, 2018." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well written. The theoretical results and proof appear to be correct.\n2. The paper achieves worst-case BoBW optimal regret for HTMAB, which improves previous results proposed in [Huang 2022]." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies Heavy-Tailed MultiArmed Bandits (HTMAB) problem. The main contribution of the paper is to design an optimal algorithm that achieves both Best of-Both-Worlds (BoBW) and Parameter-free properties for HTMAB, where BoBW means that the algorithm performs optimally in both stochastic and adversarial environments and Parameter-free means that the algorithm do not need to know the the heavy-tail parameters in advance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper should include the comparisons with previous scale-free MAB works, e.g. [1-5]. Specifically, the algorithm structure proposed in the paper seems very close to the one proposed in [3], which also uses the clipping/skipping technique and inf regularization. The differences should be further clarified.\n2. Assumption 1 is a bit weird. 
I can understand why it is unavoidable, but I suggest that the authors give the best (not worst-case optimal) upper bounds we can get without this assumption." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do applications justify the adversarial HTMAB? Can you please provide some examples?\n\n- I think it would be interesting to highlight the trade-off between ADAR-UCB and UniInf more. How is the best-of-both-worlds property related to the extra factor in the stochastic setting's regret bound? Does your algorithm require extra round-robin turns (as in Adar-UCB)?\n\n- Do you know what the optimal performance would be without the truncated non-negativity assumption? Are there any known lower bounds for the problem without this assumption?\n\n- It would be interesting to understand if alternative (and possibly weaker) assumptions can lead to the same performance (I would like to point out that Theorems 2 and 3 from Genalti et al. don't necessarily imply that this specific assumption is required, but rather that without any assumptions such performance is unattainable)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well written, and it provides an exhaustive review of the existing literature.\n\nThe contribution is clear, and it is well highlighted which open question the paper addresses.\n\nThe paper also presents some nice technical contributions in the algorithm and in the proofs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the heavy-tailed MAB problem, a variant of the stochastic bandit problem where rewards are sampled from distributions having potentially infinite variance. The main contribution of this work is to provide an algorithm with tight regret guarantees in both the stochastic and adversarial HTMAB problems. While the performance in the stochastic setting is worse (not in terms of T) than existing algorithms (e.g. AdarUCB), the algorithm simultaneously deals with the two settings and is tight in both the instance-dependent and independent sense." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall, the contribution is limited when it comes to applications since adversarial HTMABs are uncommon in the real world and the literature. 
Meanwhile, in the purely stochastic setting, the algorithm does slightly worse than AdaRUCB by a factor of $\log \frac{\sigma^\alpha}{\Delta_{\min}}$ (Genalti et al.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. If $(\sigma, \alpha)$ is known a priori, can we eliminate Assumption 1? What are the main technical difficulties when eliminating Assumption 1 in the case of known $(\sigma, \alpha)$?\n2. In the previous work [10], the Tsallis entropy regularizer is used while the log-barrier regularizer is used in this work. Is it because the magnitude of the loss estimates in this work is larger than the magnitude of the loss estimates in [10]?\n\n[10] Huang et al. Adaptive Best-of-Both-Worlds Algorithm for Heavy-Tailed Multi-Armed Bandits. ICML, 22." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Technical innovations**: The proposed algorithm and its analysis incorporate new ingredients including a new skipping and clipping scheme of the loss estimates and a stopping time argument to bound the stability terms and the skipping errors, which seem to be technically valuable and might be of independent interest.\n2. **Writing**: Generally, this work is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work establishes the first parameter-free algorithm for heavy-tailed multi-armed bandits (MABs) with best-of-both-worlds (BOBW) properties. This algorithm does not require prior knowledge of heavy-tail parameters $(\sigma, \alpha)$ and simultaneously obtains the (nearly) optimal regret in both the stochastic and adversarial environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **More comparisons with existing literature**: Most parts of the presentation in this work are clear. However, I would like to suggest the authors provide more discussions and comparisons with the techniques in existing literature. For instance, the exclusion of the optimal arm $i^\ast$ in Eq. (5) when using the log-barrier regularizer is also achieved by [1,2]. I am not very sure whether there are additional technical nuances between the exclusion of the optimal arm in Eq. (5) of this work and those in [1,2]. For the data-dependent learning rates, several works have also leveraged them to achieve BOBW results in various online learning problems (say, [2,3,4,5,6,7]). Besides, when bounding the stability term of OMD/FTRL, a key property required is to ensure the multiplicative stability of the update of the prediction. In this work, such a property is guaranteed by Lemma 4. 
However, it does not seem appropriate to call such a lemma “novel” as on Line 423, since it has also appeared in previous works when using the log-barrier regularizer (say, Lemma 9 in [8]; Lemma 12 in [9]).\n\n[1] Ito. Parameter-Free Multi-Armed Bandit Algorithms with Hybrid Data-Dependent Regret Bounds. COLT, 21.\n\n[2] Ito. Hybrid Regret Bounds for Combinatorial Semi-Bandits and Adversarial Linear Bandits. NeurIPS, 21.\n\n[3] Ito et al. Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs. NeurIPS, 22.\n\n[4] Tsuchiya et al. Best-of-Both-Worlds Algorithms for Partial Monitoring. ALT, 23.\n\n[5] Ito et al. Best-of-Three-Worlds Linear Bandit Algorithm with Variance-Adaptive Regret Bounds. COLT, 23.\n\n[6] Kong et al. Best-of-three-worlds analysis for linear bandits with follow-the-regularized-leader algorithm. COLT, 23.\n\n[7] Ito et al. Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds. COLT, 24.\n\n[8] Lee et al. A closer look at small-loss bounds for bandits with graph feedback. COLT, 20.\n\n[9] Jin et al. Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition. NeurIPS, 20." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. To my understanding, this paper claims to develop a new analysis for $DIV_t$ (also called the stability term) when using log-barrier, and introduces an extra $(1-x_{t,i})^2$ factor in the bound, which is the key to getting the self-bounding property and BOBW. However, I don't quite understand what it means by “$S_t$ is adequately large compared to $||c_t||_{\infty}$”. I tried to find a formal lemma or theorem statement for this new bound (with exactly the same form) on the $DIV_t$ term, but I failed. Could the authors help explain this? Under what conditions does this new bound hold?\n\n2. What is the difficulty in getting the refined gap dependency? From the appendix, I feel that in both the DIV and SHIFT terms, we cannot achieve that. Could the authors elaborate more on that (from the analysis perspective)? Is it because the regularizer is log-barrier rather than Tsallis entropy?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The parameter-free BOBW bound in HTMAB is a quite strong guarantee, and to achieve this, several technical innovations are proposed." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies parameter-free best-of-both-worlds (BOBW) for HT-MABs, where 1) HT means that the loss distributions can be unbounded but have $\\\\alpha$-th moment bounded by $\\\\sigma^{\\\\alpha}$, for some $\\\\sigma>0, \\\\alpha\\in(1,2]$; 2) BOBW means that one single algorithm can enjoy logarithmic gap-dependent regret in the stochastic environment (loss distributions are fixed over time) and worst-case optimal regret in adversarial environment (loss distributions change over time), without knowing in advance whether the environment is sto. or not; 3) parameter-free means that the algorithm doesn’t now the value of $\\\\sigma>0, \\\\alpha\\in(1,2]$, but can ensure the regret guarantee as if they were known.\n\nAn algorithm called uniINF is proposed, which ensures $\\\\tilde{O}(\\\\frac{K}{(\\\\Delta_{\\\\text{min}})^{\\\\frac{1}{\\\\alpha-1}}}) $ (expected pseudo-)regret in sto. env. (which is optimal up to log terms and the gap dependency), and near-optimal regret in adv. env. (which is optimal up to log terms) when the loss distributions of the optimal arm satisfy the truncated non-negative assumption (Assumption 1). This is the first parameter-free BOBW result in HTMAB. Previous results approach that in one single env. only (either sto. or adv.).\n\nTechnically, this is achieved by several components, including 1) iterative and adaptive learning rate scheduling; 2) adaptive clipping/skipping; 3) refined analysis for log-barrier regularizer." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I’m not happy with how Assumption 1 is justified. From Line 193 to 195, it says that \"we make the following **essential** assumption. As shown in (Genalti et al., 2024, Theorems 2 & 3), without Assumption 1, there does not exist HTMAB algorithms that can … without knowing either $\\\\alpha$ or $\\\\sigma$.\" However, this statement could be misleading based on my understanding on (Genalti et al., 2024).\n\nThe negative result shown in (Genalti et al., 2024) is that, it’s impossible for one single algorithm to match the lower bound in (Bubeck et al., 2013) for all unknown $\\\\sigma>0$ or $\\\\alpha\\\\in(1,2]$. However, I don’t think it has been characterized that how weak the needed assumption to be \"parameter-free\" in HTMAB. In fact, in the conclusion part of (Genalti et al., 2024), it even says that \"investigating the role of the truncated non-positivity assumption, especially, whether weaker assumptions can be formulated.\"\n\nTherefore, I would urge the authors to refine the statements related to Assumption 1, as currently it may leave the impression that Assumption 1 is a necessary condition for \"parameter-free\", which as of now it’s still unclear yet." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024uniinf,\ntitle={uni{INF}: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed {MAB}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2pNLknCTvG},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we present a novel algorithm, `uniINF`, for the Heavy-Tailed Multi-Armed Bandits (HTMAB) problem, demonstrating robustness and adaptability in both stochastic and adversarial environments. 
Unlike the stochastic MAB setting where loss distributions are stationary over time, our study extends to the adversarial setup, where losses are generated from heavy-tailed distributions that depend on both arms and time. Our novel algorithm `uniINF` enjoys the so-called Best-of-Both-Worlds (BoBW) property, performing optimally in both stochastic and adversarial environments *without* knowing the exact environment type. Moreover, our algorithm also possesses a Parameter-Free feature, *i.e.*, it operates *without* needing to know the heavy-tail parameters $(\sigma, \alpha)$ a priori.\nTo be precise, `uniINF` ensures nearly-optimal regret in both stochastic and adversarial environments, matching the corresponding lower bounds when $(\sigma, \alpha)$ is known (up to logarithmic factors). To our knowledge, `uniINF` is the first parameter-free algorithm to achieve the BoBW property for the heavy-tailed MAB problem. Technically, we develop innovative techniques to achieve BoBW guarantees for Parameter-Free HTMABs, including a refined analysis for the dynamics of log-barrier, an auto-balancing learning rate scheduling scheme, an adaptive skipping-clipping loss tuning technique, and a stopping-time analysis for logarithmic regret." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Heavy Tailed", "Multi-Armed Bandits", "Parameter-Free", "Best-of-Both-Worlds" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b8cc96ff5029e237fa4ac379dd8901f4e014b4b6.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/656ea2632bddac6eff4ca9be7911806fbcf428bc.pdf" }, "title": { "value": "uniINF: Best-of-Both-Worlds Algorithm for Parameter-Free Heavy-Tailed MABs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
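One reviewer in the uniINF record above sketches the known-parameter baseline: clip losses to $[-r, r]$ with $r = \sigma T^{1/\alpha} K^{-1/\alpha}$, and, when $(\sigma, \alpha)$ is unknown, set the bound from empirical estimates instead. A minimal sketch of that high-level idea follows; the helper functions are our own hypothetical stand-ins, not the paper's Algorithm 1 or its estimator:

```python
import numpy as np

def fixed_clip_bound(sigma, alpha, T, K):
    """Clipping radius r = sigma * T**(1/alpha) * K**(-1/alpha) for the
    case where the heavy-tail parameters (sigma, alpha) are known."""
    return sigma * T ** (1.0 / alpha) * K ** (-1.0 / alpha)

def empirical_sigma(observed_losses, alpha):
    """Crude plug-in estimate of sigma from the empirical alpha-th moment,
    using the constraint E|X|^alpha <= sigma^alpha (hypothetical helper)."""
    losses = np.abs(np.asarray(observed_losses, dtype=float))
    return float(np.mean(losses ** alpha)) ** (1.0 / alpha)

def clip_loss(loss, r):
    """Clip a single observed loss to the interval [-r, r]."""
    return float(np.clip(loss, -r, r))
```

The parameter-free difficulty the reviews highlight is exactly that `empirical_sigma` itself requires a candidate `alpha`, so a genuinely parameter-free algorithm must adapt both quantities online rather than plugging in either one.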
2prShxdLkX
MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos
main
Active
3D Gaussian Splatting;Dynamic Novel-view Synthesis;Neural Rendering
applications to computer vision, audio, language, and other modalities
3;5;8;8
4;4;3;5
2;2;4;4
2;2;4;4
3;3;3;4
6
4
3
3
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "From the perspective of a peer, I suggest the authors address the concept of 'Depth supervised in dynamic GS' in the title. After all, a novel method would be more informative and important to the other researchers than a usage scenario like ''casually captured video\"." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "For the novelty, this paper makes a distinct contribution to introducing depth supervision into the domain of dynamic Gaussian Splatting (DGS) for monocular dynamic input. This approach is novel yet intuitive, filling a key gap in the field for cases where the input consists of casually captured videos with minimal camera movement. Compared to the other papers in the field that mechanically put all fancy complicated input feature streams or loss functions together, the proposed solution is conceptually straightforward but impactful, pushing forward the capabilities of monocular dynamic scene reconstruction.\n\nThe experiments are well-designed and executed, rigorously testing the proposed method across various datasets, including Nvidia, DyNeRF, and DAVIS. Each experiment logically supports the methodology, demonstrating how the 3D-aware initialization and ordinal depth loss contribute to enhanced depth consistency and scene deformation modeling. The results clearly show MoDGS’s robustness and superiority over baseline methods, adding confidence in its effectiveness.\n\nThe paper is presented with clarity and precision, making even technical aspects of the method easy to follow. The figures and tables are well-constructed and informative, providing visual clarity to support the text and helping to reinforce the main findings. The logical flow, from problem statement to method explanation and results, enables readers to understand the method's motivation and benefits seamlessly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces MoDGS (Monocular Dynamic Gaussian Splatting), a novel approach for rendering dynamic 3D scenes from casually captured monocular videos, overcoming limitations faced by prior dynamic NeRF and Gaussian Splatting methods via depth estimation. These existing approaches require either extensive camera movement or synchronized multi-view setups to establish multiview consistency, which is lacking in casual, minimally moving videos.\n\nTo tackle this challenge, MoDGS incorporates recent advancements in single-view depth estimation to guide the learning of a deformation field that represents scene dynamics. 
The method introduces a novel ordinal depth loss to address the depth inconsistency in single-view depth maps, enhancing the robustness and continuity of 3D scene reconstruction.\n\nComprehensive experiments across multiple datasets (Nvidia, DyNeRF, DAVIS, and a self-collected casual video dataset) demonstrate that MoDGS produces high-quality novel views in dynamic scenes, outperforming state-of-the-art methods. The authors also plan to release their code and dataset to support future research in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "MoDGS is validated across several datasets, which demonstrates its robustness. However, the paper could discuss the potential limitations in generalizing this approach to different depth estimation models. Such a discussion would demonstrate the robustness of the proposed method and its generalizability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "A version of this paper is available on arxiv https://arxiv.org/pdf/2406.00434, and I had viewed a tweet earlier in the summer with the same title, paper, code: https://x.com/zhenjun_zhao/status/1798281777242632700. This may violate the double-blind review that is required, so I would like that to be known." }, "flag_for_ethics_review": { "value": [ "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you elaborate on the choice of ordinal depth loss over other depth loss functions, such as perceptual depth consistency? How did the ordinal depth loss compare to other depth loss formulations in preliminary experiments, and what were the observed advantages or disadvantages?\n2. How robust is MoDGS in scenarios with heavy occlusions or specular reflections? Would integrating additional priors or multi-scale depth estimations help in such cases?\n3. How does MoDGS compare with recent depth consistency techniques, particularly those used in self-supervised monocular depth estimation? Exploring this comparison could shed light on the effectiveness of the ordinal depth loss relative to existing methods." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "MoDGS represents an original approach within novel view synthesis and dynamic scene modeling by specifically addressing the limitations of existing methods for casually captured monocular videos. The authors introduce a 3D-aware initialization mechanism and an ordinal depth loss, which together offer a solution that successfully reduces the dependency on rapid camera motion. The novel use of ordinal depth loss to maintain depth order among frames, rather than relying solely on absolute values, represents an innovative perspective on addressing depth consistency issues, which has practical implications for improving depth coherence in dynamic scenes captured casually. 
I believe the paper is well-executed in terms of technical rigor, with comprehensive evaluations across three datasets: DyNeRF, Nvidia, and a newly created monocular casual video dataset. Each component of MoDGS is thoroughly tested and ablated to demonstrate its impact on the final results. This systematic experimentation supports the authors’ claim that MoDGS significantly outperforms other approaches in the quality of novel-view rendering for dynamic scenes. The paper is structured logically, with clear explanations of each component of the MoDGS pipeline. The figures visually support the textual explanations, making complex concepts more understandable to the reader. The method has significant implications for real-world applications that involve casually captured videos, such as mobile AR/VR, video editing, and 3D content creation. By enabling high-quality novel view synthesis from single-camera footage without multiview camera motion, MoDGS broadens the scope of dynamic scene reconstruction, making it accessible to a wider range of use cases. The method’s ability to handle both static and dynamic elements in monocular videos opens new avenues for monocular depth estimation and dynamic scene modeling, where single-camera approaches have been historically constrained by depth inconsistency issues." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes MoDGS, a novel pipeline to render high-quality novel views of dynamic scenes from casually captured monocular videos. Unlike traditional dynamic scene reconstruction methods that rely on rapid camera motions to establish multiview consistency, MoDGS is designed for videos with static/slowly moving cameras, where such consistency is weaker. The core of their method involves using a single-view depth estimation technique to guide scene learning and introducing a 3D-aware initialization alongside an ordinal depth loss. MoDGS incorporates an innovative ordinal depth loss to address the challenge of depth inconsistency across frames, enhancing the coherence and quality of rendered views. Experiments on datasets such as DyNeRF, Nvidia, and a self-collected dataset demonstrate its ability to outperform SOTA methods in novel view synthesis, achieving superior image quality even in challenging dynamic scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the ordinal depth loss is a novel way to improve depth coherence, I believe the paper may benefit from more discussion on its limitations. Specifically, the ordinal depth loss assumes a consistent depth order among frames, which may not hold in scenes with complex occlusions or reflections. MoDGS assumes smooth transitions between frames for consistent depth ordering. However, the approach may face challenges in scenes with rapid or erratic movement where objects appear and disappear frequently. While it performs well on scenes with relatively smooth dynamics, addressing how the method might be adapted or optimized for highly dynamic environments would improve its versatility. The method relies heavily on single-view depth estimators to guide the reconstruction process. Although the depth estimation technique used is SOTA, it still inherits the limitations of single-view estimators, particularly in complex scenes with specular surfaces or low-lit conditions. 
Including a more detailed analysis of how the quality of the depth estimator impacts the proposed method’s performance, and exploring integration with other depth supervision methods, could make the approach more adaptable across varying input qualities." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Kindly refer to the [Weaknesses]." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* A differentiable order-based loss function, the ordinal depth loss, is proposed, with detailed descriptions of its motivation and its distinctions from other depth loss functions.\n* It demonstrates significant superiority over multi-view camera methods in reconstruction metrics and visual results, with ablation studies validating the importance of the \"3D-aware initialization scheme\" and \"ordinal depth loss.\"\n* The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents MoDGS, a novel pipeline for synthesizing dynamic scenes from casually captured monocular videos. Unlike existing methods requiring large camera motions, MoDGS leverages single-view depth estimation for 3D reconstruction and introduces a 3D-aware initialization alongside an ordinal depth loss. These innovations enable robust, high-quality novel view synthesis, outperforming state-of-the-art methods in rendering casually captured videos." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **The contributions and innovations are limited**. This work is based on the previous canonical space paradigm of 3D Gaussian Splatting (3DGS) combined with deformation fields, with the main contributions being a deformable 3DGS initialization method and a depth loss. The primary principle of the former relies on predicting per-pixel 3D flow using current state-of-the-art monocular depth estimation and optical flow estimation methods. However, the sole innovative aspect lies in converting 2D optical flow to 3D flow using the estimated depth map. As for the depth loss, although it is well-motivated and provides performance improvement, it essentially replaces the Pearson correlation loss with an order correlation loss.\n* **The experimental comparisons lack fairness**. In most quantitative comparisons, this work is only compared against methods that require multi-view camera input. 
It is recommended to include quantitative and qualitative comparison results with methods under the same setting of \"casually captured monocular video.\" It is also perplexing that the authors mention \"RoDynRF, a method that adopts single-view depth estimation as supervision\" in \"Baseline methods\", yet I only found comparative results for this method in Fig. 6." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does MoDGS handle scenarios where the pre-trained depth estimator provides inconsistent depth due to environmental variations? Has any analysis been conducted to measure performance stability when GeoWizard or other models are less reliable? \n\n\n2. Would MoDGS perform as well on datasets with higher motion complexity or less predictable scene geometry? Testing on a broader range of datasets, such as those with cluttered backgrounds or multiple moving objects, would better validate the method's generalization. \n\n3. Considering MoDGS’s reliance on single-view depth priors, would a formalized knowledge distillation framework improve model autonomy by adapting these priors dynamically during training?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. I think the 3D-aware initialization process is a strong point, as it specifically addresses a common issue in monocular reconstruction. By initializing Gaussians instead of relying on random initialization, this method seems to potentially add more consistency. \n\n2. The ordinal depth loss is, in my view, an interesting idea. It tries to tackle scale ambiguity in monocular depth estimation, which I think is particularly relevant in dynamic scenes. This loss formulation promotes depth consistency across frames, an essential factor when handling complex, moving scenes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces MoDGS, a method for dynamic view synthesis from monocular videos, leveraging a Gaussian-based splatting technique combined with deformation fields and an ordinal depth loss to reconstruct scenes. This framework integrates a 3D-aware initialization to align Gaussian representations in a canonical space, while the ordinal depth loss is used to improve scene geometry continuity. MoDGS is claimed to improve over previous dynamic NeRF approaches and related deformation methods, with results evaluated on the DyNeRF and Nvidia datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
I think the innovation is quite incremental: compared to closely related works like Deformable 3DGS and 4DGS, the method mainly optimizes existing elements (depth consistency and deformation) rather than proposing a new structural approach.\n\n\n2. Besides, the approach relies heavily on pre-trained depth models. MoDGS relies on single-view depth estimators like GeoWizard for depth initialization, which brings into question the independence of its results. The approach leverages external models as priors, potentially limiting its novelty and raising questions regarding knowledge distillation. The extent to which these pre-trained models influence the final performance is not rigorously analyzed. \n \n\n3. While MoDGS integrates external depth estimation for initialization, there is no formalized knowledge distillation to adaptively refine the model during training. This absence may reduce the adaptability of MoDGS across different dynamic scenes where pre-trained depth estimators may not perform equally well." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024modgs,\ntitle={Mo{DGS}: Dynamic Gaussian Splatting from Casually-captured Monocular Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2prShxdLkX},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we propose MoDGS, a new pipeline to render novel-view images in dynamic scenes using only casually captured monocular videos. Previous monocular dynamic NeRF or Gaussian Splatting methods strongly rely on the rapid movement of input cameras to construct multiview consistency but fail to reconstruct dynamic scenes on casually captured input videos whose cameras are static or move slowly. To address this challenging task, MoDGS adopts recent single-view depth estimation methods to guide the learning of the dynamic scene. Then, a novel 3D-aware initialization method is proposed to learn a reasonable deformation field, and a new robust depth loss is proposed to guide the learning of dynamic scene geometry. Comprehensive experiments demonstrate that MoDGS is able to render high-quality novel view images of dynamic scenes from just a casually captured monocular video, outperforming baseline methods by a significant margin." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Gaussian Splatting", "Dynamic Novel-view Synthesis", "Neural Rendering" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d6d3f312042a0e87b6a180ad4c5d66669ae79bbc.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/5a75f66168ff65f33d2bb5408418c355a24e2e5f.zip" }, "title": { "value": "MoDGS: Dynamic Gaussian Splatting from Casually-captured Monocular Videos" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2pvMZKGYDR
Extend Model Merging from Fine-Tuned to Pre-Trained Large Language Models via Weight Disentanglement
main
Active
Model Merging;Large Language Models
foundation or frontier models, including LLMs
5;5;6
3;4;2
3;2;3
2;3;3
2;3;4
5.333333
3
2.666667
2.666667
3
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Equation 1: The shape of mD is equal to that of W. This equation should be corrected.\n2. Equation 7: Could you provide a reason for not averaging all the differences by multiplying by 1/N?\n3. Have the authors considered an alternative approach that compares each weight matrix on a column-by-column basis between the tuned model and the original backbone? Specifically, this approach would involve calculating and ranking differences column by column, rather than disentangling weights into separate magnitude and direction components.\n4. How to grid search the hyperparameters for baselines methods? What validation dataset is used?\n5. The paper should provide a figure to visually illustrate the proposed method." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper makes a valuable contribution by identifying a critical limitation in existing model merging methods: their ineffectiveness when applied to continually pre-trained (PT) models. This insight is essential, as it highlights a gap in current merging techniques, which are generally only effective for fine-tuned (FT) models with minimal parameter shifts.\n2. The paper introduces WIDEN (Weight Disentanglement), an innovative method that automatically computes the importance of weights during the merging process. WIDEN disentangles each model’s weights into magnitude and direction components, and then adapts the merging decisions based on the divergence of these components from a shared backbone. This approach removes the need for manually assigning scaling factors and effectively addresses the challenges posed by the varied parameter changes in both fine-tuned (FT) and pre-trained (PT) models.\n3. The experimental results demonstrate that WIDEN outperforms existing merging methods by effectively combining both instruction-following and multilingual capabilities. The paper also evaluates WIDEN in traditional FT-only merging scenarios, where it achieves competitive performance compared to established methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Merging multiple LLMs, particularly those with substantial parameter shifts from pre-training (PT), presents challenges for traditional merging methods. To address this issue, the paper introduces WIDEN (Weight Disentanglement), a novel approach for merging large language models (LLMs) that have undergone either fine-tuning (FT) or pre-training (PT). This method expands the applicability of model merging beyond conventional fine-tuned models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The paper assumes that continually pre-trained (PT) models inherently experience larger weight shifts than fine-tuned (FT) models, which serves as the justification for a new merging approach. However, this assumption may not hold universally, as the degree of weight change in PT models depends on factors such as the data domain and dataset size. This raises questions about the paper’s motivation and the general applicability of its problem formulation. A more thorough exploration or empirical verification of weight changes across PT and FT models would help substantiate this claim. The authors are expected to provide empirical evidence comparing the distribution of weight changes between PT and FT models across different domains, model sizes, and dataset sizes.\n2. The proposed ranking mechanism in WIDEN calculates divergences in magnitude and direction separately for each weight relative to the backbone model. However, the reliability of comparing magnitudes across models with different directional vectors is questionable. When calculating magnitude differences, direction is not considered, meaning that the importance of weights in different models could be misinterpreted if their directions diverge. Similarly, comparing directional differences might be misleading if the corresponding magnitudes differ significantly between models. Have the authors considered alternative approaches that jointly consider both magnitude and direction? Additionally, have the authors empirically analyzed how often such misinterpretations occur in practice due to treating these components separately?\n3. Although WIDEN is intended to be a general merging technique applicable to both FT and PT models, its performance in merging FT models is comparatively weak (as shown in Table 5). Given that the method is designed to be adaptable across model types, this underperformance raises concerns about its overall efficacy. Are there certain characteristics of FT models that WIDEN struggles with?\n4. The experiments primarily focus on merging a specific PT model (Sailor) with an FT model, which limits the generality of the results. Evaluating WIDEN on other PT models, particularly in diverse domains such as finance or healthcare, would provide stronger evidence of its effectiveness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses.\n\nI'm not familiar with LLM merging and am open to discussion if I misunderstood any part of the paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is the first successful attempt to incorporate the abilities of PT LLMs into model merging techniques.\n2. Extensive experiments and analyses have demonstrated the effectiveness of the proposed method.\n3. 
The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a pioneering effort in extending model merging to pre-trained LLMs utilizing weight disentanglement. Extensive studies demonstrate the inability of previous methods to perform when applied to pre-trained LLMs, while the method proposed in the paper solves the task with minimal performance drop compared to the models being merged." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments are limited to Sailor; results on more models would further validate the effectiveness of the proposed method.\n2. Although the method is formulated to handle multiple LLMs, the experiments provide no evidence for this setting. Some experiments from this perspective would be appreciated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does WIDEN handle cases where the backbone (reference pretrained model) diverges substantially in structure or task specificity from both FT and PT models? Would WIDEN work with heterogeneous LLMs beyond those sharing the same backbone?\n- Did the authors attempt to merge more than two or three models to evaluate WIDEN’s scalability and robustness? If so, what were the results, and how does performance change as the number of LLMs increases?\n- Given WIDEN’s better performance on SEA benchmarks than on the OpenLLM Leaderboard, could the authors elaborate on why this discrepancy exists? Is WIDEN more suited to particular types of tasks or linguistic benchmarks?\n- On tasks where Task Arithmetic performs better, why might WIDEN’s performance lag?\n- Since WIDEN modifies weights adaptively, would it be feasible to incorporate it into a continual learning setup where multiple LLMs are progressively merged over time? Could this method be used for models other than LLMs?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper’s effort to expand merging capabilities from FT to PT models is well-motivated and addresses a crucial gap in existing merging techniques.\n- The methodology has a sound technical foundation, with a detailed four-step framework integrating weight disentanglement, ranking, and adaptive score calibration.\n- The experimental setup is thorough, covering both conventional FT merging tasks and the new FT-PT merging setting. 
WIDEN’s performance across SEA and Open LLM Leaderboard benchmarks and comparison with multiple baselines highlights its applicability to diverse LLMs.\n- The impact of each component within WIDEN is evaluated with an ablation experiment in Figure 2, demonstrating the importance of weight disentanglement and score calibration." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents WIDEN, a novel merging technique for Large Language Models (LLMs), which extends the applicability of merging from finetuned (FT) to pretrained (PT) models by disentangling weights into magnitude and direction components. This weight disentanglement enables adaptive merging by quantifying each LLM's alteration relative to a shared backbone. The disentangled weights are ranked using normalized divergence scores compared to the pretrained baseline, and this ranking is used to compute an automated importance factor for each LLM. This results in a generalized form of several existing arithmetic methods for LLM merging. Experimental results suggest that WIDEN effectively balances multiple capabilities, such as multilingual and instruction-following skills, across FT and PT LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although WIDEN generalizes across FT and PT models, it does not consistently outperform Task Arithmetic on all benchmarks. For instance, Task Arithmetic often shows competitive results on Open LLM Leaderboard tasks, raising concerns about WIDEN’s scalability and stability. For example, on the SEA benchmark, the performance improvement on 14B models is smaller than on the 7B model, with the gap between Task Arithmetic and its claimed generalized form WIDEN narrowing as the LLMs become larger.\n- The improvement WIDEN demonstrates is noticeably higher on SEA benchmarks than on the OpenLLM Leaderboard, yet the paper does not clarify why performance fluctuates between benchmarks. This omission raises questions about its adaptability to different domains or task settings.\n- While grid search is used for tuning, the choice of hyperparameters (particularly t and s) lacks justification beyond empirical results. A clearer rationale or theoretical insight into their selection would enhance the robustness of WIDEN’s methodology.\n- Although score calibration is a novel addition to ensure adaptive ranking and merging, values other than 1.0 should be evaluated in score calibration. The \"ease of implementation\" rationale is not a sufficient justification." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We pioneer the extension of large language model merging to include both fine-tuned and pre-trained models by disentangling and adaptively fusing their weights." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024extend,\ntitle={Extend Model Merging from Fine-Tuned to Pre-Trained Large Language Models via Weight Disentanglement},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2pvMZKGYDR},\nnote={under review}\n}" }, "abstract": { "value": "Merging Large Language Models (LLMs) aims to amalgamate multiple homologous LLMs into one with all the capabilities. Ideally, any LLMs sharing the same backbone should be mergeable, irrespective of whether they are Fine-Tuned (FT) with minor parameter changes or Pre-Trained (PT) with substantial parameter shifts. 
However, existing methods often manually assign the model importance, rendering them feasible only for LLMs with similar parameter alterations, such as multiple FT LLMs. The diverse ranges of parameter changes between FT and PT LLMs pose challenges for current solutions in empirically determining the optimal combination. In this paper, we make a pioneering effort to broaden the applicability of merging techniques from FT to PT LLMs. We initially examine the efficacy of current methods in merging FT and PT LLMs, discovering that they struggle to deal with PT LLMs. Subsequently, we introduce an approach based on WeIght DisENtanglement (WIDEN) to effectively extend the merging scope, which first disentangles model weights into magnitude and direction components, and then performs adaptive fusion by considering their respective contributions. In the experiments, we merge Qwen1.5-Chat (an FT LLM with instruction-following skills) with Sailor (a PT LLM with multilingual abilities) across 7B and 14B model scales. Results reveal that: (1) existing solutions usually fail when merging Sailor, either losing both abilities or only retaining instruction-following skills; (2) WIDEN successfully injects the multilingual abilities of Sailor into Qwen1.5-Chat, making it proficient in Southeast Asian languages while also enhancing its fundamental capabilities. In light of previous research, we also merge multiple 13B FT LLMs and observe that WIDEN achieves a balanced amalgamation of instruction following, mathematical reasoning, and code generation skills." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Merging", "Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c766f5c4cccac1e9a77614bec91f7a42b7ee10d3.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/95a2774e9973df118dc8bd697e94a652d73e2a38.zip" }, "title": { "value": "Extend Model Merging from Fine-Tuned to Pre-Trained Large Language Models via Weight Disentanglement" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2qJXhflNbR
A Solver-Aided Hierarchical Language For LLM-Driven CAD Design
main
Active
Computer-Aided Design;Parametric Modeling;Machine Learning;Large Language Models;Programming Languages
applications to computer vision, audio, language, and other modalities
3;3;5;5
4;5;3;4
1;3;2;3
2;2;2;2
2;3;2;3
4
4
2.25
2
2.5
-0.707107
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I think the innovation in the paper has not been spelt out. In particular how is it different from code generation in a particular domain which is a well studied subject\n2. Can something like an SMT solver be used verify the constraints (code) generated?\n3. Are there better evaluation metrics? For example, the productivity of a designer using AIDL as opposed to a traditional CAD engine." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The first DSL for CAD modeling using LLMs\n2. In few-shot regime, AIDL outperforms OpenSCAD" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a hierarchical domain-specific language (DSL) for modeling Computer-Aided Design Applications using LLMS. The idea is to use LLMs for high level reasoning while spatial and geometric reasoning is outsourced to a domain-specific solver. The evaluation compares different aspects of the proposed DSL with OpenSCAD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. How is it different from tool learning? In this case the tool is the solver. In fact you can consider multiple solvers. \n2. Apart from providing a UI, it is not clear what reasoning is carried out by the LLM. It seems to me that the function of the LLM is to compile the constraints that will be solved by the solver. Can you elaborate on the reasoning tasks carried out by the LLM? The use of LLMs is essentially as a code generation tool in a particular domain. Where is the innovation? Can you elaborate how it is different from code generation in a particular domain? \n3. I didn't see any discussion on how to prevent errors being introduced by the LLM. CLIP scores or the perceptual study will not provide any intuition about the behavior of the LLM. Better evaluation methods are needed as well as techniques to prevent bugs induced by the LLM (can an SMT solver be used?)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Has the computational efficiency of AIDL been benchmarked, especially concerning the constraint solver's performance with increasing model complexity?\n2. 
Since LLMs can produce syntactic or semantic errors in code generation, what mechanisms does AIDL have to handle such errors, and how does it impact the overall system reliability? This is important for understanding the system's robustness.\n3. Given that the experiments focus on a limited set of 2D models, how well does AIDL scale when generating more complex or detailed designs?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. AIDL effectively combines LLMs with a geometric constraint solver, enabling the generation of complex CAD models without requiring the LLM to handle intricate spatial reasoning. This approach allows for more accurate and semantically rich designs.\n2. By incorporating hierarchical structures, AIDL facilitates modular design, making it easier to manage and edit complex models. This hierarchical approach aligns well with designers' workflows, improving the practicality of LLM-generated CAD models.\n3. The experiments show that AIDL outperforms OpenSCAD in generating models that are closer to user prompts and are more editable. This is significant because OpenSCAD is included in LLM training data, whereas AIDL is not, highlighting the effectiveness of the language design." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a promising approach to enhancing LLM-driven CAD design through the introduction of AIDL. The innovative integration of a geometric constraint solver and the focus on hierarchical, semantically rich language constructs are notable contributions. However, to strengthen the work, the authors should address the limitations related to performance analysis, error handling, and include user studies to validate the language's practical applicability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks a detailed analysis of the computational overhead introduced by integrating an external constraint solver. There are no benchmarks or discussions on how solver performance scales with model complexity, which is crucial for assessing practicality.\n2. The approach relies heavily on the LLM's ability to generate correct AIDL code based on prompts. Without fine-tuning or extensive training data, there may be inconsistencies or errors in code generation, affecting the system's reliability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1- Why does the paper generate 2D designs instead of 3D? The 2D designs resemble images rather than true CAD designs." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1- Proposed a novel approach for generating CAD programs using hierarchical techniques.\n\n2- Introduced a new application of LLMs for design tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces AI Design Language (AIDL), a new hierarchical domain-specific language (DSL) for CAD design leveraging large language models (LLMs). It presents a novel approach for generating 2D CAD programs through hierarchical techniques, evaluated on 36 prompts with CLIP score as the evaluation metric." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1- The paper evaluated the approach using only 36 prompts, making the dataset quite limited and insufficient for effectively evaluating LLMs.\n\n2- Relying on the CLIP score may not provide an accurate evaluation for generated CAD designs. I strongly recommend creating a larger dataset with ground truth values that can support a more reliable evaluation.\n\n3- The paper presents the results of the proposed approach but lacks a baseline or comparison with other methods in code generation.\n\n4- There is no human evaluation conducted. Given the potential challenges in achieving precise automatic evaluation in this study, incorporating human evaluation would be valuable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Could you help to address my concerns listed on the \"Weakness\" part?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The methodology is well-structured and clearly articulated, allowing readers to easily follow the steps taken in the research. \n\n- The central idea of the work is straightforward, making it accessible to a broad audience. \n\n- The figures presented in the paper are highly effective in illustrating the main contributions of the research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents AIDL (AI Design Language), a solver-aided hierarchical domain-specific language designed to enhance CAD modeling through the capabilities of large language models (LLMs). \n\nTraditional CAD systems struggle with spatial reasoning and procedural geometry generation, which AIDL addresses by offloading complex spatial tasks to an external geometric constraint solver. 
\n\nThe authors identify four key design goals: enabling dependencies on previously constructed geometry, supporting explicit geometric constraints, leveraging the LLM's natural language understanding, and allowing hierarchical design for modularity. \n\nExperiments demonstrate that AIDL outperforms existing CAD languages, such as OpenSCAD, in generating visually accurate and editable models, showcasing that thoughtful language design can significantly improve LLM performance in CAD applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation for requiring a language description to identify the necessary objects is unclear. It is also questionable why a large language model (LLM) is needed to address this problem. For instance, why not leverage an LLM to search various websites for relevant raw CAD files based on specified keywords? Additionally, the discussion of the limitations of existing methods could be rewritten to more clearly articulate the specific challenges faced.\n\n- The proposed method appears to be effective primarily for simpler examples compared to the existing capabilities demonstrated by OpenSCAD (see [OpenSCAD Demo](https://openscad.org/assets/img/screenshot.png)). The examples presented seem easily manageable through direct human editing \"over the CAD object\" or using the OpenSCAD software, raising concerns about the method's practical utility.\n\n- Overall, the technological depth of this paper seems insufficient. Numerous studies have explored the reformulation of various tasks with the aid of LLMs. From my perspective, this paper presents yet another application of this idea without introducing significant advancements or insights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Solver-Aided Hierarchical Language For {LLM}-Driven {CAD} Design},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2qJXhflNbR},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have been enormously successful in solving a wide variety of structured and unstructured generative tasks, but they struggle to generate procedural geometry in Computer-Aided Design (CAD). These difficulties arise from an inability to do spatial reasoning and the necessity to guide a model through complex, long-range planning required for generating complex geometry. We enable generative CAD design with LLMs through the introduction of a solver-aided, hierarchical domain-specific language (DSL) called AIDL, which offloads the spatial reasoning requirements to a geometric constraint solver. Additionally, we show that in the few-shot regime, AIDL outperforms even a language with in-training data (OpenSCAD), both in terms of generating visual results closer to the prompt and creating objects that are easier to post-process and reason about." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Computer-Aided Design", "Parametric Modeling", "Machine Learning", "Large Language Models", "Programming Languages" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c0da7987e373990d064c5ae410a675b7f6271f43.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fee2d828d982821e8d964f5fa9f0708c479aa61f.zip" }, "title": { "value": "A Solver-Aided Hierarchical Language For LLM-Driven CAD Design" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2qvFs9d2jt
Non-linear activation soothes NTK conditioning for wide neural networks: a study in the ReLU case
main
Active
ReLU;non-linear activation function;condition number;NTK;neural tangent kernel;convergence rate
learning theory
5;5;6;6
4;4;2;3
3;3;3;3
2;2;3;3
3;3;3;3
5.5
3.25
3
2.5
3
-0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In the paragraph beginning on line 132, the authors reference a paper by Arora et al., which suggests that deep linear networks accelerate optimization. This claim appears to contradict the message of Section 2 in the paper. A brief comment could clarify this point and help readers better reconcile these perspectives.\n\n- I would suggest expanding the 'Infinite Width Limit' section (line 177) by adding a couple of sentences to clarify what is meant by taking the infinite limit. Specifically, it would be helpful for the authors to specify the type of convergence they refer to and how they manage successive layers in this context. As stated in the theorems ($m \\rightarrow +\\infty$), it seems to imply that the widths of different layers go to infinity simultaneously. However, after a high-level check of the proofs, it appears some arguments use induction on the layers, taking the limit successively, one layer at a time. Adding clarification here would improve reader comprehension and strengthen the rigor of the presentation." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Approaching the study from the perspective of data separability rather than focusing on expressivity proves to be an insightful choice. The insights obtained are interesting and complement the existing results well. Besides, the paper is well-written and accessible to a relatively broad audience. The experiments illustrate well the main findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors study the benefits of using ReLU activation for Wide Feedforward Neural Networks under the NTK framework. Contrary to previous works that focused on expressivity, they adopt a novel perspective and show that ReLU activation yields better data separation in the gradient feature space and, hence, better NTK conditioning when compared to Linear Networks. This effect is even exacerbated with deeper networks. They also illustrate their main results with experiments on synthetic and benchmark datasets (MNIST, etc.)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main limitation I observed, which is often anticipated in papers leveraging the NTK framework, is that this initialization differs from those commonly used in practice. While it allows for theoretical insights, the paper would be significantly strengthened if the authors could provide empirical verification to determine if these findings extend to more practical initialization schemes.\n\nA secondary limitation lies in Theorems 4.2 and 4.3, which establish that enhanced data separability in the gradient feature space concretely benefits NTK conditioning. 
However, these results rest on stronger assumptions, though the experiments partially compensate for this limitation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could the authors compare additional non-linear activation functions in the experiments?\n\n- Is it feasible to extend the current analysis to GeLU or SiLU?\n\n- Can the condition of infinite width be relaxed to require a sufficiently large width?\n\n- There is a typo in line 195; $G$ should be in $\\mathbb{R}^{n \\times n}$." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written, and the claims appear to be sound.\n- The experiments are comprehensive and align well with the theoretical results.\n- The investigation of the angle between two samples after projection into the feature space is both novel and intriguing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper compares deep networks with and without ReLU activation under the NTK regime. They show that ReLU has two effects: (a) there is a larger angle separation for similar data in the feature space; (b) the NTK conditioning improves, i.e., the condition number becomes smaller. They also show that the depth of the network will further enhance these effects." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This paper compares only ReLU networks and linear networks. The results are not surprising, given the established fact that non-linear activations enhance the expressivity of networks.\n\n- The title mentions \"Non-Linear Activation Soothes NTK Condition,\" but the paper focuses solely on ReLU, which is just one type of non-linear activation.\n\n- The NTK regime typically requires the network width to exceed a certain constant. However, the paper assumes that the width approaches infinity. It would be beneficial if the authors could relax this condition." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the findings be generalized to other non-linear activation functions? How might the NTK conditioning change with different functions?\n\n2. 
What are the implications of these findings on network architecture design? Specifically, how might they influence decisions on the depth and width of networks (when the width m is finite)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper provides a thorough theoretical analysis backed by empirical evidence demonstrating that ReLU activation improves both the separation of data in feature space and the conditioning of the NTK." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the impact of non-linear activation functions, specifically the ReLU, in wide neural networks. The authors demonstrate that ReLU activation improves data separation in feature space and enhances the conditioning of the NTK, leading to better theoretical convergence rates of gradient descent optimization algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The analysis is specifically focused on networks with ReLU activation, and the results primarily demonstrate that the ReLU NTK outperforms the linear NTK, which may seem somewhat limited in scope.\n\n\nTypo: Line 209 $\\nabla f(x)(z) \\to \\nabla f(z)$" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Out of interest, can the analysis of this paper be applied only to ReLU? In other words, does this paper use specific properties of ReLU in the proof? For example, can it be generalized a little to Leaky ReLU ($ax$ when $x<0$ and $x$ when $x \\geq 0$; the case $a=0$ recovers ReLU)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The strength of this paper is showing that the ReLU activation function yields better data separation and better NTK conditioning. The paper also points to an optimization benefit: ReLU networks improve the worst-case convergence rate of gradient descent, and deeper ReLU networks converge faster than shallower ones." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper theoretically studies the beneficial effects and interesting properties of the ReLU activation function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As mentioned in the Conclusion and Discussion, the analysis focuses on the finite-depth case and does not directly extend to the infinite-depth case." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "we showcase a new and interesting property of certain non-linear activations, focusing on ReLU: the non-linearity help to decrease the NTK condition number and potentially accelerate optimization for wide neural networks" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024nonlinear,\ntitle={Non-linear activation soothes {NTK} conditioning for wide neural networks: a study in the Re{LU} case},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2qvFs9d2jt},\nnote={under review}\n}" }, "abstract": { "value": "Non-linear activation functions are well known to improve the expressivity of neural networks, which is the main reason of their wide implementation in neural networks. In this work, we showcase a new and interesting property of certain non-linear activations, focusing on the most popular example of its kind - Rectified Linear Unit (ReLU). By comparing the cases with and without this non-linear activation, we show that the ReLU has the following effects: (a) better data separation, i.e., a larger angle separation for similar data in the feature space of model gradient, and (b) better NTK conditioning, i.e., a smaller condition number of neural tangent kernel (NTK). Furthermore, we show that the ReLU network depth (i.e., with more ReLU activation operations) further magnifies these effects. Note that, without the non-linear activation, i.e., in a linear neural network, the data separation and NTK condition number always remain the same as in the case of a linear model, regardless of the network depth. Our results imply that ReLU activation, as well as the depth of ReLU network, helps improve the worst-case convergence rate of GD, which is closely related to the NTK condition number." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "ReLU", "non-linear activation function", "condition number", "NTK", "neural tangent kernel", "convergence rate" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/84390ce46ac3add9f84866e3316a5f6a81ef12a8.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Non-linear activation soothes NTK conditioning for wide neural networks: a study in the ReLU case" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2rBLbNJwBm
ELBOing Stein: Variational Bayes with Stein Mixture Inference
main
Active
variational inference;particle-based inference;variance collapse
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
5;5;6;8
3;3;2;4
2;2;3;4
2;3;3;3
3;2;3;4
6
3
2.75
2.75
3
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Q1: Can the authors speculate on performance as a function of the parameter count, e.g., sticking to BNNs, at which depth/width would the method start to struggle?\n- Q2: What are the increased runtime costs compared to compared baselines?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The method is a straightforward and effective extension of SVGD/NSVGD\n- The paper is well-written and easy to follow and the same goes for the provided codebase" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors improve Stein variational gradient descent (Liu & Wang, 2016) by extending nonlinear SVGD (Wang & Liu, 2019) by learning a density-based mixture model to approximate the posterior, instead of solely relying on particles (i.e., delta-distributions).\nThey evaluate their method on a set of (small) scale regression tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experiments are rather small-scale and limited to regression data sets. Their aim seems to be primarily to demonstrate the relative performance of the proposed approach compared to prior SVGD-related approaches rather than, its absolute performance. In the list of baselines, at least a comparison against an HMC performance on the UCI data sets would have been nice to see how close it can come to it (or improve upon it).\n- The paper lacks ablations to evaluate what happens as an underlying BNN gets deeper, i.e., to what extent it can handle the increase in parameters. A deep experiment could be a combination with last-layer BNNs, i.e., learn the mixture not for the whole net, but treat only the penultimate layer in a Bayesian fashion.\n- The experiments are limited to regressions with a homoscedastic, known observation noise. What about classification or heteroscedastic regression tasks? \n- Citing Agarap (2018) in l478 as a reference for ReLUs seems rather odd. In their work, they evaluate the usage of a ReLU in place of a softmax for classification, i.e., nothing related to the current work nor has the ReLU been introduced in that paper.\n\n### Typos\n- l233 lacks a second closing bracket" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It’s challenging to distinguish the lines representing different methods in Figure 2 (e.g. $SMI_{1}$, $SMI_{20}$). \nUsing distinct colors for each method would improve the visualization and make the differences clearer.\n2. The experiments in Section 6.1 demonstrate that SMI overcomes variance collapse. It would also be valuable to assess whether the approximate distribution given by SMI accurately captures the shape of the posterior. \nThis could be evaluated by comparing the estimated covariance matrix with the target covariance matrix." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem addressed in this paper is both important and compelling. Traditional approaches like ordinary mean-field variational inference (OVI) and Stein Variational Gradient Descent (SVGD) often experience variance collapse, whereas SMI provides more accurate variance estimates, improving uncertainty quantification.\n\n2. The paper is well-written, providing a clear background and a thorough summary of related work. As someone slightly unfamiliar with the field, I particularly appreciated the authors' effort to re-explain and contextualize prior results, which greatly helped in assessing the paper's contributions.\n\n3. SMI is compared with other methods across a variety of synthetic and real-world datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Stein Mixture Inference (SMI), which optimizes a lower bound to the Evidence Lower Bound (ELBO). \nSMI extends Nonlinear Stein Variational Gradient Descent (NSVGD) to the variational Bayes setting and addresses the issue of variance collapse. \nThe effectiveness of SMI is demonstrated on both synthetic and real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Variational inference offers a compelling alternative to sampling methods like MCMC due to its efficiency, especially in high-dimensional settings and with large-scale datasets. \nHowever, the current validation of SMI is limited to small to moderately-sized models, which somewhat limits its appeal and persuasiveness for broader, large-scale applications.\n\n2. The paper lacks theoretical insights or guidance on how SMI’s performance depends on the number of particles $m$.\nProviding recommendations or analysis on selecting an appropriate particle count $m$ would greatly enhance its practical applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Could you provide additional analysis or intuition on why the combination of an ELBO-like objective from VI and the Non-linear SVGD framework effectively mitigates variance collapse? Specifically, how does modeling a mixture of approximate distributions around the vicinity of SVGD particles help in avoiding variance collapse? A more detailed explanation or visual representation would be appreciated.\n - For example, could you provide a more detailed explanation or visual representation of how the mixture components interact with the ELBO objective to mitigate variance collapse? For instance, a step-by-step explanation or diagram illustrating how the proposed method addresses the variance collapse problem would be helpful.\n- Why is the integration with the NSVGD framework necessary in your method? Is there evidence that the entropy regularization term alone is insufficient to address variance collapse? Given that Figure 2 shows variance collapse is mitigated even when $\\alpha$ takes small values, does this imply that the regularization component may not be as critical? If so, what is the rationale for including it in the framework?\n- Why were methods such as HMC and MMD descent not included in the comparative analysis, especially given their relevance in approximate inference and their use in experiments in (Ba et al., 2021)?\n - If possible, could you add comparisons with HMC (Neal, 2011) and MMD descent (Arbel et al., 2019; Ba et al., 2021) in the experimental section, particularly at least on the UCI datasets, to provide a broader context for evaluating SMI’s performance in addressing variance collapse? If a full comparison is not feasible, could you discuss how SMI might be expected to compare to these methods theoretically or empirically, based on existing literature?\n- Could you elaborate on why the resampling method from (Ba et al., 2021) was excluded as a comparative method, despite the computational resources available (e.g., “NVIDIA Quadro RTX 6000 GPU”)? Is this method genuinely computationally infeasible for UCI benchmark datasets, or were there other factors influencing its exclusion?\n - So, could you include the resampling method from Ba et al. (2021) in your comparisons, particularly on the UCI datasets, to strengthen the evaluation? If this is not feasible, could you provide a more detailed justification for why it is computationally infeasible, even with the available GPU resources? Additionally, if an empirical comparison is truly not possible, could you discuss how SMI theoretically compares to the resampling approach?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The application of variational inference (VI) concepts to Stein Variational Gradient Descent (SVGD) appears novel and intriguing.\n- The authors validate their VI-based approach through numerical experiments on several UCI benchmark datasets, demonstrating good performance. The results seem to suggest that this approach effectively mitigates the impact of variance collapse." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "- The authors propose a method called Stein Mixture Inference (SMI) to address the issue of variance collapse observed in Stein Variational Gradient Descent (SVGD).\n- SMI extends the Nonlinear SVGD framework (Wang & Liu, 2019) to variational inference (VI) by allowing each particle to parameterize a component distribution within a mixture model, thereby introducing ELBO-like objectives.\n- The authors show that SMI offers several advantages over standard SVGD by effectively mitigating variance collapse." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Insufficient Analysis of the Motivation Behind Extending SVGD with VI for Variance Collapse Mitigation:\n- The main objective of this paper, as I understand it, is to mitigate variance collapse by extending the SVGD objective function through a combination of an ELBO-like objective from VI and the Non-linear SVGD framework. However, it is not entirely clear \"why\" this extension effectively mitigates variance collapse. While Figure 1 provides a conceptual illustration, it does not intuitively explain how the proposed method addresses variance collapse. Additionally, while the third paragraph of the Introduction discusses the motivation for this approach, it remains unclear how using a mixture of approximate distributions around SVGD particles with a VI-inspired objective avoids variance collapse.\n- The authors propose controlling particles through variational distributions, similar to VI, as a solution to the variance collapse issue. However, given the use of the NSVGD framework, the critical role of this aspect remains unclear. The entropy regularization term could potentially affect not only mode collapse but also variance collapse. If the VI-inspired approach is indeed effective, the method should perform well even with $\\alpha=0$. In this context, Figure 2 shows that variance collapse is mitigated even when $\\alpha$ takes small values, suggesting that particle control via variational distributions may be effective. On the other hand, this result implies that regularization may not play a significant role, raising questions about the necessity of combining it with NSVGD. Overall, it remains unclear why the NSVGD framework is essential and which part of the proposed approach effectively addresses variance collapse.\n\n### Concerns Regarding the Limited Number of Comparative Methods:\n- For sample approximations of the posterior distribution, methods such as HMC (Neal, 2011) and MMD descent (Arbel et al., 2019; Ba et al., 2021) are also effective. However, this study only compares performance within the SVGD family of methods and EVI, leaving questions about the extent to which the proposed method mitigates variance collapse in the broader context of approximate inference. Given that (Ba et al., 2021) also includes these methods in numerical experiments addressing variance collapse, this comparison is essential for validating contributions in this research area.\n- Additionally, the absence of a comparison with the resampling method proposed by (Ba et al., 2021) raises concerns regarding the integrity of the performance evaluation. While the authors argue in Section 5 that the resampling method is computationally infeasible, I believe this does not fully justify its exclusion as a comparative method. 
Given the availability of an “NVIDIA Quadro RTX 6000 GPU,” running such methods may not be computationally prohibitive, at least for datasets like the UCI benchmarks.\n- Furthermore, I find it difficult to agree with the authors’ claim: “Annealing SVGD (ASVGD) D’Angelo & Fortuin (2021a) is the only alternative that directly addresses variance collapse in SVGD with a viable method.” I believe that the resampling method proposed by (Ba et al., 2021) is also aimed at mitigating the variance collapse problem.\n\n### Citation:\n- (Neal, 2011): R. M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11):2. https://arxiv.org/abs/1206.1901.\n- (Arbel et al., 2019): M. Arbel, A. Korba, A. Salim, and Arthur Gretton. Maximum Mean Discrepancy Gradient Flow. NeurIPS 2019. https://arxiv.org/abs/1906.04370." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does SMI compare with ADVI with mixtures? \n- How is each component in the mixture distributions parameterized?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I believe the method is novel and the main idea is sound. The claims are clearly presented and supported by empirical evidence. Overall it is a complete work with a few concerns." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The posterior of a Bayesian model is sometimes intractable, calling for approximate inference techniques. This paper focuses on the idea of approximating the Bayesian posterior with a mixture distribution where each component is parameterized separately but still in the same family. By viewing the ELBO as an objective with permutation-invariant parameters, this paper incorporates ideas from Nonlinear-SVGD (NSVGD) and develops a Stein-style update rule for the mixture parameters. The resulting method, called Stein Mixture Inference (SMI), avoids variance collapse. The paper also shows that asymptotically the optimized bound is an ELBO.\n\nIn the experiments, it is first shown that Stein-based methods suffer from variance collapse in synthesized Gaussian models and 1D regression models. In contrast, SMI and vanilla VI (OVI) produce the desired uncertainty. On the UCI regression benchmarks, SMI gives the best NLL in most cases, compared with other Stein-based methods and OVI." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has the title of ELBOing Stein, but I would rather call it Steining ELBO, which seems a bit unnecessary. To convince me to increase my score, I would like to see a discussion of [1], which also uses a mixture distribution to approximate the posterior.
\n\nIf one is using a mixture distribution to target the Bayesian posterior, the most direct approach would be to try VI, instead of deriving a complicated Stein-based method. One pro of VI is that the mixture weights can be adjusted, while one con is that it does not fully make use of the exchangeability of parameters. However, this work only considers mean-field VI, which is a really weak baseline. I would like to see how the permutation invariance helps the optimization of the mixture distribution.\n\nIt is not surprising that optimizing a VI objective protects the approximate posterior from the pitfalls of SVGD. As shown in the paper, OVI does not have the issue. The argument that \"SMI is more particle-efficient than SVGD\" is translated to me as \"VI with mixtures is more particle-efficient than SVGD\". Then what is the point of using Stein?\n\nLine 150 says that \"Particle methods are attractive due to their freedom from strong parametric assumptions\". The mixture distribution in this paper seems to be a strong parametric assumption, especially when it uses fewer particles than SVGD, which further drags this work away from Stein. \n\nThe experiment section is also rather weak. The benchmark models all have very low dimensions. I expect a Bayesian inference algorithm in 2024 to be tested on more recent benchmarks, like models in posteriorDB, or larger BNN problems.\n\n[1] Morningstar, W., Vikram, S., Ham, C., Gallagher, A., & Dillon, J. (2021, March). Automatic differentiation variational inference with mixtures. In International Conference on Artificial Intelligence and Statistics (pp. 3250-3258). PMLR." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present Stein mixture inference, a particle-based inference method that mitigates variance collapse in moderately sized models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024elboing,\ntitle={{ELBO}ing Stein: Variational Bayes with Stein Mixture Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2rBLbNJwBm},\nnote={under review}\n}" }, "abstract": { "value": "Stein variational gradient descent (SVGD) (Liu & Wang, 2016) performs approximate Bayesian inference by representing the posterior with a set of particles.\nHowever, SVGD suffers from variance collapse, i.e., poor predictions due to underestimating uncertainty (Ba et al., 2021), even for moderately-dimensional models\nsuch as small Bayesian neural networks (BNNs). To address this issue, we generalize SVGD by letting each particle parameterize a component distribution in\na mixture model. Our method, Stein Mixture Inference (SMI), optimizes a lower\nbound to the evidence (ELBO) and introduces user-specified guides parameterized\nby particles. SMI extends the Nonlinear SVGD framework (Wang & Liu, 2019) to\nthe case of variational Bayes. SMI effectively avoids variance collapse, judging by\na previously described test developed for this purpose, and performs well on standard data sets. In addition, SMI requires considerably fewer particles than SVGD\nto accurately estimate uncertainty for small BNNs. The synergistic combination of\nNSVGD, ELBO optimization and user-specified guides establishes a promising\napproach towards variational Bayesian inference in the case of tall and wide data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "variational inference", "particle-based inference", "variance collapse" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/eef7631be8c30a31b333ac97459eb0443277bf4f.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/44b9e6c3a00bfe6d262797cd2759ef1d68c4beca.zip" }, "title": { "value": "ELBOing Stein: Variational Bayes with Stein Mixture Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2rWbKbmOuM
MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks
main
Active
evaluation of multimodal large language models
datasets and benchmarks
6;6;6;8
4;3;4;3
2;3;3;4
3;2;3;3
3;3;3;4
6.5
3.5
3
2.75
3.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "# Overall assessment\n\nThis work presents an interesting contribution in a much-needed space (benchmarks for multimodal large models). To address the current scattershot approach to multimodal model benchmarking, the authors attempt to create a single, highly diverse, comprehensive benchmark for a variety of image-language tasks (including video). To construct the benchmark the authors develop and refine a task taxonomy, but some details around the taxonomy and its construction are unclear. I have concerns about how the benchmark would be used in practice related to the 40 different evaluation metrics, and the distribution over various attributes (number of images, task type, etc.) but am willing to increase my score based on discussion with authors and other reviewers.\n\n# Major comments\n\n* The quality of the benchmark ultimately relies on the authors' proposed taxonomy, as this forms the basis for all data collection. However, I found the description of the annotation process somewhat disappointing; it effectively amounts to \"the Feynman Method\" (write down the problem, think hard about it, write down the solution). Critically, the authors provide no discussion or framing around the \"conceptualization stage\" for how they identified the top levels of the taxonomy (perception, planning, reasoning), nor how the annotators were selected or why they are representative of the whole of relevant multimodal knowledge (the sample of annotators could also bias the coverage in various ways). Please provide a clear discussion of (a) what the levels of the taxonomy are (please give the full list) and (b) how these levels were identified and why they comprise a holistic benchmark and (c) the disciplines of the annotators (since the authors state they are graduate or above from diverse disciplines).\n\n* The diversity of output formats is an interesting contribution. However, the diveristy of evaluation metrics (over 40 metrics?!) also makes this benchmark somewhat unwieldy, and raises concerns about usability. These issues arise even in the authors' main findings, stated at the end of Section 1. For example, it is very difficult to understand what it means that GPT-4o is 3.5% better than Claude 3.5? What makes this a \"significant margin\"? If Qwen2-VL is 10% better than other open source models, what does this mean? T\n\n* It is not clear whether all tasks in the benchmark have a single, objective answer. 
This makes it difficult to assess models' capabilities (for example, failure to write a LaTeX equation may simply be due to a difference in formatting; writing a story containing two animals hinges on many different criteria that are difficult to assess).\n\n* The advantages of a single, diverse, high-coverage benchmark are outlined nicely in the introduction. However, the paper's contribution hinges on whether it does indeed achieve strong coverage of a \"diverse\" suite of tasks. Ultimately, this is nearly impossible to assess, but I have some concerns about the \"conceptualization\" process above that make me unsure that this benchmark is as comprehensive as the authors claim. On the other hand, the existing benchmarks are also imperfect (and a direct comparison to existing benchmarks in terms of content and task design would make it easier to assess whether the benefits of the new benchmark outweigh the potential downsides and complexity).\n\n* It is unclear why certain task distributions are set as the authors designed them in the benchmark. For example, why do only 4% of tasks have 6-8 images, while 8% have 9+ images? Why are 16% of tasks open-ended while 22% are structured? These design decisions can have significant effects when averaging over benchmarks, as will likely occur with this benchmark.\n\n* The empirical study is useful, appears comprehensive, and leads to some interesting conclusions.\n\n* It seems unlikely that the benchmark will last very long while relying on GPT-4o as judge. Is it possible to substitute the LLM judge in the benchmark if a stronger frontier model emerges?\n\n\n# Minor comments\n\n* Another relevant multimodal baseline the authors may want to reference: Bitton, Yonatan, et al. \"Visit-bench: A dynamic benchmark for evaluating instruction-following vision-and-language models.\" Advances in Neural Information Processing Systems 36 (2023): 26898-26922.\n\n# Typos etc\n\n* \"these models have shown great potential to solve any desired task with a well-designed prompt\" - this is editorializing somewhat; please revise.\n\n* L111-113: \"comprehensive studies...have discovered\" passive voice, consider revising" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a new benchmark for multimodal LLMs. The authors attempt to create a novel, diverse, comprehensive benchmark for vision-language reasoning using a several-stage process for designing the benchmark, refining the questions, and developing appropriate metrics. The authors conduct a comprehensive large-scale evaluation of current SOTA multimodal models using the benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "see above" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. 
Could you explain more about the background of your 16 annotators, and how you ensure that, for all task instances, the instruction and solution align with each other?\n2. For the open-ended tasks, you mentioned using an LLM-assisted metric. How do you handle the potential for bias in the evaluation process, given that the scoring is dependent on a proprietary LLM? If we use different LLMs as judges, will their ratings differ a lot from each other?\n3. What are the considerations and challenges you foresee when scaling MEGA-BENCH even further? How do you plan to maintain the benchmark's relevance and diversity as new multimodal tasks emerge?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. MEGA-BENCH has a large scale and coverage, containing over 500 diverse real-world tasks, which allows for an in-depth assessment of multimodal models across various applications and skills.\n\n2. It offers a sophisticated, fine-grained analysis capability by categorizing tasks along multiple dimensions, providing a nuanced understanding of model performance in specific areas and revealing strengths and weaknesses that aggregate scores might obscure.\n\n3. The benchmark's design emphasizes cost-effectiveness and efficiency, demonstrating that increasing task diversity is more valuable for gaining performance insights than simply adding more examples per task." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents MEGA-BENCH, a comprehensive multimodal benchmark scaling up to over 500 real-world tasks, designed to assess the diverse capabilities of vision-language models. It offers a fine-grained analysis across various dimensions, including application, input type, output format, and skills, and provides customized metrics for different output formats. The benchmark reveals significant performance variations among state-of-the-art models, emphasizing the importance of task diversity over increasing examples per task for insightful model evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While MEGA-BENCH offers a vast array of tasks, its large scale may lead to increased computational costs and complexity in evaluation, potentially limiting its accessibility for further research and extensive exploration.\n2. MEGA-BENCH's focus on breadth may result in some tasks being too specific or niche, which could limit the generalizability of the benchmark results to a broader range of multimodal problems and applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1: There are many different input/output formats and metrics in Mega Bench. 
How does Mega Bench address the challenge of \"Unmanageable Setups\" mentioned in the introduction?\n\nQ2: Are there any copyright/privacy concerns for the tasks in the benchmark?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1: The proposed open-source benchmark includes a large number of diverse tasks for LLMs that can potentially address the limitations of existing benchmarks. It provides a valuable resource for the community.\n\nS2: The paper also provides extensive experiments and analysis of popular LLMs using Mega Bench. It yields many interesting findings.\n\nS3: This paper is well-written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Mega Bench, a comprehensive benchmark for evaluating multimodal models on over 500 tasks. Mega Bench features a wide range of output formats and uses multiple metrics for evaluation. It includes a detailed capability report for popular language and vision models. The benchmark's assessment of leading models reveals significant performance differences and the importance of task diversity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### Major weaknesses\n\nW1: The rationale behind the task taxonomy tree is not well-explained. Section 3.1 can be strengthened by discussing the design considerations for the draft taxonomy tree. For example, why do we want perception, planning, reasoning? Are these the limitations of existing benchmarks? How do we know this taxonomy is comprehensive and reflects the real usage of LLMs?\n\nW2: The introduction highlights Mega Bench's contributions in multimodal tasks. However, there is limited information regarding non-text tasks in Section 3. I recommend adding a few non-text tasks in Figure 4 and discussing the image and video tasks included in Mega Bench in Section 3.\n\nW3: It is unconvincing that Mega Bench makes significant contributions over existing benchmarks. In the introduction, the paper lists four limitations of existing benchmarks: (1) limited output diversity, (2) lack of task coverage, (3) expensive inference cost, and (4) unmanageable setups. Sections 3 and 4 explain how Mega Bench addresses limitations (1) and (2), but (3) and (4) remain unaddressed in the paper. I recommend discussing what makes Mega Bench less expensive and easier to run compared to other popular benchmarks.\n\n### Minor weaknesses\n\nM1: Replace $ (L83).\n\nM2: The claim that \"many examples or tasks are highly similar in the capabilities that they assess\" requires evidence to back it up (L83-83).\n\nM3: The tasks in Mega Bench have a lot in common with those in Big Bench. A detailed comparison to Big Bench would be beneficial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Has the authors' team conducted any analysis on the environmental impact of the computational resources required for the benchmarking process? If so, could they share some insights?\n\nAre there plans to release the annotation tools, pre-processing pipelines, and evaluation metrics as open-source to facilitate community-wide reproducibility and further development?\n\nCould the authors discuss how the tasks in MEGA-BENCH map to real-world applications? Are there any tasks that are particularly relevant to current industry needs or future technological trends?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The creation of MEGA-BENCH is an original contribution to the field of multimodal AI evaluation. It scales up the number of tasks to an unprecedented level, offering a comprehensive assessment of model capabilities across a vast array of real-world applications. The approach of embracing diverse output formats and developing over 40 metrics to accommodate these is innovative, moving beyond the limitations of traditional multi-choice question-based benchmarks.\n\nThe quality of the work is evident in the meticulous construction of the benchmark. With 507 realistic tasks and over 8,000 samples collected from 16 expert annotators, the dataset is both extensive and rich in diversity. The rigorous annotation process, including the development of an annotation GUI and a taxonomy tree, ensures high-quality data that is well-suited for evaluating multimodal models.\n\nThe paper is well-structured and clearly articulated. The figures and tables are effectively used to convey complex information in a digestible manner. The taxonomy tree and the breakdown of tasks across different dimensions are particularly clear, aiding the reader in understanding the scope and organization of MEGA-BENCH." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces MEGA-BENCH, a comprehensive multimodal evaluation suite that encompasses over 500 real-world tasks, addressing the diverse daily use cases of end users. Its goal is to optimize for high-quality data samples that cover a wide range of multimodal tasks while facilitating cost-effective and accurate model evaluation. The authors have compiled 507 realistic tasks with over 8,000 samples from 16 expert annotators, embracing various output formats and developing over 40 metrics to accommodate these formats. MEGA-BENCH provides a fine-grained capability report across multiple dimensions, enabling in-depth interaction with and visualization of model capabilities. The paper also evaluates various state-of-the-art vision-language models using MEGA-BENCH, revealing significant performance variations among models that were previously thought to be similar." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper presents a snapshot of model performance but does not address how these benchmarks might be used to track performance over training time. A good benchmark should be verified by scaling laws." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024megabench,\ntitle={{MEGA}-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2rWbKbmOuM},\nnote={under review}\n}" }, "abstract": { "value": "We present MEGA-Bench, an evaluation suite that scales multimodal evaluation to over 500 real-world tasks, to address the highly heterogeneous daily use cases of end users.\nOur objective is to optimize for a set of high-quality data samples that cover a highly diverse and rich set of multimodal tasks, while enabling cost-effective and accurate model evaluation.\nIn particular, we collected 507 realistic tasks encompassing over 8,000 samples from 16 expert annotators to extensively cover the multimodal task space. Instead of unifying these problems into standard multi-choice questions (like MMMU, MM-Bench, and MMT-Bench), we embrace a wide range of output formats like numbers, phrases, code, \\LaTeX, coordinates, JSON, free-form, etc. To accommodate these formats, we developed over 40 metrics to evaluate these tasks. \nUnlike existing benchmarks, MEGA-Bench offers a fine-grained capability report across multiple dimensions (e.g., application, input type, output format, skill), allowing users to interact with and visualize model capabilities in depth. We evaluate a wide variety of frontier vision-language models on MEGA-Bench to understand their capabilities across these dimensions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "evaluation of multimodal large language models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6b59cdfbb4eee03ecd5f2a746bfa6b39afa358ad.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2rnOgyFQgb
SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning
main
Active
Network Quantization;Zero-shot Quantization
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;5;6
5;4;4;4
3;3;3;3
3;3;3;3
2;3;3;3
5.25
4.25
3
3
2.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does SYNQ handle different types of noise, and is its performance consistent across various noise levels? Before and after the low-pass filter, what is the changes of generated images?\n2. There are more related papers should be included, such as 'Data-Free Learning of Student Networks', ‘Data-free network quantization with adversarial knowledge distillation’ and others." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. SYNQ offers a unique solution to the problem of quantizing models without access to training data, which is a significant contribution to deploying neural networks on edge devices.\n2. Addressing Key Challenges: The paper clearly identifies and addresses three major challenges in ZSQ, providing a comprehensive approach to improving the accuracy of quantized models.\n3. Empirical Validation: Extensive experiments demonstrate SYNQ's effectiveness, showing improvements in classification accuracy over existing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents SYNQ (Synthesis-aware Fine-tuning for Zero-shot Quantization), a novel framework designed to address the challenges associated with zero-shot quantization (ZSQ) of pre-trained models, particularly in scenarios where training data is inaccessible due to privacy or security concerns. SYNQ tackles three main issues: noise in synthetic datasets, off-target pattern predictions, and misguidance from erroneous hard labels. The proposed method employs a low-pass filter to reduce noise, optimizes class activation map (CAM) alignment to ensure correct image region prediction, and uses soft labels for difficult samples to prevent misguidance. The authors show that SYNQ achieves state-of-the-art accuracy in image classification tasks compared to existing ZSQ methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper focuses on image classification, it's unclear how SYNQ would perform in other tasks such as object detection or segmentation.\n2. The paper could provide more details on the computational overhead introduced by SYNQ, especially the impact of the low-pass filter and CAM alignment.\n3. The paper could benefit from a deeper analysis of SYNQ's robustness to different types and levels of noise in synthetic datasets." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. While SYNQ has been evaluated on W3 and W4, how does it perform under extremely low-bit (e.g., 2-bit) conditions? For example, GENIE [1], one of the ZSQ methods, demonstrated performance not only on W3 and W4 but also on W2. It would be beneficial to add it as a baseline and show performance in low-bit settings as well.\n2. What is the performance variation according to the size of the generated synthetic dataset?\n\n[1] Jeon et al., \"GENIE: Show Me the Data for Quantization. \", CVPR 2023." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The observations regarding the three limitations of ZSQ are interesting, and the proposed method appears feasible.\n2. The performance is validated through a variety of experiments. Specifically, experiments were conducted to verify the performance of SYNQ by comparing it with various ZDQ baselines on not only CNN-based models but also ViT-based models.\n3. The detailed analyses of the three components of SYNQ enhance the persuasiveness of the methodology.\n4. This paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a synthesis-aware fine-tuning method, SYNQ, to improve zero-shot quantization (ZSQ) performance. SYNQ defines the issues of ZSQ as follows: 1) high-frequency noise in the generated synthetic dataset, 2) predictions based on off-target patterns, and 3) misguidance by hard labels. SYNQ effectively addresses these issues to improve ZSQ performance through the use of a low-pass filter, CAM alignment, and hard label filtering." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the observations presented in the paper are interesting, most of the experimental evidence provided was gathered under limited conditions. For instance, in Figure 5, experiments were shown only for TexQ among various baseline models, and the analysis for CIFAR-10 and CIFAR-100 used as benchmarks in Table 1 was omitted.\n2. In Figure 2, the heat map is shown only one sample image.\n\nFor these reasons, it is difficult to be certain whether the presented observations are phenomena that can be observed only in limited baselines and datasets or are generally seen across ZSQ methods. Therefore, the authors should provide experimental evidence across various baselines and datasets beyond the limited settings." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The reviewer thinks that off-target prediction problems may occur under the DFQ scenario, not general quantization scenarios. Nevertheless, did the authors examine whether the problem occurred with real data?\n- What is the experimental setting of the baseline in Table 3 such as the quantization algorithm? (the same result is not in the Table 1)\n- In Figure 7,\n - Does this tendency still hold with other models and datasets? For instance, the distribution of model outputs that are used as a measurement of difficulty can be different if the number of classes differs. With various models (e.g., ResNet, MobileNet, etc) and datasets (e.g., CIFAR10, CIFAR100, ImageNet), is the optimal $\\tau$ always 0.5 or values close to it?\n - The magnitude of $\\lambda_{CAM}$ in (a) are much larger than those of $\\lambda_{CE}$. Is the magnitude of CAM loss on average much smaller than that of $\\lambda_{CE}$ loss?\n- In Table 5 of the appendix,\n - Those experiments are based on 3 works that use a generator. However, SynQ adopts noise optimization for generating images. Why aren’t other works that adopt noise optimization addressed?\n - Those 3 works are improved with SynQ. However, they are worse than SynQ itself. Can the authors’ opinions about this provided?\n - How about applying SynQ to other works based on noise optimization?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper tackles the limitations of previous works well.\n- The paper tries to denoise synthesized images with a loss-pass filter. This idea is a good point that highlights the importance of classical techniques and theories in the recent AI era.\n- The paper identifies the off-target prediction problem that occurs only in the data-free quantization scenario. It is a good suggestion that analyzes the performance of a quantized model with grad GAM and uses it as a loss function.\n- The paper executes various experiments and ablation studies for validating proposals." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work points out several problems that prior works of Data-free quantization (DFQ) have.\nFirst, synthesized images are noisy compared to their real counterparts.\nSecond, models quantized with synthesized images tend to predict based on incorrect image patterns.\nIn addition, the paper claims that using hard labels on hard samples can cause misguidance.\n\nTo resolve these problems, the paper proposes three methods.\n- The paper revisits classical signal processing and removes the noise of generated images with a low-pass filter.\n- To align activation maps between a full precision model and a quantized model, the paper proposes to use grad CAM as a loss function.\n- By considering model outputs as the difficulty of the sample, the paper proposes to omit CE loss if the input is a difficult sample." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper refers to the limitations of previous works too much. The same content is repeated over 3 times.\n- The investigation and analysis of prior works is insufficient.\n - The paper notes that using hard labels can be harmful to the performance of quantized models, pointing out that previous works used both hard labels (CE loss) and soft labels (KL divergence loss). It can be a novelty that determines the usage of CE loss according to difficulty. However, there already exist several works that use a soft label instead of a hard label. For instance, Qimera proposed to use the coefficient of superposed latent embedding as soft labels. AIT also pointed out the importance of soft labels and used soft labels only for the loss function.\n - The results of current state-of-the-art works are omitted. In the CNN domain, GENIE shows better performance than this work. Also, in transformer variants, PSAQ-ViT V2 shows better results. Those works should be addressed.\n- Generated images with SynQ can help understand the superiority of the proposal. Please attach generated images with SynQ (before and after applying the low-pass filter)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The authors could refer to the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The manuscript exhibits a coherent structure and is straightforward to navigate. Figures 1 through 3 effectively illustrate the key observations and the rationale behind our approach. Notably, the visualization of the Magnitude spectrum in Figure 1 is particularly engaging. To the best of my knowledge, this method is the first zsq to complete experiments on both cnn and vit, which is appreciated." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "They propose SYNQ that targets to overcome the following limitations of current ZSQs:\n1. noise in the synthetic dataset; 2. off-target patterns; 3. misguidance by erroneous hard labels." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.line382:The header method of Table 1 is incorrectly written as CIFAR dataset\n2. line237: The Low-pass filter (Section 4.2) directly modifies the image's impact on the model without using artificial visual features to judge whether it is good or bad. Does Low-pass filters have advantages over the existing ZSQ? \n3. Is Fig.2 different at different bit widths/networks? Is this a general situation in ZSQ?\n4. Lack of computational cost analysis comparison with state of the art methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024synq,\ntitle={SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2rnOgyFQgb},\nnote={under review}\n}" }, "abstract": { "value": "How can we accurately quantize a pre-trained model without any data?\nQuantization algorithms are widely used for deploying neural networks on resource-constrained edge devices.\nZero-shot Quantization (ZSQ) addresses the crucial and practical scenario where training data are inaccessible for privacy or security reasons.\nHowever, three significant challenges hinder the performance of existing ZSQ methods: 1) noise in the synthetic dataset, 2) predictions based on off-target patterns, and the 3) misguidance by erroneous hard labels.\nIn this paper, we propose SynQ (Synthesis-aware Fine-tuning for Zero-shot Quantization),\na carefully designed ZSQ framework to overcome the limitations of existing methods.\nSynQ minimizes the noise from the generated samples by exploiting a low-pass filter.\nThen, SynQ trains the quantized model to improve accuracy by aligning its class activation map with the pre-trained model.\nFurthermore, SynQ mitigates misguidance from the pre-trained model's error by leveraging only soft labels for difficult samples.\nExtensive experiments show that SynQ provides the state-of-the-art accuracy, over existing ZSQ methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Network Quantization", "Zero-shot Quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/91621b0f70174054afe9a2fd478ef758a5103a8c.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/904b0d26cee98226db03f9452245d22ae502135c.zip" }, "title": { "value": "SynQ: Accurate Zero-shot Quantization by Synthesis-aware Fine-tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2seVGyWZOX
SR$^2$: BOOSTING 3D LARGE LANGUAGE MODEL WITH SPATIAL RELATION REASONING
main
Withdraw
3D Large Language Model;Spatial Relation Reasoning;3D Segmentation
applications to computer vision, audio, language, and other modalities
Zhenhua Ning;Zhuotao Tian;Shaoshuai Shi;Daojing He;Guangming Lu;Wenjie Pei;Li Jiang
~Zhenhua_Ning1;~Zhuotao_Tian1;~Shaoshuai_Shi1;~Daojing_He1;~Guangming_Lu2;~Wenjie_Pei1;~Li_Jiang3
5;5;5;5;6
4;3;4;3;4
2;2;2;3;3
2;2;2;2;3
2;2;3;3;3
5.2
3.6
2.4
2.2
2.6
0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness. I especially expect the authors can address my concerns about the motivation and the trade-off between the efficiency and the performance." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The motivation is good. Indeed directly reasoning everything is hard due to the lack of dataset and we should decompose the complex reasoning tasks into simpler tasks. The paper is also well-written. The experiment result also demonstrates that the effectiveness of the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed a spatial relation reasoning method to tackle the problem of point-cloud reasoning task. Instead of doing reasoning in a one-stage end2end manner, the authors adopt a strategy of first get the target-relevant objects in point-cloud and then reason the relationships between the target objects. The experiment results demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Based on above strength especially about the motivation, I would say however the proposed method seems to be too heavy. I like motivation that first locate the objects then infer the relationship. But I think the method looks very heavy and redundant. it seems that it does not necessarily call the heavy 3D-VLLM twice. It should be able to directly run an efficient 3D VLLM to locate the objects then leverage the localized 3D position for directly reasoning the relationship instead of using complex tokens from features.\n\nBesides, if just look at baseline vs. baseline + SR2, the proposed method does not improve the performance significantly. 
I would also attribute the slight improvement to the redundant design since maybe the super-point grouping introduce more noisy information. More importantly, I found that the baseline the authors use already achieves very significant improvement compared to other methods. In that case, it seems that using better LLM and more advanced vision encoders are more important compared to the motivation of decomposition.\n\nI would also recommend the author compared the latency for all the experimented baselines. Again, I like the motivation so I do expect that with the new proposed \"two-phase paradigm\", we can use more efficient models to achieve better performance instead of simply calling a heavy model twice while not improving much performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The supplementary materials should be in a separate file, but the author seems to have included them at the end of the main file." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problem studied in this paper, i.e., improving the 3D-LLM with spatial reasoning, is important and well-motivated.\n\n- This paper is well-organized and easy to follow.\n\n- Contributing a benchmark named 3D ReasonSeg for evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors aim to strengthen relational reasoning capabilities in 3D environments. The Spatial Reasoning framework is proposed to mimic human reasoning behavior. A new benchmark is constructed for more specific training and evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In my view, the proposed framework is a two-stage sequential reasoning process, where stage one detects relevant objects and stage two reasons on these sampled objects. Such a pipeline is quite straightforward, lacking some technical contributions. \n\n- I believe 3D spatial information such as 3D coordinate and geometry information is fundamental in distinguishing 3D tasks from 2D tasks. However, how to better leverage such 3D spatial information to improve 3D-LLM's spatial reasoning is still unexplored. \n\n- Fig.1 is somewhat blurry, making it difficult to distinguish the objects clearly.\n\n- Besides the positional relationships between objects, I believe the geometric shapes and relative sizes of objects at varying scene scales are also crucial for 3D spatial reasoning, which is ignored in this work." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I suggest the authors to address the questions raised in the weakness section during the discussion period" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The pipeline makes sense for me. Intuitively, it would be good to decompose a complex spatial reasoning problem into 2 different stages, involving both coarse-grained and fine-grained steps.\n\n2. The teaser figure is clear to demonstrate the paper's motivation and major method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method to improve the spatial reasoning capability of 3D MLLM. The pipeline consists of two steps: identify all relevant elements first and then determine target among them. The authors have also set up a new benchmark named 3D ReasonSeg. They claim the proposed dataset can more comprehensively evaluate different models' capability in terms of complex spatial reasoning. Experiment have shown the proposed method improves the performance of base model on several datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors have set up a new benchmark and claim that the proposed new benchmark can provide a more comprehensive evaluation in terms of the 3D spatial reasoning capability of the models. It would be better if the authors can have a table to summarise the different between the proposed dataset compared previous ones to make the contributions and differences more clear.\n\n2. As in table 1, the improvement of adding SR^2 is not significant - only about 1% for most of the metrics. It would be more convincing if more improvement is brought by the proposed pipeline." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The viewpoint can heavily influence 3D object relationships, as the definition of 'left' and 'right' depends on the user's perspective. How do the $SR^2$ method and the 3D ReasonSeg dataset account for such viewpoint dependence? This is a core consideration in 3D scene understanding, especially regarding object relationships.\n2. 
How do other 3D multi-modal large language models perform on the 3D ReasonSeg dataset?\n3. Given that the pre-train dataset includes general datasets like ScanQA, ScanRefer, ScanNet200, 3D-LLM, and 3D ReasonSeg, how can we be sure that the performance superiority over other methods is not simply due to the varied pre-train datasets?\n4. Can you provide some failure cases from the $SR^2$ method? These would help us better understand the characteristics and limitations of the $SR^2$ method." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The spatial relation reasoning module employs a 2-step design tailored for 3D scene understanding, effectively capturing complex object relationships. The paper is commendably clear and easy to follow, with experiments validating the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new spatial relation reasoning method tailored for 3D scene understanding tasks and introduces the 3D ReasonSeg dataset. The spatial relation reasoning approach demonstrates potential effectiveness in enhancing scene understanding." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The experimental results indicate that the improvement brought by $SR^2$ is relatively marginal. Specifically, the performance gain is only 0.1 on ScanRefer Acc@50 and 1.5 on the 3D ReasonSeg dataset.\n\nMinor issue:\nInconsistent terminology: The $SR^2$ method is inconsistently referred to as SPR in L227 and L295." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In line 169: \"Subsequently, the Q-Former compresses the scene’s information into several latent queries $q_l$\". What is the definition of $q_l$? Is it learnable parameters or extracted from the 3D representation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written and easy to follow.\n2. The SRR framework is well motivated and interesting.\n3. The performance of the method on three mainstream benchmarks is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a 3D reasoning segmentation method and a corresponding benchmark. It first proposes a baseline reasoning segmentation model following LISA. Then the base model is improved by the presented SRR to segment the target from coarse to fine. The authors collected data to train the model to first segment relevant objects and then segment the target by focusing on the priors. 
Experimental results on 3 benchmarks validate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The improvement after incorporating SRR is not so significant on most metrics according to Table 1. Considering this point, I think the efficiency of SRR should be provided, e.g., additional inference time, memory footprint, which can demonstrate a comprehensive tradeoff.\n2. In Table1, there is no other method reported on 3D ReasonSeg benchmark. The authors should implement some representative methods on this for fair comparison." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nning2024sr,\ntitle={{SR}\\${\\textasciicircum}2\\$: {BOOSTING} 3D {LARGE} {LANGUAGE} {MODEL} {WITH} {SPATIAL} {RELATION} {REASONING}},\nauthor={Zhenhua Ning and Zhuotao Tian and Shaoshuai Shi and Daojing He and Guangming Lu and Wenjie Pei and Li Jiang},\nyear={2024},\nurl={https://openreview.net/forum?id=2seVGyWZOX}\n}" }, "abstract": { "value": "Recent research in point cloud perception has achieved considerable progress in enhancing scene understanding by means of vision-language alignment through large language models (LLMs). However, existing methods may still encounter challenges in handling complex instructions that require accurate spatial reasoning, even if the 3D point cloud data has provided detailed spatial cues such as size, position, and orientation for identifying the targets.\nTo tackle this issue, this study introduces a new 3D multi-modal LLM framework, Spatial Relation Reasoning (SR$^2$). This framework is designed to strengthen relational reasoning capabilities in 3D environments. SR$^2$ mimics human reasoning behavior by first broadly identifying all relevant elements and then carefully examining them to determine the target.\nIn addition, as current datasets may not comprehensively evaluate the complex spatial reasoning capabilities of various models, we propose a new benchmark named 3D ReasonSeg that consists of 25,000 and 4,152 high-quality samples for training and evaluation respectively.\nBoth quantitative and qualitative experiments demonstrate that SR$^2$ and 3D ReasonSeg effectively endow 3D point cloud perception with stronger spatial reasoning capabilities, and we hope that the proposed SR$^2$ and 3D ReasonSeg can serve as a new baseline and benchmark for future work. The code and model will be made publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Zhenhua_Ning1", "~Zhuotao_Tian1", "~Shaoshuai_Shi1", "~Daojing_He1", "~Guangming_Lu2", "~Wenjie_Pei1", "~Li_Jiang3" ] }, "authors": { "value": [ "Zhenhua Ning", "Zhuotao Tian", "Shaoshuai Shi", "Daojing He", "Guangming Lu", "Wenjie Pei", "Li Jiang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Large Language Model", "Spatial Relation Reasoning", "3D Segmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "ning|sr^2_boosting_3d_large_language_model_with_spatial_relation_reasoning" }, "pdf": { "value": "/pdf/4d02aa48ec200cd729efb41d752a111b2b2cfd3d.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SR$^2$: BOOSTING 3D LARGE LANGUAGE MODEL WITH SPATIAL RELATION REASONING" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2snKOc7TVp
VisualAgentBench: Towards Large Multimodal Models as Visual Agents
main
Active
Large Multimodal Models;Agents;Evaluation
datasets and benchmarks
6;6;6;8
4;5;4;5
3;3;3;3
3;2;3;3
3;3;3;3
6.5
4.5
3
2.75
3
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you clarify the role of bounding boxes and object tags in Figure 7? Does this mean that objects and tags must be visible in the input images so that the simulator can recognize and interact with these objects by their tag names? In Section 5.1, the authors discuss the use of object labels in embodied environments. How exactly does the agent operate when no object label or tag is provided?\n\n2. To ensure ease of use, what practice does VAB provide? unified API access or modular code structure across different task environments? More details on engineering side for easy usage could be beneficial." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Comprehensiveness: The main strength of this paper is its comprehensiveness in benchmarking Large Multimodal Models (LMMs) as visual agents. The authors introduce VisualAgentBench (VAB), a benchmark that covers a wide range of real-world application scenarios by including five major task categories: embodied agents, GUI-based agents, web agents, gaming agents, and visual design agents. This breadth makes VAB a thorough evaluation tool, enabling a more holistic assessment of LMMs' capabilities across different domains rather than focusing on a single application area.\n\n- Extensive Experiments: The paper demonstrates substantial experimental rigor by benchmarking 18 different LMMs, encompassing both proprietary and open-source models. This extensive testing provides a solid foundation for the insights presented, which shed light on various LMM challenges, such as visual grounding and error recovery. These experiments allow for more reliable comparisons between models, offering valuable insights into how different LMMs perform in complex, interactive tasks. The conclusion on ReAct framework is also interesting.\n\n- Insightful Analysis: Through the VAB benchmark, the authors provide some useful observations on the current state of LMMs as visual agents. They highlight specific limitations in visual grounding, action planning, and error handling across various environments, which helps to pinpoint areas for future improvement in LMM design. While these insights are not groundbreaking, they add value by identifying practical challenges that developers and researchers may encounter when deploying LMMs in real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces VisualAgentBench (VAB), a benchmark designed to evaluate and train LMMs as visual agents in diverse, realistic scenarios, including embodied, GUI, and visual design tasks. 
VAB provides a unified, standardized framework for assessing LMMs across multiple domains, synthesizes high-quality multimodal data using a mix of programmatic solvers, LMM bootstrapping, and human demonstrations, and benchmarks 18 LMMs, uncovering both strengths and limitations in real-world task performance. Key insights include challenges in visual grounding, planning, and error recovery, offering a valuable testbed to push LMMs toward more adaptable and practical visual agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Insufficient Explanation for VL Model Performance: Some vision-language models perform poorly without adequate explanation. For instance, the paper doesn’t explore why certain models achieved low scores, leaving questions about the benchmark’s application across models.\n- Unclear Role of Visual Information in Certain Tasks: The paper lacks clarity on how specific tasks, such as those in Minecraft, leverage visual information effectively and whether VLM is genuinely necessary for all actions. For instance, Minecraft actions like \"Teleport\" don't inherently require visual information since they can execute without reference to the visual state, raising doubts about the added value of VL models in such contexts. Clarifying how the benchmark ensures each action necessitates visual input, as opposed to pure language model decision-making, would help demonstrate the benchmark’s relevance and justify the use of VL models over text-only approaches in specific environments.\n- Ambiguities in Figure Interpretation and Process Flow: Figures like Figure 2 could benefit from clearer annotations or explanations. The figure includes multiple input and output connections but lacks a clear process flow or indication of sequential dependencies, making it challenging to follow the intended agent behavior across rounds." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Does the physical dimension of robot arm given in Gibson? As shown in Figure 1 Round 3, I don't think the grasp banana is feasible given its lying on the far end of the countertop" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed benchmark provides a unified “function-call” action space as well as diverse task domains for benchmarking the agent ability of LMMs\n2. Experiments and ablation are solid. A wide range of both commercial and open-source LMMs are evaluated on the proposed benchmark. Ablations of input image prompting(labels, SoM), reflection(injecting error) and planner(ReAct w/ & w/o Thought) are conducted." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose to apply large multimodal models to visual agent tasks, which is mored grounded than existing VQA tasks.\nThe authors collected 5 datasets including 1 for robotics in simulation, 1 for game playing, 2 for GUI manipulation and 1 for web page design.\nThe authors design a function calling-based action space for each task.\nThe agent tasks are created by first generating a bunch of templates with placeholders and then instantiating these templates.\nAction trajectories for each task are collected by 1) human-written scripts(for web GUI tasks) 2) prompting existing LMMs like GPT-4o 3) human demonstration.\nThe authors collected 746 test cases and 4482 training trajectories across 5 tasks and benchmarked 18 proprietary and open-source LMMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No clear advantage over existing benchmarks. There are plenty of existing benchmarks for both high-level planning for robot manipulation in simulation like RoboVQA as well as for GUI agent tasks like Webshop-V and VisualWebArena. The proposed visual agent benchmark is surely more grounded than VQA benchmarks like MMMU, but I don’t see what’s the real contribution is if compared with domain-specific benchmarks. \n2. Low quality training trajectories. Take the GUI agent for instance, the proposed VAB-WebArene-Lite uses code script-based trajectory collection, which is well-known for its limited diversity compared with real-world human web browsing action trajectories. \n3. The function calling action space biases toward LMMs with a strong coding ability so that some LMMs unfairly got low scores(like Qwen-VL-Max and Gemini-1.0-Pro, which both do a good job for pure VQA benchmarks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. When training open LMMs, the paper says that they only use the vision input of the latest turn. Does this mean each turn in the training trajectory is treated as an independent sample (with previous turns provided in context) instead of training as a multi-turn conversation sample?\n2. For evaluating LMM APIs, what do you mean by \"Vision Input image won’t be kept in the dialog\"? Wouldn't the images that appear in the previous turns in one conversation automatically be kept?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed benchmark is a good complement to current multimodal and agent benchmarks to evaluate LMMs in challenging interactive scenarios.\n2. The proposed benchmark has standardized environments with good consistency and reproducibility.\n3. The paper also provides training trajectories for SFT.\n4. 
The experiments are comprehensive, revealing problems of current models, and the analysis provides good insights." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a multimodal benchmark VisualAgentBench to evaluate LMMs as agents performing interactive tasks. The benchmark includes three scenarios based on five datasets/environments: Embodied (VAB-OmniGibson, VAB-Minecraft), Graphical User Interface (VAB-AndroidLab, VAB-WebArena-lite), and Visual Design (VAB-CSS). Besides benchmark data, the authors also provide additional task trajectories for training. All the task instances in the training and testing data are constructed by prototyping and instantiation. This paper applies a mix of three strategies to collect training trajectories according to the characteristics of different datasets. Experiment results show that the proposed benchmark is challenging for current LMMs and further SFT could improve the performance. The paper also conducts an analysis of visual grounding and planning to provide insights for the future development of LMM agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The number of training data is still very limited. The paper does not show whether it is possible to scale up training data in the proposed environments in an efficient way.\n2. There is no analysis and experiments to verify whether the proposed environments could be effectively used for RL training.\n3. It would be helpful to train some non-LLM specialist models in each environment using RL/IL and report their performance as a reference.\n4. After fine-tuning LMMs with the collected training data, authors should also evaluate their general multimodal abilities on other multimodal benchmarks. Also, the authors should explore whether it is possible to maintain the original abilities while improving the performance on the proposed benchmark after SFT.\n5. The authors should provide some trajectories of models tested for better illustration." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I appreciate the diversity of tasks explored, from embodied agents to visual design. I believe that general-purpose LMMs that can perform well on a wide range of tasks are essential, and such benchmarks are necessary. Interaction through text/vision-based environment feedback is an important aspect of the paper. I also appreciate the scale of evaluation in the paper, and the finetuning of open LMMs." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a benchmark for evaluating LMMs across a wide range of task: including proposing (1) standardized interfaces, prompting, and data formats, (2) a strategy for creating valid test sets, (3) multitask multi-environment trajectory train sets, (4) benchmarking of 18 open-sourced and closed-sourced LMMs, (4) analyses of LMMs' abilities for grounding and planning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Can you elaborate on how the task description, action spaces, few-shot demonstrations, and notices for each environment are formatted as prompts? How are visual outputs of the environment passed in as text in the interaction rounds for embodied and GUI agents? Could you elaborate on how error recovering works in planning? Interested in seeing experiments on error recovery behavior across all models, or in general, some evaluation on the interaction rounds instead of final success rate only (e.g., average number of steps to recover from an error). \n2. I’m also interested in seeing more detailed analyses on the tasks that these models fail on. For example, which “prototypes” lead to failure? Does each model fail in a way that is consistent with one another (e.g., is there correlation between the accuracy on each prototype/subtask?) Does finetuning an open LMM help on the grounding aspect more, or the planning aspect more?\n3. On a high-level, it would be great to have a quantitative metric established in this benchmark that systematically measures performance on grounding vs reasoning, instead of as an analysis only. Also, related to the first point, quantitative evaluation on interaction behavior (perhaps some proxy metric on progress to the goal)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024visualagentbench,\ntitle={VisualAgentBench: Towards Large Multimodal Models as Visual Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2snKOc7TVp},\nnote={under review}\n}" }, "abstract": { "value": "Large Multimodal Models (LMMs) have ushered in a new era in artificial intelligence, merging capabilities in both language and vision to form highly capable visual agents that are postulated to excel across a myriad of tasks.\n However, existing benchmarks fail to sufficiently challenge or showcase the full potential of LMMs as agents in complex, real-world environments. \n To address this gap, we introduce VisualAgentBench (VAB), a comprehensive and unified benchmark specifically designed to train and evaluate LMMs as visual agents across diverse scenarios in one standard setting, including Embodied, Graphical User Interface, and Visual Design, with tasks formulated to probe the depth of LMMs' understanding and interaction capabilities. \n Through rigorous testing across 9 proprietary LMM APIs and 9 open models (18 in total), we demonstrate the considerable yet still developing visual agent capabilities of these models. \n Additionally, VAB explores the synthesizing of visual agent trajectory data through hybrid methods including Program-based Solvers, LMM Agent Bootstrapping, and Human Demonstrations, offering insights into obstacles, solutions, and trade-offs one may meet in developing open LMM agents. 
\n Our work not only aims to benchmark existing models but also provides an instrumental playground for future development into visual agents." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Multimodal Models", "Agents", "Evaluation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e3320a956b79264c56385a9d16355dc66276aacc.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "VisualAgentBench: Towards Large Multimodal Models as Visual Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2soZBUoG3n
STRUCTDROP: A STRUCTURED RANDOM ALGORITHM TOWARDS EFFICIENT LARGE-SCALE GRAPH TRAINING
main
Active
Efficient Training;Randomized Algorithm
learning on graphs and other geometries & topologies
3;3;3;8
4;3;3;4
1;2;3;3
1;2;2;3
2;3;3;4
4.25
3.5
2.25
2
3
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Novelty Clarification: Can the authors clarify how StructDrop differs fundamentally from existing methods like DropNode combined with instance normalization? What are the unique contributions that set this work apart?\n\nTheoretical Analysis: Is there a theoretical basis for why uniform sampling of columns and rows, along with instance normalization, maintains model performance? Providing theoretical justification or proofs would strengthen the validity of the approach.\n\nComparison with Other Baselines: Why were more recent methods for efficient GNN training not included in the comparisons? For instance, methods involving quantization, advanced graph sparsification, or other sampling techniques. Including these would provide a better context for evaluating StructDrop's effectiveness.\n\nImpact of Instance Normalization: Could the authors provide a deeper analysis of the role of instance normalization? Specifically, how does it mitigate the variance introduced by random sampling, and what is its impact on training dynamics and final model performance?\n\nApplicability to Other GNN Models: Have the authors tested StructDrop on attention-based GNNs or other architectures with different message-passing schemes? If not, what challenges do they anticipate in applying StructDrop to these models?\n\nGuidelines for Sampling Ratio: Is there an optimal range for the sampling ratio that balances efficiency and accuracy? How sensitive is the method to this hyperparameter, and how should practitioners choose it in different scenarios?\n\n\nWhile the paper addresses an important problem in GNN training efficiency, the current form lacks sufficient novelty and theoretical grounding. The method seems to be an incremental improvement over existing techniques without providing significant new insights. To enhance the contribution, the authors should:\n- Strengthen the Theoretical Foundation: Provide theoretical analyses or proofs explaining why the proposed method works and under what conditions it is effective.\n- Compare with Stronger Baselines: Include comparisons with more recent and relevant methods in efficient GNN training to demonstrate the advantages of StructDrop convincingly.\n- Deepen the Analysis of Instance Normalization: Offer a detailed exploration of how instance normalization contributes to the method's success, possibly with ablation studies or theoretical explanations.\n- Discuss Limitations and Applicability: Provide a balanced discussion of the method's limitations and applicability to a broader range of GNN architectures.\n- Provide Implementation Details: Include more information on hyperparameters, implementation specifics, and possibly share code to enhance reproducibility.\n\nBy addressing these points, the paper would offer a more substantial contribution to the field and better meet the standards of a high-impact conference. 
I can increase my score to 5 based on the rebuttal." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Simplicity of Implementation: The proposed method is straightforward to implement, involving uniform sampling of columns and rows in the adjacency matrix and the application of instance normalization.\n\nEmpirical Performance: The experimental results show that StructDrop can achieve significant speedups in training time while maintaining comparable accuracy to baseline methods on several datasets and GNN architectures.\n\nPractical Motivation: The paper addresses a practical problem in training efficiency for large-scale GNNs, which is of interest to the research community and industry practitioners dealing with big graph data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces StructDrop, a structured random sampling algorithm aimed at improving the efficiency of training Graph Neural Networks on large-scale graphs. Traditional GNN training is computationally intensive due to the message-passing mechanism, particularly the SpMM. Prior methods like top-k sampling, DropEdge, and DropNode attempt to reduce computational costs but often suffer from inefficiencies due to the overhead of reconstructing sparse matrices and can lead to underfitting.\n\nStructDrop proposes to address these issues by uniformly sampling and removing entire columns (and their corresponding rows) from the sparse adjacency matrix, effectively reducing the computational complexity of SpMM without the need for costly sparse matrix reconstruction. To mitigate the variance and distribution shift introduced by random sampling, the authors incorporate instance normalization after the approximated SpMM operations. The method aims to balance computational efficiency with model performance. The results suggest that StructDrop can achieve up to 5.29× end-to-end speedup with a similar accuracy compared to standard GNN training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Lack of Novelty: The method appears to be a combination of existing techniques—specifically, dropping nodes (similar to DropNode) and applying instance normalization. The paper does not sufficiently differentiate StructDrop from these prior methods in terms of novelty.\n\nInsufficient Theoretical Justification: There is a lack of theoretical analysis explaining why uniform sampling combined with instance normalization effectively preserves model accuracy while reducing computational cost. The paper would benefit from theoretical insights or proofs to support the empirical findings.\n\nBaselines: The experimental comparisons are primarily against older methods like DropEdge and DropNode. The paper does not compare StructDrop with more recent or advanced methods for efficient GNN training, such as graph sparsification techniques, quantization methods, or other modern sampling strategies.\n\nLimited Analysis of Instance Normalization: The role of instance normalization in mitigating the effects of random sampling is not thoroughly analyzed. 
The paper lacks detailed experiments or theoretical explanations demonstrating why instance normalization is essential in this context.\n\nQuestionable Acceleration Claims: The claimed acceleration may not be as significant in practice because the latency reduction from the proposed method could be overshadowed by other bottlenecks in GNN training. Additionally, the paper does not discuss whether the latency improvements are due to algorithmic efficiency or simply hardware optimizations that might not generalize across different environments.\n\nMissing Discussion on Limitations: The paper does not explore potential limitations of StructDrop, such as its performance on extremely large graphs, its impact on memory usage, or scenarios where the method might not provide significant benefits." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you clarify the difference between the proposed method and previous layer-wise sampling methods?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written. The proposed technique is simple and clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a sampling method to accelerate the neighbor aggregation of graph neural network. The main observation made by the authors is that importance sampling leads to the sampling of same column-row pairs across training iterations. The authors proposed uniform sampling to overcome the problem and show better performance compared to importance sampling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited novelty. The proposed sampling is very similar to the well-known layer-wise sampling technique for GNNs [Huang et al. 2018, Zou et al. 2019]. The sampling of the adjacency matrix rows corresponds to the sampling of neighboring nodes in a layer. While the authors claim that the proposed sampling technique can be \"seamlessly combined with previous sampling methods\", the difference is unclear to me. In fact, I feel that the proposed technique can be precisely expressed within the previous layer-wiser sampling framework. \n\n\n2. The experiments are insufficient in terms of GNN models and data graphs: \n- The authors evaluated their techniques with GCNs. Is the proposed technique applicable to attention-based models? \n- The graphs used are small. It will be more convincing to evaluate on larger graphs where sampling is indeed beneficial. \n\n3. Lacks technical depth. Sampling column-row pairs to speed up matrix multiplication is a well-known technique. 
It seems the main contribution of this paper is the experimental observation that importance sampling leads to under-fitting, and naive uniform sampling performs better in practice. The paper will be stronger if the authors can provide some theoretical insight." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper's main contribution is training acceleration. However, unlike top-K sampling, which benefits from a high cache hit ratio, uniform sampling only reduces FLOPs, which is insufficient. The authors should explore more advanced sparsification techniques that better leverage hardware properties, such as the memory hierarchy. \n\n2. The analysis of how distribution shift occurs and how instance normalization mitigates this issue lacks clarity. Additionally, the authors should explain why they chose instance normalization over layer normalization. \n\n3. A more comprehensive analysis of how various graph dropout techniques impact training and generation (Appendix E) would be beneficial.\n\n4. Please also address the questions in the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The proposed method is hardware-friendly and can achieve large speedup with negligible accuracy loss. \n\n2. This paper introduces instance normalization to alleviate the distribution shift after sampling, which effectively maintains accuracy. \n\n3. The proposed method provides significantly more acceleration with similar or better accuracies compared to previous baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose StructDrop, a random dropout technique for sparse matrix-matrix multiplication (SpMM) in both the forward and backward processes of Graph Neural Networks (GNNs). StructDrop applies instance normalization following each SpMM to mitigate the training shift due to random column-row pair dropping. Experimental results demonstrate that StructDrop achieves less training time with similar accuracies across different GNN architectures and GNN mini-batch training algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method combines dropping source nodes and instance normalization, which is relatively straightforward and may not significantly contribute to the GNN community. The justification of system-wise speed up and improving generalization is not sound because it seems a localized optimization and does not consider several systemic aspects in methodology and training (see below). \n\n2. This method is limited to SpMM GNNs and cannot be applied to scatter-gather GNNs like GAT. 
Could the authors discuss the applicability of StructDrop to other GNN architectures beyond SpMM-based ones, and comment on potential ways to extend the approach to scatter-gather GNNs?\n\n3. The paper claims that SpMM is the major bottleneck, consuming 70–90% of the total runtime. However, it overlooks cross-device data transfer as another bottleneck in mini-batch training on large-scale graphs where a single GPU cannot store the entire training data. Consequently, the proposed technique might not achieve significant speedup in these scenarios. Could the authors discuss how StructDrop would perform in scenarios where cross-device data transfer becomes a significant bottleneck, such as in mini-batch training on very large graphs? Are there ways the method could be adapted or combined with other techniques to address this issue?\n\n4. The claim that DropNode and DropEdge operations are bottlenecks and that replacing them with StructDrop can achieve more than 2 times speedup is questionable. A runtime analysis of these operations with GPU implementations by DGL is necessary. The authors should compare the runtime of DropNode/DropEdge to SpMM under varying sparsity. Moreover, even if these runtimes are significant, the latencies of DropNode/DropEdge can be easily hidden as they are independent of GNN training. Could the authors provide a detailed runtime analysis comparing StructDrop, DropNode, and DropEdge operations using GPU implementations (e.g., from DGL), including comparisons of their runtimes to SpMM under varying sparsity levels? Additionally, could they discuss how the potential for hiding DropNode/DropEdge latencies impacts the overall speedup claims?\n\n5. The baselines used in this paper are weak in terms of data augmentation and training acceleration. Stronger baselines are needed for a more comprehensive comparison. \n\n6. A wider range of large-scale datasets with diverse statistics is required. Current results indicate that the speedup is highly correlated with graph density, with StructDrop achieving significant speedup only on datasets with substantially large average degrees. A thorough discussion of the work's limitations is necessary. Could the authors include experiments on additional large-scale datasets with varying graph densities and other properties? Additionally, could they provide a more comprehensive discussion of how graph properties impact StructDrop's performance, and what the limitations of the approach are for different types of graphs?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "I am a little curious why wouldn't random sampling be one of the first things people try." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The finding of the reason Top-k sampling leads to lower accuracy is both intuitive and well supported by evidence.\nThe proposed solution is also intuitive and apparently effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an alternative way of speeding up SpMM during GNN training. The SOTA method (top-k sampling) involves picking row-column pairs that have the highest norm product. The interesting finding is that this tends to select a substantially similar subset of pairs in consecutive epochs and thus lead to under-fitting and lower accuracy. The proposed solution uses random sampling which shows good accuracy as well as speedup due to reduced workload in SpMM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "N/A" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024structdrop,\ntitle={{STRUCTDROP}: A {STRUCTURED} {RANDOM} {ALGORITHM} {TOWARDS} {EFFICIENT} {LARGE}-{SCALE} {GRAPH} {TRAINING}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2soZBUoG3n},\nnote={under review}\n}" }, "abstract": { "value": "Graph neural networks (GNNs) have gained considerable success in graph-based learning tasks, yet training GNNs on large graphs is still inefficient. The root cause is the graph-based sparse operations are difficult to accelerate with commodity hardware. Prior art reduces the computation cost of sparse matrix based operations (e.g., linear) via sampling-based approximation. However, two under-explored pain points still persist in this paradigm. Inefficiency Issue: The random-based sampling approaches have the non-zero entries randomly distributing over adjacency matrix, which slows down memory access process and is difficult to accelerate with commodity hardware. Under-fitting Problem: The previous sampling methods only utilize the same subset of nodes during the training, which may cause the under-fitting problem on other remain nodes. Aiming to systematically address these two pain points, we propose StructuredDropout, a.k.a, StructDrop. This method involves the selective random sampling of columns and rows from a sparse matrix for computation. Comprehensive experiments validate the efficiency and generalization of our framework: StructDrop achieves up to 5.09x speedup for a single sparse operation and 5.29x end-to-end speedup with negligible accuracy loss or even better accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Efficient Training", "Randomized Algorithm" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c13664d6712a6a17a6236cf76753b5eb45cf9ca0.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "STRUCTDROP: A STRUCTURED RANDOM ALGORITHM TOWARDS EFFICIENT LARGE-SCALE GRAPH TRAINING" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2tIyA5cri8
Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models
main
Active
reinforcement learning;in-context learning;representation learning;sparse autoencoders (SAEs);large language models (LLMs)
applications to neuroscience & cognitive science
3;5;6;8
4;1;3;4
1;2;3;4
3;2;3;4
4;2;3;3
5.5
3
2.5
3
3
0.113228
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you please provide clarification re: weaknesses 2 & 3?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This is an excellent paper. It asks a very interesting question and provides compelling evidence for the conclusion that Llama represents TD error and uses it to solve RL problems in-context. The section on successor representations was a welcome surprise in section 5, and offered more evidence for TD learning, even absent any rewards. The paper was also quite easy to follow and laid out the argument in a very natural way. I don't have any major complaints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates whether Llama 3 70B has internal representations that support temporal difference learning. First, it demonstrates that Llama can solve RL tasks significantly better than chance. Next, it trains a sparse autoencoder (SAE) and finds features correlated with TD error. Finally, it causally intervenes on these features to show that in-context RL performance degrades without those specific TD features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Only minor weaknesses.\n\n1. In the background section on RL, TD is presented for a fixed policy, and then the paper switches to Q-learning, assuming the policy chooses \\argmax_a Q(s,a). But this will change the policy as the Q function is updated, so it's not technically the same setting.\n2. It was a bit unclear what \"control lesion\" referred to in Fig. 2F. And more generally, I was not familiar with the \"lesion\" terminology, so a brief definition would be welcome. I assume it's a form of activation patching?\n3. I would have liked slightly more explanation regarding \"clamping\" the activations. I assume this means setting them to a specific value, but how is that different from deactivating them (i.e. clamping them to zero)? Is the purpose of clamping the activations to show degraded, unchanged, or improved performance?\n4. Line 458, mangled sentence \"our study is, we have explored\"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Some small additional comments and questions I had:\n- In the definition of the Q function in Section 2 (Methods, at page 2), shouldn't there be a conditioning on the initial state and action inside the expectation? Also, shouldn't the sum start from $t=0$ instead of $t=1$?\n-In Section 3, you claim that Llama 3 most likely implements \"classic\" Q-Learning rather than myopic Q-learning based on the negative log-likelihood. However, in Figure 2, looking at the correlations, it seems that the myopic Q-learning has in general comparable if not higher correlations to the latent representations. Couldn't this suggest that the model is implementing the myopic algorithm instead? Furthermore, is the difference in negative log-likelihood statistically significant?\n-In Figure 5, what do the horizontal lines in subplots B & C represent?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think the paper is well written and the setting and the experimental details are generally well explained. The contributions are also clearly stated. Furthermore, as far as I can tell, the presented experimental methodology is also sound. Although it is a known fact that Transformers can potentially perform In-Context RL, especially if trained for it, it is the first time, to the best of my knowledge, that a mechanistic analysis is conducted on a model which was pre-trained on next token prediction. In addition, even if the methods used (e.g. SAEs) are already well established in the mechanistic interpretability literature, it is insightful to see how they can be successfully used also to better understand how LLMs solve In-Context RL. Hence, even if the problem of In-Context RL is well studied in the literature and the interpretability methods used are also well established, overall I think the presented results shed more light on the inner workings of how LLMs can solve RL tasks in-context, which can be significant and insightful for the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a mechanistic analysis of internal activations of the Llama 3 70b model during three different in-context reinforcement learning tasks. In particular, the authors use Sparse Auto-Encoders (SAEs) to generate latent space representations of the residual streams and show how they correlate with TD Errors and Q-values of Q-learning agents trained to solve the same tasks. Furthermore, they show that relationship between the latent representations and the TD Errors is causal by intervening on the latent representations, which causes a degrading in the performance of the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The main weakness of the paper is that being an experimental work, I find the number of experiments conducted to be a bit limited. I think that more experiments should be conducted to further support the contributions of the paper (I saw that the authors mention this in future works/limitations, but I think the current paper would benefit from more ablation to make the evidence stronger). 
In particular, I suggest that the authors (as they also mention) should try to repeat the experiments they present with different models (at least one more) to prove that their results hold in general for \"big enough\" models. This would be really insightful since it would tell us that different models, even if trained differently, learn similar representations or make use of similar strategies to solve tasks. Furthermore, I think it would be insightful to conduct experiments on larger environments to better understand both to what extent these models are capable of performing In-Context RL and to analyze if, even at larger scale, these models still make use of TD Errors and Q-Values to solve the task.\n- One minor concern regards the extent of the novelty of the work: as I mentioned above, although I agree with the authors that it is the first time (to the best of my knowledge) that it was shown that models trained on next-token prediction perform In-Context RL exploiting TD Errors, there are already quite a few works exploring TD Learning in Transformers (both at a theoretical and experimental level). Furthermore, the methodology used for the mechanistic analysis is also already well established in the mechanistic interpretability literature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is way out of my expertise and hence I cannot provide a meaningful review." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. In the plots on max correlation with values/errors (e.g., Fig. 2c, 2d, 3, 4b, 4c, etc.), is the correlation computed with the value/error of the action predicted by the LLM at the given state? If yes, then it would be valuable to check whether there are features that correlate with the value/error of non-optimal actions. This could help in distinguishing whether the LLM is actually implementing TD-learning or the max-point episode algorithm provided above.\n2.
Can you explain how the NLL score is computed? I couldn't find it in the appendix either. Particularly, are you computing the log probabilities of the Q-learning agent by doing a softmax over actions using the Q-values?\n3. Are you using any discount rate for the Grid World Task? If yes, please provide it." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The study evaluates its hypothesis through a series of tasks to substantiate its empirical claims.\n2. Intervention experiments with the features confirm their causal roles.\n3. The writing is clear and easy to understand. However, some details are missing. See the questions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper looks for evidence of whether the Llama3-70B model can simulate the TD-learning algorithm for solving RL tasks. The authors evaluate this using simple toy RL environments. They train different SAEs for the different tasks and find features that correlate highly with TD-errors and Q-values. They confirm that these features are causal in predicting actions by performing interventions on such features. Based on this evidence, the authors conclude that the LLM is simulating the TD-learning algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main objection to the paper is that there is a simpler alternative hypothesis that could equally explain all of the results. Given the simplicity of the task, the LLM could be implementing the following algorithm to solve the tasks:\n\n\nStep 1: Keep track of the maximum points for each episode in the context.\n\nStep 2: Predict the actions from the episode that has the maximum points.\n\n\nThis algorithm is simple to implement for the LLM given previous works on LLMs implementing a greater-than circuit [1] and induction heads [2]. Also, for the Two-Step Task, the first 7 episodes are provided by using a random policy, which should cover all the 4 different trajectories possible in the task.\n\nThe features that the authors find using SAEs could be features that are tracking the maximum points across episodes. These features will have a high correlation with Q-values, and are also causal, so interventions on them should show similar results as shown in the paper.\n\nI recommend that the authors conduct experiments designed to refute this hypothesis. See the questions for some suggestions on experiments that can be performed.\n\nReferences:\n\n[1] How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model. https://arxiv.org/abs/2305.00586\n\n[2] In-context Learning and Induction Heads. https://arxiv.org/abs/2209.11895" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "SAEs uncover a temporal-difference learning algorithm used by Llama for in-context reinforcement learning."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024sparse,\ntitle={Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2tIyA5cri8},\nnote={under review}\n}" }, "abstract": { "value": "In-context learning, the ability to adapt based on a few examples in the input prompt, is a ubiquitous feature of large language models (LLMs). However, as LLMs' in-context learning abilities continue to improve, understanding this phenomenon mechanistically becomes increasingly important. In particular, it is not well-understood how LLMs learn to solve specific classes of problems, such as reinforcement learning (RL) problems, in-context. Through three different tasks, we first show that Llama $3$ $70$B can solve simple RL problems in-context. We then analyze the residual stream of Llama using Sparse Autoencoders (SAEs) and find representations that closely match temporal difference (TD) errors. Notably, these representations emerge despite the model only being trained to predict the next token. We verify that these representations are indeed causally involved in the computation of TD errors and $Q$-values by performing carefully designed interventions on them. Taken together, our work establishes a methodology for studying and manipulating in-context learning with SAEs, paving the way for a more mechanistic understanding." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "reinforcement learning", "in-context learning", "representation learning", "sparse autoencoders (SAEs)", "large language models (LLMs)" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5ab6b9fe938380ea4531fb5725f1db72e9167ffd.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Sparse Autoencoders Reveal Temporal Difference Learning in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2uPZ4aX1VV
Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning
main
Active
Goal Conditioned Reinforcement Learning;Factor Interactions;Factored State;Hindsight Experience Replay;Counterfactual
reinforcement learning
5;5;5;8
3;3;4;4
3;3;3;3
3;2;3;3
2;3;3;3
5.75
3.5
3
2.75
2.75
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- While the authors use a learning-based dynamics model to infer the interaction, it can be clearly distinguished from existing work that utilizes other approaches. For example, [1] utilizes proprioceptive state changes to distinguish contact.\n- The explanation of mixture distribution on L189 wasn't clear. How could it mix two distributions with a multiplication factor?\n- The discussion on the limitation of this work can make readers better understand of the method. For example, authors can mention the domain where interaction is actually prohibitive (e.g., drone navigation)\n\n#### Minor typo\n- I believe L187 should be $d_{\\pi}$\n\n### References\n[1] Manuelli and Tedrake, \"Localizing external contact using proprioceptive sensors: The Contact Particle Filter\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The introduction of the *Null counterfactual interaction assumption* could be a important contribution, improving sample efficiency across various domains, particularly in manipulation tasks where interaction is minimal.\n- The method details engineering practices to make the approach both manageable and efficient.\n + This includes null state inference with a dynamics model and predicting null operation.\n- The paper presents a rich set of environments, and design choices for these environments, etc." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to leverage the null assumption to filter out states without interaction between the agent and object, improving the sample efficiency of GCRL. The approach begins by using a learned dynamics model to identify null states—where the next state remains the same in the absence of a specific state. It then keeps those trajectories where the agent directly interacts with the object, training the agent with hindsight relabeling. This approach shows comparable or superior sample efficiency in both 2D dynamic and 3D manipulation environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Scalability to high-dimensional state**\n + How is the state space defined across all environments? Assuming the entire environment has a low-dimensional state space, I’m curious how it computes the difference between states (Eq. 3) and infers the null state (Eq. 4) in a high-dimensional case (e.g., image).\n + From my understanding, inferring the null state should have a complexity of $O(n^2)$ based on state dimensionality, which may limit scalability in high-dimensional state spaces. However, L263 mentions a time complexity of $O(1)$. 
Could the authors clarify this?\n \n- **Dependence on hyperparameters**\n + The method distinguishes null states based on prediction error (Eq. 3), but the appropriate setting of this hyperparameter could vary depending on the environment and task.\n + Moreover, certain states, even within an environment or task, may have more complex dynamics than others. In such cases, how does the method define a single $\\epsilon_{null}$?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness sections." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The notion of null counterfactual is interesting.\n- The paper manages to devise a full algorithm from this notion, and shows practical gains in object-centric robotic domains.\n- Goal-conditioned RL is an important area of research, and using null counterfactuals for data augmentation is a promising direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers a notion of null counterfactual: a cause object is interacting with a target object if in a world where the cause object did not exist, the target object would have different transition dynamics. Using this definition, the paper proposes ways to simulate virtual rollouts and perform data augmentation by leveraging null counterfactuals. Hindsight Experience Replay plays a key role in the algorithm, and the algorithm seems to inject some compositionality into hindsight replay. Toy tasks and robotic tasks are considered for evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This method seems relatively limited to object-centric domains, where the dynamics between objects is relatively simple.\n- Certain set-based architectures (such as PointNet and some versions of GNN) might not work in general domains to model dynamics.\n- The simulated nulling procedure and the filter criterion feel very heuristic and specific to the considered domains." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.
On page 5 around line 221, how exactly does the extra action column work with the core \\( n \\times n \\) submatrix corresponding to the states? It appears that the interaction is defined around a pair of states. I also have the same confusion with Figure 2.\n\n2. On page 5 around line 232, the mention of the vector \\( \\mathbf{V}^k \\) would need more context. It seems to be a vector to zero out a column of the interaction matrix \\( \\mathbb{B} \\), but it is not very clear. How is it related to the property that not all tasks exhibit (line 233), and what exactly is that property?\n\n3. How should we deal with cases when there are very few trajectories satisfying the interaction criterion?\n\n4. In Table 1, it is listed as accuracy, but it seems like lower values are better, which is a bit confusing." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem setup is well motivated, and the proposed algorithm extends HER, an important technique in goal-conditioned RL, to settings where it doesn’t work well, and is effective. The presentation from the background of HER to the proposed method is smooth and well thought out, except for a few minor places that can use some polish." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the context of goal-conditioned RL, building on top of Hindsight Experience Replay, the paper proposes a filtering method that aims to improve the efficiency of learning. Under the proposed definition of interaction that is based on the change of the transition probabilities under null counterfactuals, a masked forward dynamics model is learned to identify interaction (NCII). Then the method filters the trajectories to be relabeled and only keeps those in which the agent interacted with the target (NInt). The effectiveness of NCII and the improvements of NInt are verified by empirical analysis on simulated environments compared with established methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The null operation in Equation 3 depends on the threshold \\( \\epsilon_{\\text{null}} \\). This is an important part of the algorithm. Discussion on how to choose it and an ablation on the sensitivity of this threshold would make the analysis more comprehensive. More specifically, we are interested in answering the following questions (actionable feedback below):\n - How sensitive is NCII to the choice of the threshold?\n - Does one threshold work across different environments in Table 1, or does each environment variant require a different threshold?\n \n Figures showing how the accuracy of NCII varies across a range of thresholds for the environments, or for one variant from each environment the authors already considered in Table 1, would be compelling. Additionally, for a few selected environments that are sensitive to thresholds in the previous ablation, how does the episode reward change when NCII with different thresholds is used in HInt? This second ablation may not be necessary if NCII is shown to be robust across a range of thresholds in the previous one. The range of thresholds should be selected by the authors to show if there are values on the left or right tail where the algorithm starts to break down and success rates start to fall off. Success rate is the metric. \n\n2.
Hindsight Experience Replay (HER) is an important baseline here. HER has several variants for how the goal is chosen, including “future,” “final,” and “episode.” It seems, though it’s not clear, that the HER implementation here refers to the default “final” variant. Expanding the baseline in Figure 4 to include other variants of HER, especially both the “final” and “future” variants, would make the comparison more comprehensive. This is particularly relevant as the performance difference between HInt and HER is small in a few environments in Figure 4, and a stronger variant of HER might change the gap here. This would entail running on the environments in Figure 4 and reporting on the already established metric, only this time under the alternative versions of HER goal selection strategies. \n\n3. In Equation 3, it appears that the logarithm is intended to apply to the entire subtraction; however, the syntax suggests otherwise.\n\n4. There is a typo on line 268, page 5: “using using.”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* It would be interesting to see wall-clock time comparisons with the baselines, as HInt adds quite a bit of complexity to them.\n\n* I would have expected an expectation over the goal in the RL objective in line 181.\n\n* The next paragraph (starting from line 183) is written as if the goal space were equal to the state space. However, in the rest of the paper this is not the case.\n\n* ‘min’ in equation (2) should be ‘max’.\n\n* Why is no absolute value taken in equation (3) when thresholding the difference of log probabilities?\n\n* In line 303, the filtering function is defined as a decision to reject a trajectory, while in appendix D it seems to be the decision to accept a trajectory.\n\n* I think (left) and (right) are switched in the caption of Figure 1." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The argument for an interaction-based inductive bias in HER is well motivated. Moreover, the interpretation of a deviating transition probability under a null counterfactual as an interaction between objects is intuitive and concise. The existence of a path from an action to the target state as a filtering criterion for HER is well founded in causality and illustrated well by figure 2.\n\nThe domains considered for the experimental evaluation are relevant and sufficiently established. Table 1 indicates that NCII is more accurate in detecting interactions than the considered baselines. The RL performance is demonstrated to benefit significantly from using HInt.\n\nThe writing in the main text is clear and the presentation is well structured."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Hindsight relabeling using Interactions (HInt), a variant of Hindsight Experience Replay (HER) that leverages interactions to filter candidate trajectories for relabeling. Drawing inspiration from causality, an interaction is defined as an instance where removing (or nulling) an object would have an impact on the next state of another object (Null Counterfactual Interaction Inference or NCII). Given object-centric representations, the paper proposes to learn a masked dynamics model which can predict the next state of an object conditioned on what other objects are present. An influence of object A on object B is then detected by comparing the difference of the predictions for B with and without A against a threshold. During training, interaction detection is amortized in an interaction classifier. The main proposed criterion for using a trajectory for HER is the detection of a path in the unrolled graph corresponding to interactions, leading from an action to a target object (hence, an influence of the action on the target object). Experiments in two abstract and three robotics-inspired continuous control domains show increased sample efficiency when using HInt. An analysis suggests that HInt filters out trajectories in which the target object does not move (in the Spriteworld domain)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In my opinion, moving too much crucial algorithm components to the appendix is a main weakness of the paper. The main text conveys the impression that it presents the whole algorithm except for some unimportant implementation details, and that this algorithm achieves good performance in practice. However, the content of Appendix C and especially Appendix D seem to be quite important, and are probably crucial for obtaining the empirical results that were reported.\n\nIn particular the filtering criteria presented in Appendix D deviate from the intuitive path-from-action-to-target definition in the main text. Moreover, a somewhat simplified and engineered criterion is then used for the experiments. Yet, it is only referred to as one of several “alternatives” in the main text (line 318). In my opinion, it should be made more clear what form of the algorithm is actually used for the experiments and which components are crucial for obtaining good performance. An ablation study with different filtering criteria would be interesting, for example.\n\nMy understanding, based on the last paragraph of appendix D, is furthermore that for the experiments, only interactions between the controlled object and the target object were detected and used as a criterion. This is a much simpler algorithm than what is presented in the main text and effectively uses domain knowledge (as it happens to be sufficient to consider such interactions in the chosen domains). Moreover, another hyperparameter thresholding the interaction frequency in a trajectory is introduced. Combined, this makes me question the claim that NCII is really more general than using heuristics (line 87). \nAs the algorithm used in the experiments is considerably simplified, it seems like running CAI [1] as a baseline is required. CAI simply estimates the influence of actions on the target object. 
It would be interesting to see how much HInt can add in terms of sample efficiency to this approach.\n\nThe content of Appendix C reads like quite a few tricks were needed to get HInt to work well. In particular the reweighing based on the log probability of a transition seems important and should therefore be mentioned in the main text.\n\nThe writing in Appendix D is sometimes a bit dense and hard to understand, for example the enumeration point “1. Non-passive”. I think there is potential for illustrating these strategies better.\n\n[1] Seitzer, Maximilian, Bernhard Schölkopf, and Georg Martius. \"Causal influence detection for improving efficiency in reinforcement learning.\" Advances in Neural Information Processing Systems 34 (2021): 22905-22918." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "First, introduces a method for inferring general factor interactions using a counterfactual test on learned models, then integrates interactions into hindsight relabeling to improve the sample efficiency of GCRL" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024null,\ntitle={Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2uPZ4aX1VV},\nnote={under review}\n}" }, "abstract": { "value": "Hindsight relabeling is a powerful tool for overcoming sparsity in goal-conditioned reinforcement learning (GCRL). While effective in some domains like navigation and locomotion, hindsight relabeling can struggle in object-centric domains. For example, suppose that the goal space consists of a robotic arm pushing a particular target block to a goal location. In this case, hindsight relabeling will give high rewards to any trajectory that does not interact with the block. However, these behaviors are only useful when the object is already at the goal—an extremely rare case in practice. A dataset dominated by these kinds of trajectories will make learning more difficult. On the other hand, much of the meaningful behavior is filtered through interactions such as pushing the block with the gripper. To address this issue, we introduce Hindsight Relabeling using Interactions (HInt), which combines interactions with hindsight relabeling to improve the sample efficiency of downstream RL. However, interactions do not have a general consensus statistical definition, and especially one useful for downstream GCRL. Therefore, we propose a definition of interactions based on the concept of null counterfactual: a cause object is interacting with a target object if in a world where the cause object did not exist, the target object would have different transition dynamics. We leverage this definition to infer interactions in Null Counterfactual Interaction Inference (NCII), which uses a “nulling” operation with a learned model to simulate absences and infer interactions. We demonstrate that NCII is able to achieve significantly improved interaction inference accuracy on both simple linear dynamics domains and dynamic robotic domains in Robosuite, Robot Air Hockey, and Franka Kitchen. Furthermore, we demonstrate that HInt improves sample efficiency by up to 4× in these domains as goal-conditioned tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Goal Conditioned Reinforcement Learning", "Factor Interactions", "Factored State", "Hindsight Experience Replay", "Counterfactual" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/05345cef5ecc7a10683accc14762cbc8146725bd.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3f12fc07fca05c1c7b626d268cb00994b472c92a.zip" }, "title": { "value": "Null Counterfactual Factor Interactions for Goal-Conditioned Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2uQBSa2X4R
Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning
main
Active
Robust reinforcement learning;benchmark;reinforcement learning;multi-agent reinforcement learning
datasets and benchmarks
3;5;6;6
4;4;4;3
2;3;4;2
2;3;3;2
2;1;4;2
5
3.75
2.75
2.5
2.25
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does Robust-Gymnasium handle continuous action spaces and high-dimensional state spaces?\n- Can the benchmark be used to evaluate the robustness of RL algorithms in partially observable environments?\n- What are the limitations of the current implementation of Robust-Gymnasium, and how might these be addressed in future work?\n- How does the benchmark compare to other existing RL benchmarks in terms of robustness evaluation?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Robust-Gymnasium offers a broad range of tasks for evaluating robust RL algorithms, covering various domains.\n- The benchmark is highly modular, allowing for flexible construction of diverse tasks and easy integration with existing environments.\n- It supports different types of disruptions, including random disturbances, adversarial attacks, internal dynamic shifts, and external disturbances.\n- The benchmark is designed to be user-friendly, with clear documentation and examples." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Robust-Gymnasium, a unified and modular benchmark designed for evaluating robust reinforcement learning (RL) algorithms. It addresses the lack of standardized benchmarks for robust RL by providing a platform that supports a wide variety of disruptions across key RL components, including agents' observed state and reward, agents' actions, and the environment. The benchmark includes over sixty diverse task environments spanning control, robotics, safe RL, and multi-agent RL. The paper also benchmarks existing standard and robust RL algorithms within this framework, revealing significant deficiencies in current algorithms and offering new insights. The code for Robust-Gymnasium is available online." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The variety of disruptions and the modular nature might make the benchmark complex to understand and use for some users.\n- The effectiveness of some robust RL algorithms might rely on the quality and quantity of offline demonstration data.\n- The performance of algorithms on the benchmark could be sensitive to hyperparameter tuning, which might not be straightforward." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Q1: In section 2.1, can you elaborate why maximization of the reward is over disturbed actions but not disturbed states? \n\nQ2: L213 “Not all task bases support every type of disruption.” Could you elaborate why not? What is the limitation? This answer should likely be added to the text. \n\nQ3: For Safety Gym, how do disturbances interact with the constraints? \n\nQ4: I am confused about the adversarial disturbance mode. The text states “Any algorithm can be applied through this interface to adversarially attack the process.” L301. Does that mean that there are no standard disruptors implemented and the user has to implement them themselves? \n\nQ5: Does the LLM for the adversarial disturbance mode require the user to run a local LLM? \n\nQ6: Are there any tasks that you believe become significantly harder by introducing the perturbations, so much so that they might be unsolvable now?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Clarity \na) The text uses clear language and is easy to follow. \nb) Figure 1 is very useful as it nicely summarizes environments, agents and disruptions and Figure 2 is a nice addition to describe the environment flow. \n\n2. Problem Motivation \na) I think the motivation for this problem is solid and we do need benchmarks that test real world robustness. Even if this benchmark is not perfect for that as it creates artificial disturbances, this might be the closest we can get with general solutions. I do think the benchmark solves a good problem the community is facing. \n\n3. Novelty \na) I am not aware of any benchmarks for robust RL that are very extensive lending credibility to the novelty of this benchmark. \n\n4. Experiments \na) While I am not familiar with some of the baselines, it seems that the evaluation is somewhat extensive. At least I believe it is sufficient to demonstrate that current algorithms fail on this benchmark which allows for new research to be done. \nb) I do appreciate the two setting evaluations of training and testing. I think it is crucial to demonstrate what happens when training works fine but disturbances occur during testing. This experiment highlights the importance of this work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes a new benchmark for robust reinforcement learning termed Robust-Gymnasium. The manuscript introduces a framework for MDPs under disturbances and models its benchmark after it. There are three types of disturbances: observation, action and environment disruptions. The paper outlines 60 standard tasks that can be used in the benchmark with these disturbances and provides an experimental validation using baselines from standard, robust, safe, and multi-agent RL demonstrating the utility of the benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Clarity \na) Overall, several sections are very wordy and or redundant, repeating lots of information but missing useful information early on. 
Some examples:\n* Sections 2.1 and 2.2 could be more concise; it feels like they are repeating the same thing multiple times when describing the disruptors. To remedy this, it might be good to consolidate the functionality and highlight specific disruptors in section 2.2. For instance, it is not clear to me what random noise on an environment disruptor means. I also don’t quite understand what “The environment-disruptor uses this mode to alter the external conditions of the environment.” entails.\n* The same goes for sections 3.2 and 2.2. Both sections address the design of disruptors and essentially repeat a lot of information. It seems easy to simply combine these two sections, which would also avoid confusion about how disruptors work. I understand that there is supposed to be a differentiation between the universal framework and the implementation, but nonetheless there would be lots of text that can be cut for clarity. \nb) I find that section 3.2 is missing crucial information. The section can likely be improved by adding additional information about the state-action space and how the different disruptors affect them for each environment. The space for this can likely be obtained by condensing sections 2.1 and 2.2. If action spaces are similar, it might be possible to cluster environments and add information about the action spaces per cluster such as “these environments all use joint control with action spaces according to the number of degrees and an additional grasp action”. \n\n2. Related Work \na) In L 73, the text states “While numerous RL benchmarks exist, including a recent one focused on robustness to environment shifts (Zouitine et al., 2024), none are specifically designed for comprehensively evaluating robust RL algorithms.” I only skimmed the referenced work but it seems that the citation aims to do exactly that. However, they might have a less comprehensive benchmark. We can likely count them as X work but I believe a more thorough differentiation from this paper would benefit the presented manuscript. \nb) I appreciate the additional section on robust benchmarks in Appendix A. In general for benchmark papers, I find it beneficial to demonstrate the novelty of the benchmark by providing citations to related benchmarks to demonstrate that there is a gap in the existing literature. Here is a non-exhaustive list of possibly relevant recent benchmarks that might be of use as a starting point [1-11]. There are older benchmarks too such as ALE and DM Control for which I recommend standard citations. Such a differentiation obviously does not have to happen in the main text. \n\n3. Benchmark Feedback \na) “Notably, in our benchmark, we implement and feature an algorithm leveraging LLM to determine the disturbance. In particular, the LLM is told of the task and uses the current state and reward signal as the input” L302 - It seems quite wasteful to have to run a full LLM at every environment step, and it might be good to have simpler adversarial features that don’t limit usage to labs with lots of money for compute. The LLM feels a lot like using an LLM for the sake of the LLM. It is unclear to me why this choice was made rather than a simpler adversarial attacker. \nb) What I am missing is metrics other than cost and reward that are useful to determine whether one is making progress on this benchmark. Given two algorithms with the same performance, what lets us determine whether either of them is more robust?
I think providing useful metrics of interest would be good to make this benchmark stand out. For instance, reliability metrics such as those in [12] might be useful to measure. \nc) The second thing I am missing is guidelines on how to choose parameters for the disturbances. I think elaborating on what values are valid in section 3.2, as I mentioned before, and providing suggestions would be useful for standardized usage of the benchmark. For instance, it is unclear in section 4.3 why the attacks follow a Gaussian distribution and not a Uniform distribution. Is this more realistic? Maybe it is arbitrary, but then it should at least be stated earlier that this is recommended by the work (see the disturbance-wrapper sketch below). \n\n4. Experiments \na) It is unclear over how many seeds the experiments were conducted. Given the high variance in RL results in general [13], and the need for many experiments even without disturbances [14], we should conclude that more robust experimental evaluation is needed in Disturbed MDPs. For instance, 5 random seeds would definitely not be enough to draw meaningful conclusions from many of the provided graphs. \nb) It is unclear to me how the tasks were picked and why the evaluations are not incorporating all tasks for all baselines. Running all tasks with all baselines would definitely strengthen the argument for the necessity of the benchmark and avoid uncertainty about how to choose tasks. At least, there should be one experiment that runs one algorithm on all tasks to verify that all tasks are in fact still learnable. I understand that that is computationally costly, but I believe it is needed to verify the utility of the benchmark. \n\nMinor suggestions \n* In L156, L180, In Disrupted MDP -> In a Disrupted MDP\n* L192 and L197: for environment disruptor -> for the environment disruptor\n* L201 Disrupted-MDP allows disruptors to operate flexibly over time during the interaction process.\n\nOverall, I do think this work might constitute a good contribution. However, I think there need to be various adjustments for clarity. These are mostly things that require rewriting and not running any experiments. This includes consolidating text and providing insights into how to choose tasks, metrics, and disturbance parameters. The latter is especially important if the benchmark ought to provide a standardized basis. If these changes are made, I am willing to recommend acceptance. To make this a very strong publication, I think more extensive experiments to validate that all tasks are learnable are needed, and experiments would have to be run over a large number of trials to ensure statistical significance.\n\n[1] Maciej Wolczyk, Michał Zając, Razvan Pascanu, Łukasz Kuciński, and Piotr Miłoś. Continual World: A Robotic Benchmark For Continual Reinforcement Learning. NeurIPS 2021. \n[2] Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, and Peter Stone. LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning. NeurIPS 2023. \n[3] Tongzhou Mu, Zhan Ling, Fanbo Xiang, Derek Yang, Xuanlin Li, Stone Tao, Zhiao Huang, Zhiwei Jia, and Hao Su. ManiSkill: Generalizable manipulation skill benchmark with large-scale demonstrations. NeurIPS D&B 2021. \n[4] Alex Ray, Joshua Achiam, and Dario Amodei. Benchmarking Safe Exploration in Deep Reinforcement Learning. 2019.\n[5] Ossama Ahmed, Frederik Träuble, Anirudh Goyal, Alexander Neitz, Manuel Wuthrich, Yoshua Bengio, Bernhard Schölkopf, and Stefan Bauer. 
CausalWorld: A robotic manipulation benchmark for causal structure and transfer learning. ICLR 2021. \n[6] Jorge A. Mendez, Marcel Hussing, Meghna Gummadi, and Eric Eaton. CompoSuite: A compositional reinforcement learning benchmark. CoLLAs 2022. \n[7] Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander Clegg, Michal Hlavac, So Yeon Min, Vladimír Vondruš, Theophile Gervet, Vincent-Pierre Berges, John M Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, and Roozbeh Mottaghi. Habitat 3.0: A co-habitat for humans, avatars, and robots. ICLR 2024. \n[8] Theresa Eimer, André Biedenkapp, Maximilian Reimer, Steven Adriaensen, Frank Hutter, and Marius Lindauer. DACBench: A Benchmark Library for Dynamic Algorithm Configuration. IJCAI 2021. \n[9] Clément Bonnet, Daniel Luo, Donal Byrne, Shikha Surana, Sasha Abramowitz, Paul Duckworth, Vincent Coyette, Laurence I. Midgley, Elshadai Tegegn, Tristan Kalloniatis, Omayma Mahjoub, Matthew Macfarlane, Andries P. Smit, Nathan Grinsztajn, Raphael Boige, Cemlyn N. Waters, Mohamed A. Mimouni, Ulrich A. Mbou Sob, Ruan de Kock, Siddarth Singh, Daniel Furelos-Blanco, Victor Le, Arnu Pretorius, and Alexandre Laterre. Jumanji: A diverse suite of scalable reinforcement learning environments in JAX. 2024. \n[10] Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack Learning Environment. NeurIPS 2020. \n[11] Zhaocong Yuan, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati, and Angela P. Schoellig. Safe-control-gym: A unified benchmark suite for safe learning-based control and reinforcement learning in robotics. IEEE Robotics and Automation Letters 2022. \n[12] Stephanie C.Y. Chan, Samuel Fishman, John Canny, Anoop Korattikara, and Sergio Guadarrama. Measuring the Reliability of Reinforcement Learning Algorithms. ICLR 2020. \n[13] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. AAAI 2018. \n[14] Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments. 2018." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The provided overview in Figure 1 is good. \n- Sixty robust RL tasks are offered in this benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a robust reinforcement learning benchmark, designed to facilitate fast and flexible construction of tasks to evaluate robust RL. 
This benchmark provides a variety of robust RL tasks by adding perturbations to standard tasks from multiple RL benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper makes an effort to transform diverse RL tasks into robust RL tasks where environmental perturbations are considered. However, it might be of limited significance, since there are some existing benchmarks ([1], [2], [3], [4]) that allow adding disturbances to RL tasks to test the robustness of RL algorithms. Besides, it offers a limited technical contribution, as the main technical work is to add a wrapper to the existing RL benchmarks that implements disturbances. Therefore, I recommend rejection.\n\nI have some other concerns about the current version. \n- The authors state in the introduction that this is the first unified benchmark specifically designed for robust RL. This is a bit overstated, as RRLS focuses on evaluation for robust RL, and some other benchmarks allow for evaluating the robustness of RL algorithms.\n- In Section 3.2, the authors present several disruptors that are used in previous works. Providing citations for them is suggested. \n- A discussion of the benchmark's limitations is missing. \n\n\n[1] https://github.com/utiasDSL/safe-control-gym \n[2] RRLS: Robust Reinforcement Learning Suite \n[3] Datasets and benchmarks for offline safe reinforcement learning \n[4] Natural Environment Benchmarks for Reinforcement Learning" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Remarks: \n- Emphasize the introduction of the \"disrupted MDP\" by bolding its first mention.\n- There is a minor formatting issue on line 132 with a space before \"environment-disruptor.\"\n- Providing examples in the appendix on how to modify external parameters like wind would enhance usability." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well written.\n- The benchmark is an important contribution to the robust reinforcement learning community, offering a unified framework that fills a significant gap. It is comprehensive, covering a broad spectrum of robustness types, making it a valuable tool for evaluating and designing robust RL algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a robust reinforcement learning benchmark that addresses multiple types of robustness. These include robustness concerning the transition kernel, observation noise, action noise, and reward noise. The framework considers both random noise and adversarially selected worst-case noise. To generalize robustness, the concept of a \"disrupted MDP\" is introduced. 
The environments proposed are diverse, primarily involving robotics and continuous control tasks, covering both single- and multi-agent settings.\n\nAgents are evaluated on this benchmark across multiple tasks, using various baselines such as SAC and PPO for standard RL approaches. For robust RL with a nominal transition kernel, baselines like RSC are used. The paper also includes evaluations for robust learning under dynamic shifts (OMPO), state adversarial attacks (ALTA), visual distractions (DBC), safe RL (PCRPO and CRPO), and multi-agent RL (IPPO)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- M2TD3, a state-of-the-art baseline for robustness under model misspecification, is not cited. Its inclusion would strengthen the paper’s coverage of relevant baselines.\n- The explanation of adversarial disturbance via LLMs is interesting but could be more general. Instead of focusing on LLMs, the paper should emphasize the adversarial setup and consider an adversary as in two-player Markov games, with potential LLM integration as an example.\n- While the benchmark is nearly exhaustive, baselines like RARL and M2TD3 are missing. It is unclear how uncertainty sets can be built with the benchmark. Including examples in the appendix on constructing such sets, as proposed in the M2TD3 paper, would be beneficial.\n- The environments are primarily robotics-based, except for Gymnasium Box2D. Including use cases like autonomous driving or drone simulations would diversify the benchmark and offer more relevant challenges to the community, fostering the development of more general RRL algorithms.\n\nM2TD3 Reference: \nTanabe, T., Sato, R., Fukuchi, K., Sakuma, J., & Akimoto, Y. (2022). Max-Min Off-Policy Actor-Critic Method Focusing on Worst-Case Robustness to Model Misspecification. *Advances in Neural Information Processing Systems*." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024robust,\ntitle={Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2uQBSa2X4R},\nnote={under review}\n}" }, "abstract": { "value": "Driven by inherent uncertainty and the sim-to-real gap, robust reinforcement learning (RL) seeks to improve resilience against the complexity and variability in agent-environment sequential interactions. Despite the existence of a large number of RL benchmarks, there is a lack of standardized benchmarks for robust RL. Current robust RL policies often focus on a specific type of uncertainty and are evaluated in distinct, one-off environments. In this work, we introduce Robust Gymnasium, a unified modular benchmark designed for robust RL that supports a wide variety of disruptions across all key RL components—agents' observed state and reward, agents' actions, and the environment. Offering over sixty diverse task environments spanning control and robotics, safe RL, and multi-agent RL, it provides an open-source and user-friendly tool for the community to assess current methods and foster the development of robust RL algorithms. \nIn addition, we benchmark existing standard and robust RL algorithms within this framework, uncovering significant deficiencies in each and offering new insights. The code is available at the website." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robust reinforcement learning", "benchmark", "reinforcement learning", "multi-agent reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d97d5ceab8b930dda51cee3c51681a5a6bbca025.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Robust Gymnasium: A Unified Modular Benchmark for Robust Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2umZVWYmVG
Assessing Large Language Models for Valid and Correct Code Reasoning
main
Active
LLM Reasoning;Code Execution Reasoning
causal reasoning
3;3;3;6
4;4;4;3
2;2;3;3
2;2;2;3
2;3;3;3
3.75
3.75
2.5
2.25
2.75
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Why not use paths that always start from the beginning of the program?\n\n- Are there other explanations for RQ4?\n\n- Why restrict to HumanEval programs? \n\n- Did you explore other programming languages other than Python?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think the paper is bringing out an important research question.\n\nThe general idea of expecting that LLMs can emulate code if we want to use them for more general software engineering tasks is an interesting one. I would encourage the authors to continue along this research direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Code Execution Simulation (CES), a new benchmark aimed at advancing the evaluation of large language models (LLMs) in code reasoning, particularly for complex programming tasks. CES addresses limitations in existing techniques, which lack comprehensive flow sensitivity and diagnostic capability, by unifying output prediction with key intermediate state evaluations—focusing on flow-sensitive execution paths. CES prompts LLMs at essential decision points (e.g., loop variables, conditions) and leverages adaptive in-context examples for clarity, providing a scalable framework that supports diagnosing reasoning divergence and consistency across varying test coverages. Evaluating thirteen models, including GPT-4 Turbo, Gemini-1.5 Pro, CodeLlama, DeepSeekCoder, Magicoder-S, SemCoder-S, and StarCoder2, on the HumanEval dataset of Python problems, the study finds that while LLMs generally exhibit a high rate of valid reasoning steps (82.32%), their reasoning quality remains predominantly random (55.59%) or weak (41.69%), often falling short in complex flow-sensitive tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall I think the paper is not ready for publication yet. The writing of the paper could be improved in places. \nFor example, in Equation 1 the notation is not clear. CES and GT should have something to differentiate from the variables in the equation. In Figure 7, the radar plot and legend are unclear. \n\nThe definition of prime paths being between any two program points requires justification. Could the authors justify this decision. I can imagine that the there are a lot more inputs and dependencies at some intermediate point. An alternative that seems natural would be to consider acyclic paths from the start of a program to some intermediate point. This way the inputs are clearly defined as the inputs to the program. \n\nRQ4 is the most important part of the paper. However, the results are underwhelming currently. 
The fact that there is no correlation between an LLM correctly emulating a piece of code and the LLM doing well on the programming task for that same piece of code does not support the hypothesis of the paper. Are there other explanations for this observation?\n\nThough I agree with the authors that it would be better if the LLMs could also emulate the code, I do think this is neither necessary nor sufficient to be able to find bugs, as an example. A lot of humans also find bugs by just pattern matching based on their experience. \n\nI would recommend that the authors explore programs outside of HumanEval, perhaps also exploring other programming languages (C/C++, for instance). The reason being that these programs and programming languages are \"too simple\" and might not require detailed understanding of the program semantics. Perhaps using more complex C/C++ programs involving bitwise operations, pointer arithmetic, etc., and looking at tasks requiring a more detailed semantic understanding of the program (such as finding security vulnerabilities) might be more conducive to proving the hypothesis of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- It is said that valid reasoning is at 83.32%, yet this corresponds to only 30.79% correct predictions. Isn't this a bit misleading for the reader before looking into the definition of valid reasoning? Valid reasoning appears to be defined as anything that is not invalid reasoning, and invalid reasoning is not defined by whether the intermediate prediction results are wrong. So valid reasoning containing errors should not be a surprising thing, right?\n- Is there a typo in line 362 or line 20 about the number for valid reasoning? (83.32 vs 82.32)" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides a very thorough investigation of LLMs' capability for code execution. They not only provide a reasonable framework to define strong or weak code execution capability but also have detailed error analysis. They also investigate more than 10 models, from small to large and closed to open. This will be a valuable resource for readers interested in the capabilities of current LLMs.\n- It is interesting to study the \"invalid reasoning path,\" which they define as incorrect intermediate output but correct end results or branch selection, etc. It shows how the model, unlike a program, may not exactly follow the execution semantics at the current state and yet still get the final answer correct.\n- Many other insights are backed by results from many different models. 
For example, they also investigate the consistency of code execution ability across different test inputs that cover different paths, and show that most LLMs are not consistent: while they can execute some test cases successfully, they often still get other test cases wrong, even ones that go through the same path." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a Code Execution Simulation (CES) task to evaluate how well current language models understand and are able to execute code. The task is to simulate the execution of a program by following the execution trace and producing the intermediate states. They introduce two aspects that go beyond code execution result correctness: checking if the simulated execution trace deviates from the correct program execution trace, and identifying situations where the model gets the right answer through questionable means. They also investigate how consistently these models perform with different test cases covering different execution paths. They find that LLMs still struggle with execution, especially for tasks that are more control flow sensitive." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper thoroughly investigates many aspects related to the execution path, like strong vs. weak reasoning, etc. However, it is not clear if the impact of variable values is discussed. For example, it isn't clear how things like large intermediate integers or long lists would affect the CES results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please provide a short statement or clarification to the points raised above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and nicely structured. The figures and tables are well formatted and legible.\n- The story is engaging and the tackled topic interesting.\n- The proposed method promises improvements over previous work regarding the ability to pinpoint errors made by LLMs during reasoning at a lower inference cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The presented paper proposes a new method for assessing the capability of LLMs to predict intermediate states of program execution. The method picks specific/relevant parts of the program, such as branching conditions and loops, and prompts the LLM to predict the state at these lines during execution of the program with a specific input. The authors then analyze how well the LLM prediction aligns with the program state and use this to assess the capability of LLMs to correctly and consistently reason about program states and to diagnose at which point the LLM starts incorrect predictions." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Choice of predicted values\n\nThe proposed method compares to previous work that assesses the internal state of the program at other positions. It is not clear why exactly the proposed positions introduced in Sec 3.1. (branch taking, predicates, return value) specifically are the mainly relevant positions. The main argument appears to be that these are the most relevant values to detect and diagnose inconsistencies, and predicting further values would confuse the model.\n\nIt is possible that such assertions hold for the given dataset (or even more general programs) but I did not find any evidence pointing in this direction.\n\n2) Confusing Definitions of valid/invalid reasoning\n\na) No correspondence to \"consistency\"\n\nThe authors mark any reasoning as invalid (Sec 3.3.) if an intermediate state (i.e. predicate) is incorrectly predicted but the consequence is predicted correctly (i.e. the branch-taking based on the predicate). This appears to not accurately capture whether the _reasoning_ was indeed wrong, since the intermediate state and consequence could still be _consistent_ (i.e. for \"if p or not p:\", it does not matter what is predicted for p to correctly predict that the branch is taken) and thus not represent a case of incorrect reasoning. It could or could not be that in the given dataset the introduced invalid reasoning always constitutes incorrect reasoning, but such evidence is missing from the paper.\n\nb) Incorrect outputs are valid reasoning\n\nThe definition of \"valid reasoning\" includes (by definition) all instances where the model outputs incorrect output. This naming is confusing at best, since I would not expect that incorrect instances can constitute valid reasoning. As already mentioned in 2) this is due to a lack of evaluation of _consistency_ which I would consider indicative of reasoning.\n\n3) Weak performance at CES may imply subpar benchmark design\n\nIn Sec 5.2 the authors mention many cases of \"suspiciously correct\" outputs based on natural language reasoning and inconsistent with the produced code reasoning. My interpretation of this would be that the presented code evaluation is potentially unnatural and confusing to the language model and thus artificially reduces performance, where free-form reasoning in natural language allows the models to correctly derive a result. Interesting counter-evidence for such an interpretation would be that models also often override correctly reasoned code states with incorrect (i.e. biased through function names) natural language reasoning results.\n\nSimilarly in Sec 5.5. weak correlation of models in CES and other related program understanding tasks do not necessarily imply that models are subpar reasoners, instead it could also imply that CES is not a format in which models can effectively express their code understanding. \n\n\nThe following are some smaller points that left me confused after reading:\n- In Sec 5.1. the authors mention that there is no control for the coverage of test cases on programs. This appears weird, it would be interesting to somehow establish a controlled experiment for different path coverage. 
The detailed Figure 6 partially makes up for this.\n- Figure 7 is very difficult for me to parse, especially the legend, but also the choice of chart format and the choice of grouping (it might make more sense to overlay models on a triangular LO, CO, LC chart?)\n- Figure 8: The instruction to the model reads \"You are given a piece of Python code and its _output_\" while the model is clearly given _input_ below. I hope this is a typo; otherwise it might have implications for the presented numbers.\n- In Sec 5.2. \"Invalid Reasoning\" it reads \"[…] LLMs with good performance in valid reasoning also make more invalid reasoning\". This seems contradictory since reasoning is either valid or invalid, and the sum of the two should be constant; thus increasing one would necessarily decrease the other. Please do clarify what is meant by this statement." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Do the authors use the same prompts for different LLMs? How do the in-context learning examples affect the models' performance?\n- To what extent does CoT (Chain of Thought) contribute to the results? Considering the hallucination phenomenon in LLMs, could the authors perhaps sample the output multiple times to observe the model's pass@k results?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper presents a novel framework, Code Execution Simulation (CES), to assess code reasoning in LLMs. Code execution capability is an important aspect of evaluating LLMs.\n- CES's design is diagnostic. By defining the notions of valid and invalid reasoning processes, it can detect suspiciously correct output predictions under invalid reasoning.\n- CES uses a novel reasoning consistency metric to benchmark LLMs' code reasoning abilities as strong, weak, and random, by executing multiple tests per program with the same or different prime path coverage." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Code Execution Simulation (CES), a framework for evaluating code reasoning capabilities in large language models (LLMs). CES measures LLMs’ capacity to predict both output and intermediate program states, including loop variables, iterables, conditional predicates, and branches. Beyond prediction accuracy, CES can identify whether models follow a valid reasoning process and can determine their reasoning consistency by executing test cases with different prime path coverage. Through experiments on multiple LLMs, the authors find that while LLMs can achieve a high level of valid reasoning (82.32%), their reasoning strength remains inconsistent, often performing at random (55.59%) or weak (41.69%) levels." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The primary weakness in this work lies in using code execution simulation as a measure of a model’s code reasoning abilities, which is debatable. While I agree with the authors' choice of evaluating reasoning validity (valid or invalid) and reasoning consistency as indicators of reasoning ability, as they reflect the model's understanding of code logic, the execution process itself requires substantial computational accuracy and strict adherence to instructions. For example, executing `a = b * c` demands multiplication skills, and executing `x = a[5]` requires precise indexing. The relationship between these computational abilities and code reasoning capabilities remains a research question. A model can easily compute `2 * 3`, yielding correct outputs in simple cases, but as inputs scale in complexity, the model's computational skills are challenged. However, this does not necessarily imply a lack of logical understanding or reasoning capability regarding the code’s logic. Thus, code execution simulation is inherently complex, and the authors do not sufficiently discuss this in the paper.\n- The definition of the 'invalid reasoning process' is ambiguous. In Equation 4, the authors consider a compound property to be 'invalid' when it contains both correct and incorrect predictions. However, the example provided here involves the loop variable `o` and the loop iterable `zip(evens, odds)`. According to the definition given in Section 3.1, these two do not belong to the same property.\n- The authors found in Section 5.5 that CES seems to have no correlation with other coding tasks, but they did not analyze the reasons for this. Is it because CES or bug-related tasks cannot represent the model's code reasoning ability, or do they focus on different aspects of reasoning ability? The authors also did not use other code comprehension tasks, such as code summarization, etc.\n- It seems that there are several 'why' questions left unanswered in the evaluation. Why the predictions differed from ground-truth values? Why LLMs make suspiciously correct output predictions. The authors have relied solely on case analysis without providing quantitative data analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024assessing,\ntitle={Assessing Large Language Models for Valid and Correct Code Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2umZVWYmVG},\nnote={under review}\n}" }, "abstract": { "value": "Frontier large language models (LLMs) consider reasoning as first-class citizens: they learn to refine their reasoning process and try different strategies during training. Thereby, when prompted, can think through problems and respond better with proper reasoning. For programming tasks, this makes code reasoning a must. In this paper, we propose the task of Code Execution Simulation (CES) as a proxy for evaluating the code reasoning capabilities of LLMs. 
CES defines the notions of valid and invalid reasoning processes, which enable it to promptly (1) determine where the execution simulation diverges from ground truth for incorrect output predictions (essential to understanding limitations of LLMs in code reasoning) and (2) identify suspiciously correct output predictions (essential to understanding reasoning shortcuts, hallucinations, or potential data leakage). In addition to evaluating LLMs’ execution reasoning on a program with a single test, CES measures their reasoning consistency across tests with the same or different prime path coverage. This enables it to evaluate the code reasoning of LLMs on a spectrum: strong, weak, and random. Our results show that LLMs, to a great extent (82.32%), follow a valid reasoning process (resulting in 30.79% correct and 51.53% incorrect output predictions). However, their reasoning is mostly random (55.59%) or weak (41.69%), which explains their weakness in programming tasks that require flow- or path-sensitive program analysis to succeed." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM Reasoning", "Code Execution Reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e5d47a5ac15afea373e020cbc82f467da972c68d.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Assessing Large Language Models for Valid and Correct Code Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
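The reviews in this record repeatedly contrast an LLM's simulated trace with the program's actual execution. For reference, ground-truth intermediate states of the kind CES scores against can be collected with Python's tracing hooks; the sketch below is a minimal illustration under that assumption, and `trace_states` plus the toy HumanEval-style function are hypothetical, not part of the paper's released tooling.

```python
import sys

def trace_states(func, *args):
    """Run func(*args), recording (line number, local variables) at each executed line."""
    states = []

    def tracer(frame, event, arg):
        # Record only lines executed inside the target function itself.
        if event == "line" and frame.f_code is func.__code__:
            states.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, states

def below_threshold(numbers, t):
    # Toy task: are all numbers strictly below threshold t?
    for n in numbers:
        if n >= t:
            return False
    return True

result, states = trace_states(below_threshold, [1, 2, 4], 3)
# `states` now records the loop variable `n` and each branch decision,
# i.e., the ground truth a predicted trace can be scored against.
```

Scoring then reduces to comparing an LLM's predicted values at the prompted decision points (loop variables, predicates, return value) against these recorded states.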
2v405jBQ5X
Shape Assembly via Equivariant Diffusion
main
Active
Diffusion;Equivariant diffusion;Shape assembly
applications to computer vision, audio, language, and other modalities
3;5;5;6
5;3;3;3
3;3;2;3
2;2;2;3
2;2;2;3
4.75
3.5
2.75
2.25
2.25
-0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please elaborate what the difference is from Hosseini et al., 2023 \nPlease elaborate more why equation 4 is equivariant feature embedding?\nWhy the same shape with different R and T should have the same embedding in L242? The reviewer thinks that the embeddings should be dependent on not only shape but also R and T." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The major difference from the prior works is to design intermediate layers to make embedding equivariant. And, the performance on 2D and 3D puzzle dataset is better than prior works." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes 3D (or 2D) puzzle solver using only geometric information (with no textural information). The proposed method assumes that the piece consists of polygonal faces, straight edges and sharp corner. The puzzle problem is formulated as estimating rotation and translation Euclidean transformation for each corner that minimizes the loss function (MSE of noise and matching). The optimization is done by diffusion model process. The major novel contribution is to propose a layer that generates equivariant feature embedding. The proposed method presents better performance on 2D and 3D shape puzzles than prior works." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer concerns novelty of the paper. The reviewer is unable to understand major differences from Hosseini et al., 2023. The problem formulation, optimization and loss functions are very similar with slight modification. And, the reviewer is unable to find a connection between the equation 4 and equivariant feature embeddings. The explanation of core idea is ambiguous to the reviewer. \n\nSome tentative typos and unclear statements\n- L226, “In the sense of puzzle, it is considered …” what is ‘it’ mean? Unclear to understand the meaning of the statement.\n- Please verify, Line 85, “f(T(x)) = f(x) or f(T(x)) = T(f(x))”.\n- L322, in the equation “m” should be “l”?\n- The author did not explain “design a dataset generating framework” specified in the contributions L46.\n- The author did not explain “a novel evaluation metric” in detail specified in the Conclusion section.\n- There are more ambiguous statements." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The problem is challenging and highly ill-posed. \nThe dataset contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work addresses the challenge of reassembling randomly partitioned 2D and 3D shapes, relying solely on geometric features in pattern-free, irregular contexts. The authors propose a generative diffusion process that maintains roto-translational equivariance, demonstrating its effectiveness through experiments and ablation studies on various puzzle benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The model is trained mainly supervised by ground truths, while for this problem, a re-assembling loss should be able to drive self-supervised training. For such supervised networks, the outputs may overfit on training set. The generalization ability is not evaluated. \nIn Figure 3-4, comparisons on 2D puzzles show improvements by the proposed method are minor. \nColors in Figure 5-6 are hard to recognize, making the figures hard to read. \nThe method based on feature matching is much better than the proposed one, making me confused about the contributions and improvements. Other than memory and computational costs, feature matching based approaches seem much better." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. A new dataset for the task of 3D puzzle is proposed. This is an interesting dataset and could be useful for future research in this direction.\n\n2. Theoretical analysis and empirical results are both provided with additional ablation experiments also included.\n\n3." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper looks at the problem of shape assembly in the setting of geometric assembly in which assembly has to rely on geometric features since appearance features are not present. This is a challenging problem and has practical applications in computer vision, computer graphics and robotics. This paper proposes a method based on diffusion where the process of solving the shape assembly problem can be modeled as a diffusion process. Results show some improvements over the competing methods." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Table 5, the proposed method performs much worse compared to existing methods such as jigsaw. It is unclear what advantages the proposed method has over Jigsaw.\n\n2. The rendering style is a bit confusing. For example, in figure 6 it is unclear how many fractures this example has.\n\n3. It is unclear how the proposed method compares to other methods for the example in figure 6" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to previous section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation of imposing SE(3) invariance and equivariance in feature embedding is straightforward, and the way these constraints are injected is very novel.\n\n2. The theroetical analysis and extensive ablation studies are very solid, indicating the effectiveness of the design.\n\n3. The paper is well written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper utilizes diffusion models to tackle 2D and 3D shape puzzles problem. It proposes several features that make shape representations invariant across both 2D and 3D environments and provides extensive empirical experiments as well as solid theoretical analysis to prove the effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are some grammar mistakes and formatting issues in the paper, please polish the writing.\n\n2. Section 4.1 does not give the clear definition of the overcomplete representations $a_{i, j}$, I assume it's the arrangement parameter of each corner point?\n\n3. In Section 4.3 when introducing the anchor centering mechanism, the author does not define the notation $a_{p, 1}$, does it mean the arrangement parameter for the first corner point of anchor piece? Does this anchor remain consistent for all pieces?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024shape,\ntitle={Shape Assembly via Equivariant Diffusion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2v405jBQ5X},\nnote={under review}\n}" }, "abstract": { "value": "We tackle the problem of solving shape puzzles, that is, reassembling randomly-partitioned and scattered pieces of 2D or 3D shapes into an original shape. This task is challenging since it only relies on geometric features without rich visual information. Specifically, we are supposed that target shapes and their randomly-partitioned pieces are pattern-free and irregular. 
Existing methods tend to rely on specific constraints regarding piece shapes and neglect the consideration of invariance and equivariance. We propose learning a robust puzzle solver through a generative diffusion process in which the roto-translational equivariance holds. Experiments on 2D and 3D puzzle benchmarks including the Breaking Bad dataset demonstrate that our method successfully assembles given geometric pieces into a target shape. We also provide in-depth ablation studies showing the effects of our equivariant design and the components in our proposed framework." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion", "Equivariant diffusion", "Shape assembly" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/de11d4c351ca393e3aff0b28dc7b9de372e8c739.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Shape Assembly via Equivariant Diffusion" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2vHIHrJAcI
Revisit the open nature of open vocabulary segmentation
main
Active
Open vocabulary segmentation;Evaluation
applications to computer vision, audio, language, and other modalities
5;6;6
4;4;3
3;3;3
2;2;2
2;2;3
5.666667
3.666667
3
2
2.333333
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: The analysis in Section 3 appears disconnected from subsequent sections.\n\nQ2: In Figure 2, $\\mathbb{A}$ represents a set of predicted binary masks. How are the predicted masks in $\\mathbb{B}$ and $\\mathbb{C}$ derived from $\\mathbb{A}$? If they are matched to GT masks based on IoU using bipartite matching, it seems Figure 2 suggests that the number of predicted masks by the model exceeds that of the ground truth, which is not realistic. Additionally, predicted masks in $\\mathbb{B}$ and $\\mathbb{C}$ should not overlap according to $\\mathbb{C} = \\mathbb{A} \\backslash \\mathbb{B}$.\n\nQ3: The correlation between Algorithm 1 and Section 4 is weak: For example, (1) The CM is not referenced outside the Algorthm 1. (2) The calculations for the core evaluation metrics -- front, back, and errors -- are not represented in Algorithm 1 or any other equations. (3) How is the best threshold $\\tau^*$ used in Algorithm 1? \n\nQ4: What constitutes a good evaluation metric? The last sentence of the introduction (line 83 on page 2) implies that the authors equate higher performance values with better evaluation metrics, which is unreasonable. \nIn Figure 3, the authors seem to suggest that more stable evaluation metrics are preferable; however, this should also be compared with other metrics like Open-mIoU and SG-IoU." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The primary contention of this manuscript is to shift the focus of evaluation from textual to mask similarity in assessing OVS models. The authors have identified a gap in the current assessment metrics, which are deemed inadequate for evaluating OVS models, and have proposed a novel metric to address this issue." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study proposes new evaluation metrics for Open-Vocabulary Segmentation (OVS) tasks. A key limitation of evaluating OVS methods on fixed-category datasets is that traditional image segmentation metrics may misclassify visually similar objects as errors, even when they are semantically related but belong to different categories. This issue intensifies with an increasing number of category labels in the test dataset. This issue becomes more pronounced as the number of category labels in the test data increases. Previous research has addressed this challenge, resulting in improved metrics such as Open-mIoU and SG-IOU. The central premise of this work is to focus evaluation on mask similarity rather than textual similarity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The manuscript exhibits a lack of clarity and organization in its writing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. A significant concern is that the proposed evaluation protocol relies on having sufficient data to identify semantically similar categories. In real-world applications, if the training data lacks adequate masks to differentiate similar categories (e.g., \"sofa\" and \"couch\"), the protocol may struggle during testing. To address this, it would be helpful if the authors could analyze the performance of their method with limited training data or provide insights into the minimum data requirements necessary for effective improvement. Additionally, experiments or discussions on the robustness of data scarcity and the impact of potentially misleading information would strengthen the evaluation.\n\n\n2. While the authors' approach to handling ambiguities through the visual modality is quite interesting, it may be more intuitive to identify similar categories based purely on semantic meaning. For instance, using the text modality to assess semantic similarities could potentially provide greater improvements than relying solely on visual information. To explore this, it would be valuable for the authors to compare their visual-based approach with a text-based semantic similarity approach. Or add more discussions about the potential advantages and disadvantages of incorporating textual semantic information into their method." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Good motivation, authors pointed out the current OVS evaluation sets have many semantic similar categories, which may influence the training&testing stages of model, which further influence the inference ability of current OVS methods. Based on this, authors proposed a new evaluation protocols to alleviate this issue.\n\n2. The whole paper is relatively clear and easy to follow. \n\n3. Very comprehensive experiment results on multiple datasets and multiple OVS methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The performance of Open Vocabulary Segmentation (OVS) models will decrease as the query vocabulary size increases, especially when semantically similar category names are present, contradicting the original purpose of OVS. To address this, the authors proposed a mask-wise evaluation protocol based on match/mismatch between prediction and annotation mask pairs, avoiding forced category matching. Key innovations include reducing ambiguity and constructing an ambiguous vocabulary graph. Comprehensive experiments and analysis reveal numerous ambiguous categories in current OVS datasets. Utilizing the proposed protocols during the training and testing stages can help to improve the model’s zero-shot inference capability." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Writing Suggestions: \n\n1. In the Abstract, authors claim that OVS models perform better under the new mask-wise protocol needs further clarification. To make fair comparisons between the mask-wise and pixel-wise protocols, the authors should add more details about how they determine \"better\" performance. Providing such details would help readers understand the basis for this improvement claim.\n\n2. In the Abstract, the phrase “enhances zero-shot inference capabilities” likely refers to the capabilities of OVS models. Clarifying this would improve readability. \n\n3. Given the similarity between open-vocabulary segmentation and open-vocabulary semantic segmentation, the authors should add a brief section comparing these two concepts. Highlighting key differences in their applications or objectives would help avoid potential confusion and clarify the unique focus of their work.\n\n4. For Equation (5), the authors should provide more detailed motivation for choosing this to determine the best threshold, rather than simply listing the source. It would be helpful if they could explain why this method was selected over alternative approaches and how it specifically benefits their evaluation protocol.\n\n5. The equation at lines 324 to 327 is missing a number." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to weakness. It is important to give more experiments for ambiguous vocabulary graph and more comparsion." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents an interesting analysis on the openness of open-vocabulary semantic segmentation.\n\n2. The mask-wise evaluation protocol sounds reasonable.\n\n3. The experiments are conducted on multiple existing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper gives a deep observations on open-vocabulary semantic segmentation. To address the ambiguous category issue, the authors propose mask-wise evaluation protocol and a confusion vocabulary graph for open-vocabulary datasets. The experiments validate method defectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The quality of ambiguous vocabulary graph seems important for performance. Currently, the related experiments are not enough. I think it is better to provide more experiments to verify the quality of ambiguous vocabulary graph.\n\n2. The accuracy for front and back is not very clear. I suggest that the authors give an equation to explain it.\n\n3. The comparison of whether reducing ambiguities during training or not is necessary." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisit,\ntitle={Revisit the open nature of open vocabulary segmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2vHIHrJAcI},\nnote={under review}\n}" }, "abstract": { "value": "In Open-Vocabulary Segmentation (OVS), we observe a consistent drop in model\nperformance as the query vocabulary set expands, especially when it includes se-\nmantically similar and ambiguous vocabularies, such as ‘sofa’ and ‘couch’. The\nprevious OVS evaluation protocol, however, does not account for such ambiguity,\nas any mismatch between predicted and human-annotated pairs is simply treated\nas incorrect on a pixel-wise basis. This contradicts the open nature of OVS, where\nambiguous categories can both be correct from an open-world perspective. To\naddress this, in this work, we further study the open nature of OVS and pro-\npose a mask-wise evaluation protocol thatis based on matched and mismatched\nmask pairs between prediction and annotation respectively. Extensive experimen-\ntal evaluations show that OVS models consistently perform better under the pro-\nposed mask-wise protocol compared to the previous pixel-wise one. Moreover,\nanalysis of mismatched mask pair reveals that large amount of ambiguous cate-\ngories exist in commonly used OVS datasets. Interestingly, we find that reducing\nthese ambiguities during both training and inference enhances zero-shot inference\ncapabilities. These findings and the new evaluation protocol encourage further\nexploration of the open nature of OVS and broader open-world challenges." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Open vocabulary segmentation", "Evaluation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b8c5412e9dc44c716ed436302577c7e8d65811ae.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Revisit the open nature of open vocabulary segmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2vMGPrk0SW
FaceGPT: Self-supervised Learning to Chat about 3D Human Faces
main
Active
face reconstruction;vision language model;unsupervised learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5
4;4;5
2;2;2
2;2;2
2;2;2
3.666667
4.333333
2
2
2
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses above. Moreover:\n\n- Please provide a more comprehensive comparison with existing state-of-the-art methods for both 3D face reconstruction and text-driven face generation.\n- What unique capabilities or insights does the language component provide that purely vision-based approaches lack? Please provide concrete examples of how the VLM enhances face understanding beyond existing methods.\n- Do the authors believe that the generated faces of Fig. 3 accurately capture the input text descriptions? \n- What's the dataset used for the evaluation of Table 3? Are you comparing with SOTA? As said the results of Fig. 4 don't look very compelling as there are many details missing from the reconstructed faces and the identity is not well preserved." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The combination of VLMs with face-related tasks has not been explored in literature and in its current instantiation in this paper presents some amount of novelty. Moreover, training the VLM with a face reconstruction training objective in a self-supervised manner bears some degree of novelty." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes a method where a VLM is trained with LORA to be adapted for the task of 3D face reconstruction. The VLM is supposed to provide textual information describing the face and in the end of the text a \"face\" token is used to predict 3DMM face parameters." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, I cannot grasp the motivation behind the proposed work as in the end of the end day it boils down how to fine-tune a VLM for 3D face reconstruction. But there are already several state-of-the-art methods of high accuracy for this task. Similarly there are already several methods for text-driven face generation. It's not clear if the proposed method is any better than methods tailored to these tasks. Importantly, these are vision tasks so it is unclear why a VLM is needed and what extra capabilities are brought into play by using for solving these tasks. The paper fails to demonstrate some newly introduced capability regarding the understanding of human faces that we have seen before. The speculative face generation task is poorly described and the evaluations do not make a lot of sense. This can be illustrated by looking at the results of Fig. 3. Clearly the method has not really been trained successfully to produce high quality realistic faces corresponding to the textual descriptions used as input. Finally, even for face reconstruction the proposed method produces poor results as the visual results of Fig. 4 show.\nOverall, the paper fails to show why VLM is needed for traditional 3D tasks, does not introduce any new capability and also fails to show decent results for the tasks it's evaluated for" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Choice of 3DMM Model:\nWhy does the framework utilize an older 3DMM model like BFM instead of more recent models that can capture finer facial details?\n\n2. Reasoning Capabilities of VLMs:\nIs there empirical evidence to support that VLMs possess the reasoning capabilities to accurately interpret human faces? If not, why prefer this framework over specialized existing frameworks designed for such tasks?\n\n3. Reliability of VLM Outputs:\nThe framework presupposes that the VLM will consistently output the <FACE> token when analyzing a face. Are there instances where the VLM fails to produce a <FACE> token even when expected?\n\n4. Verification of VLM-Generated Descriptions:\nIs there a method to verify the accuracy of the descriptions generated by the VLM? [Ref. Lines 274-276]\n\n5. Training Methodology:\nThe approach of using descriptions generated from VLM to re-train the VLM for estimating 3DMM parameters appears circular, akin to using knowledge distillation within the same model. Is there a more effective method to accomplish this?\n\n6. Contribution of VLM to the Framework:\nTo what extent does the VLM contribute to the overall framework's effectiveness? Could similar results be achieved using simpler language models or the CLIP text encoder alone? [Ref. Lines 299-300]\n\n7. 
Necessity of Detailed Descriptions:\nIn scenarios such as \"Predict the face of a person who is excited about a surprise party\", it seems that a simple description of the expression (e.g., \"excited\") might suffice. If a human were asked to draw/imagine a face from this description, chances are they would simply draw/imagine a face with an \"excited\" expression on it. The additional narrative appears redundant. Do language models require this excess information to generate accurate facial expressions? Why do we really need the accompanying redundant information simply to generate a face with an \"excited\" expression? I made the same observation in the Fig.3 examples, where the faces only convey the main expression like \"surprise\", \"lost\", or \"angry\".\n\n8. Modeling complex expressions:\nCould the authors demonstrate complex expressions or combinations of expressions that existing models fail to capture to show the effectiveness of this framework?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well written and easy to follow.\n2. The paper proposed a framework that can leverage large VLMs to generate 3D faces from natural descriptions of emotions.\n3. The framework doesn't require any coupled text and 3D face data.\n4. The framework achieved good 3D face reconstruction results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a novel framework to use large VLMs to reason about 3D human faces from text/images by embedding the 3DMM face parameters into the token space. The framework is trained in a self-supervised manner using image-based reconstruction and differentiable rendering. The authors claim that the proposed framework is able to leverage the existing world knowledge in VLMs to achieve semantic reasoning capabilities for 3D face generation."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the definition of self-supervised learning according to the authors, and how does it differ from conventional interpretations?\n2. How does the paper's approach to training and data construction align with or deviate from traditional self-supervised learning methods?\n3. Can the utilization of off-the-shelf models for generating 3DMM data and textual descriptions from 2D face images be considered a form of supervisory signal?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper presents a valuable topic: constructing a unified model for generating 3D faces from both images and texts. Specifically, speculative face generation holds significant value in fields such as criminal tracking. The experiments also demonstrate the effectiveness of the constructed model in speculative face generation, explicit text-based 3D face generation, and image-based 3D face reconstruction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper fine-tunes a VLM (LLaVA-1.5-7B), unifying image-based 3D face reconstruction and language-based 3D face generation. Although the paper claims to be a self-supervised learning framework, the actual content indicates the use of supervision signals provided by off-the-shelf face reconstruction methods and VLM. It is effectively a supervised learning approach! Its loss function comprises two parts: the loss function for generating 3DMM output and the loss function for instruction-tuning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The core idea of self-supervised learning is to set up proxy tasks that allow the model to train and capture the intrinsic structure and features of the data in the process. Although the paper claims to use a self-supervised learning framework, there seems to be some deviation from the conventional definition of self-supervised learning.\nBased on the details of training and data construction in the paper, the method employed appears to be a straightforward supervised learning approach, similar to the instruction-based fine-tuning executed in papers like LLaVA. From the content on lines 193 and 236, it seems that the authors believe an algorithm can be considered self-supervised as long as it does not use manually annotated data. This perspective might reflect a different interpretation of the concept of self-supervised learning.\nAlthough the paper does not introduce manually annotated data, it utilizes off-the-shelf face reconstruction methods and VLMs to construct 3DMM data and textual descriptions from 2D face images. 
This effectively means that off-the-shelf models are being used to provide supervisory signals for training." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nwang2024facegpt,\ntitle={Face{GPT}: Self-supervised Learning to Chat about 3D Human Faces},\nauthor={Haoran Wang and Mohit Mendiratta and Christian Theobalt and Adam Kortylewski},\nyear={2024},\nurl={https://openreview.net/forum?id=2vMGPrk0SW}\n}" }, "abstract": { "value": "We introduce FaceGPT, a self-supervised learning framework for large vision-language models (VLMs) to reason about 3D human faces from images and text. Typical 3D face analysis algorithms are specialized and lack semantic reasoning capabilities. FaceGPT overcomes this limitation by embedding the parameters of a 3D morphable face model (3DMM) into the token space of a VLM, enabling the generation of 3D faces from both textual and visual inputs. FaceGPT is trained as a model-based autoencoder in a self-supervised manner from in-the-wild images. In particular, a dedicated face token is projected to 3DMM parameters and then rendered as a 2D face image to guide the self-supervised learning process through image-based reconstruction. Without relying on expensive 3D annotations, FaceGPT learns to generate 3D faces based on visual or textual inputs, achieving a competitive performance compared to methods that are specialized to each of these tasks. Most importantly, FaceGPT is able to leverage the world knowledge in VLMs to achieve semantic reasoning capabilities, allowing the model to perform speculative generation of 3D faces purely from subtle textual prompts that do not explicitly describe facial features. This opens a new way of generating 3D faces from subtle descriptions of emotions or general everyday situations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Haoran_Wang3", "~Mohit_Mendiratta1", "~Christian_Theobalt2", "~Adam_Kortylewski1" ] }, "authors": { "value": [ "Haoran Wang", "Mohit Mendiratta", "Christian Theobalt", "Adam Kortylewski" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "face reconstruction", "vision language model", "unsupervised learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "wang|facegpt_selfsupervised_learning_to_chat_about_3d_human_faces" }, "pdf": { "value": "/pdf/0822c55b3653ccf9f675f97774fdc667028ddf22.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "FaceGPT: Self-supervised Learning to Chat about 3D Human Faces" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2vaTZH31oR
Flex3D: Feed-Forward 3D Generation with Flexible Reconstruction Model and Input View Curation
main
Active
3D Generation;3D Reconstruction;Large 3D Models
generative models
5;5;5;6;6;6
4;4;5;5;3;5
2;3;3;3;3;2
3;1;2;3;3;2
2;2;4;3;3;3
5.5
4.333333
2.666667
2.333333
2.833333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "How many views are used for calculation metrics for Flex3d in Table 1? More than baseline methods? If so, is the comparison fair?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The writing is well-organized and easy to follow.\n- This work proposed to select condition images from the generated multi-view images based on the quality, thereby improving the 3d reconstruction quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work focuses on feed-forward 3d generation. Following previous work, this paper adopts a synthesis-then-reconstruction method, where a multi-view diffusion generates multiple images at different camera views, and a regression model then reconstructs 3d representation based on multi-view images. The main contribution the author claimed is the view selection trick that curates generated multi-view images based on the back-view quality and consistency. Also, the proposed method uses 3DGS as a 3d representation for rendering efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This work basically follows previous work like Instant3D and replaces the triplane NeRF representation with triplane Gaussian (as in [1]). The main contribution thus lies in the candidate view selection. It is evident that the reconstruction quality would improve with better generated multi-view images, but the key is how to define 'better' and automatically filter the better images. The proposed method adopts SVM trained with 2,000 manually labeled data to select back view, but the paper does not describe how to label the data and does not give the criterion. Also, 2,000 images are small and restricted by the bias of labelers. This would lead to very biased and uninterpretable labels for training a good classifier. How about the success rate of the selection model? How to determine whether it is a good classification? There is a lack of sufficient analysis and experiments that support the claim. There are similar concerns to the consistency selection model. Why do you choose manually crafted rules for selection, like using DINO, SVM, LOFTER? Are they the best choices? Any insights? \n- Based on the aforementioned comment, I would suggest the authors to compare with automatic selection with large multimodal model like GPT4V. It is straightforward to give the grid of images to the large model, and ask it to select images. Would it be better than the proposed method?\n- There is a lack of comparison with diffusion-based baselines that predict 3d via 3d diffusion or 3d dit directly.\n- The proposed method comprises two stages. Which stage does the improvement mainly come from? 
The multi-view generation and selection, or the flexible reconstruction model? In Fig. 4 and Table 1, do the baselines use the same multi-view images as the proposed method? I would suggest evaluating the two stages separately. Specifically, you may apply the view selection to baseline methods to check whether there are consistent improvements. Also, use the same selected multi-view images to evaluate different reconstruction models.\n- For ablation results like Tables 3, 4, and 5, do you use Blender-rendered images or generated images as the multi-view condition? Could the data simulation address the domain gap of the data? What would the metrics in Table 5 be using GT multi-view images rather than generated multi-view images?\n\n\n[1] Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-organized, with a clear delineation of the contributions and methodologies. The progression from problem identification to solution proposal is logical and easy to follow.\n\nThe key contributions of this paper are two-fold, and they appear effective according to the experimental analysis:\n1. candidate view generation and curation: Introduction of a multi-view generation strategy that produces a diverse set of candidate views from varying azimuth and elevation angles, followed by a selection process that filters views based on quality and consistency.\n2. flexible reconstruction model (FlexRM): Development of a robust 3D reconstruction network capable of ingesting an arbitrary number of input views with varying viewpoints. FlexRM efficiently processes these views to output high-quality 3D Gaussian representations using a combination of tri-plane features and 3D Gaussian Splatting.\n\nThe authors conduct detailed ablation studies to validate the effectiveness of each component of their proposed framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Flex3D is a novel two-stage framework for generating high-quality 3D content from text prompts, single images, or sparse-view images. In the first stage, it generates a diverse pool of candidate views using fine-tuned multi-view image diffusion models and video diffusion models. A view selection pipeline then filters these views based on quality and consistency. The second stage employs FlexRM, a transformer-based architecture capable of processing an arbitrary number of input views with varying viewpoints. FlexRM combines tri-plane features with 3D Gaussian Splatting to produce detailed 3D Gaussian representations.
Experimental results demonstrate that Flex3D outperforms state-of-the-art methods in both 3D reconstruction and generation tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concerns lie in the following aspects. If some of the main concerns can be resolved during the discussion period, I would be willing to raise my final score.\n\n1. The paper does not specify whether the proposed method has been tested across various datasets or object categories. Evaluating Flex3D on diverse and challenging datasets would demonstrate its generalizability and robustness to different types of input data.\n\n2. The paper evaluates performance using 2D image-based metrics such as PSNR, SSIM, LPIPS, and CLIP image similarity. While these metrics are informative, they do not fully capture the geometric accuracy and consistency of the 3D reconstructions. Incorporating 3D-specific metrics, such as Chamfer Distance or Earth Mover's Distance, would provide a more comprehensive assessment of the reconstructed 3D models' quality.\n\n3. The user study conducted to evaluate the overall quality of the generated content lacks detailed methodology. Information regarding participant demographics, selection criteria, and statistical significance testing is absent. Providing these details would enhance the credibility of the user study findings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "You mention all data was \"ethically sourced\"... but a pointer to a study that confirms that this is the case would be good to add. But how can the reader be confident this is the case... given the dataset is internal and will not be released? And what does ethically sourced really mean...?\nDid you pay the 3D artists individually for the models used, or did you just scrape data from web repos?" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I don’t really have technical questions, and it is rather unlikely that I will change my score (I hesitated between 5:weak reject and 3:reject).
\nThis is because, while the quality of writing is decent and results marginally improve on the state of the art, the paper reads more like a white-paper for re-engineering a large-scale system rather than answering any specific scientific question.\n\nWhat are the **insights** that were not proposed before that could be adopted in follow-up research?\nOr is this work just about combining previous techniques with the sole purpose of getting (very marginal) improvements to metrics?\n\nAnd given the metrics improvements are so marginal (as revealed by the ablations), why does all of this complication really matter?\nPerhaps the small improvement in metrics does not reflect a drastic improvement in qualitative performance… but I wasn’t able to see a drastic improvement in qualitative results on the supplementary website… so I am having a very hard time considering all the proposed complications truly worth it.\n\nFor a system paper that needs 128 A100s to train, I would have expected a **much** larger improvement in performance to justify a white-paper as a technical conference paper. The story would be different if the pre-trained model and/or code+data were released, and the method tested on public benchmarks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Visual quality: the results look good and are similar to or slightly better than previous works\n- Back view quality assessment: using a multi-view video classifier to tackle the typically lower quality of back-facing view generation seems interesting, even though little information is provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Flex3D, a method for feed-forward 3D generation. The method is split into two stages, i.e., multi-view generation and subsequent conversion of these generated multi-view images into 3D Gaussians for arbitrary view rendering. The first stage uses a multi-view image model and an image-to-video model to generate multiple viewpoints of a scene. The second stage uses an LRM-like pipeline to generate 3D Gaussians. The results show competitive quality compared to previous works." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There is a general lack of technical insights\n- FlexRM stage already proposed (Stage 2): Previous works [1,2] in feed-forward 3D generation already proposed last year to decode triplane features into 3D Gaussian attributes.\n- Multi-view image generation already proposed (Stage 1): MVDream [3] and follow-up works already turn pre-trained image generators into multi-view generators.\n- Multi-view image generation with video model already proposed (Stage 1): Previous works [4,5] already proposed to use video generators for novel view synthesis given an image as an input.\n- Conditioning with camera already proposed and marginal (Stage 2): previous works such as SV3D [5] already proposed to condition the generation with camera matrices. In this work it is used in the image encoder DINO. However, the ablation in Tab. 3 shows that the model with “No stronger camera cond” only shows very marginal improvement?\n- Imperfect data simulation with marginal improvements (Stage 2): the data simulation part in the method section sounds rather complicated and unnecessary given its little impact in Tab. 5?
Similar to the camera conditioning, the metrics only show very marginal improvement?\n- No computational cost analysis: The method seems very complicated; it would be good to compare training and inference time against previous works.\n\nReferences:\n- [1] Zou et al., Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers, arXiv 2023\n- [2] Xu et al., AGG: Amortized Generative 3D Gaussians for Single Image to 3D, TMLR 2024\n- [3] Shi et al., MVDream: Multi-view Diffusion for 3D Generation, ICLR 2024\n- [4] Kwak et al., ViVid-1-to-3: Novel View Synthesis with Video Diffusion Models, CVPR 2024\n- [5] Voleti et al., SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion, ECCV 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. If the model is designed to be robust across various poses, view counts, and noise levels, could you provide visual results demonstrating this? For example, does the model perform well when given a side or back view as a single input? Additionally, how much inconsistency can be tolerated during the multi-view selection process?\n\n2. Does the performance continue to improve as the number of views increases? How does the processing time scale with more views? If more views are beneficial, what strategies could be used to efficiently handle a greater number of input views?\n\n3. It could be confusing if the notation for f in line 294 differs from f in line 288.\n\n4. Where are the results for the 32-view test reported in line 489?\n\n5. What would the outcome be if the selected views were used for a NeRF-based approach, similar to Instant3D? While GS may be preferred for faster training, NeRF could potentially yield better final performance.\n\n6. Why are the two-stage training and imperfect input view simulation conducted as separate processes?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Logical Model Design and Well-organized Writing. The paper shows good logical model design and clarity of writing. It effectively identifies the limitations of existing models and systematically addresses them step by step, making the problem-solving process easy for readers to follow. This demonstrates a well-structured research design, facilitating readers’ comprehension of the methodology and approach.\n\n2. Practicality. The techniques for multi-view generation, view selection, and robustness through data augmentation provide substantial applicability and reusability. The paper builds on an existing Instant3D architecture and employs a systematically optimized approach, suggesting high utility. It would be beneficial if the authors released all the pre-trained models."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a robust feedforward 3D generation pipeline to address inconsistent multiview inputs. Specifically, it fine-tunes multiview and video diffusion models to generate diverse viewing angles and incorporates a key view selection module using an existing feature-matching model. This approach ensures that high-quality and consistent views are chosen for 3D reconstruction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Incremental technical improvement. The suggested pipeline combines and optimizes existing techniques rather than introducing innovative algorithms. The approach appears to rely heavily on integrating and optimizing pre-existing technologies rather than presenting a novel concept or unique contribution. \n\n2. Complex Pipeline Requiring Extensive Fine-Tuning and Training. While the pipeline is logically structured, it is complex and demands extensive fine-tuning and training. Five rounds of fine-tuning are required. Initial multi-view generation involves data creation and two rounds of fine-tuning. The view selection step also utilizes existing models to build a new system module. Subsequently, the feed-forward model undergoes two additional rounds of fine-tuning, and the process includes one more phase of training with data augmentation. This level of complexity could hinder full implementation and reproducibility.\n\n3. Performance Concerns Relative to Complexity. Given the overall complexity, the proposed model’s 3D generation performance shows rather minor improvements. For instance, as shown in Table 1, the performance metrics are slightly higher than those of other models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Please show me some failure cases, especially for your view-selection method that failed.\n\n\n2) Missing some Reference:\n\n[1] Li W, Chen R, Chen X, et al. Sweetdreamer: Aligning geometric priors in 2d diffusion for consistent text-to-3d[J]. arXiv preprint arXiv:2310.02596, 2023.\n[2] Qiu L, Chen G, Gu X, et al. Richdreamer: A generalizable normal-depth diffusion model for detail richness in text-to-3d[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 9914-9925.\n[3] Chen R, Chen Y, Jiao N, et al. Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2023: 22246-22256." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1) The paper is well-written and easy to follow. \n\n2) The results the author shows are compelling. 
\n\n3) The view-selection strategy for improving quality is somewhat new." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors follow the classical two-stage 3D generation paradigm: 1) multi-view generation; 2) a large Gaussian reconstruction model conditioned on the multi-view images from stage one to generate a 3D Gaussian model. The authors present a simple but effective sampling strategy to choose high-quality multi-view images from the generated images as inputs for the reconstruction stage." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The multi-view generation model is too heavy. It requires two multi-view generations to produce dense-view proposals. I believe this process is time-consuming and memory-intensive. What is the inference time required to produce the multi-view image proposals? Would it be possible to apply a video diffusion model to generate a trajectory where the elevation varies according to a sine function and the azimuth is sampled at equal intervals, instead of using two multi-view diffusion models?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors provide more insight into how the two multi-view generation models (focused on elevation and azimuth) avoid consistency issues, given the limited overlap between generated views?\n2. How does FlexRM handle scenarios where significant view inconsistencies occur, especially as noisy input augmentation does not seem to address cross-view consistency?\n3. Is there a visual or quantitative comparison available regarding FlexRM’s reconstruction flexibility when provided with a varying number of input views?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The two-stage process of generating and selecting views for flexible multi-view 3D generation is innovative and well-aligned with the goal of improving reconstruction quality.\n2. The paper extensively validates each proposed module, demonstrating their significance through ablation studies and metrics across various tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes Flex3D, a two-stage framework for generating high-quality 3D content using curated multi-view inputs and a flexible reconstruction model. Initially, multiple candidate views are generated using separate multi-view diffusion models with distinct focus areas (elevation and azimuth), followed by a quality and consistency-based view selection. These selected views are then passed to a flexible reconstruction model (FlexRM), which leverages a tri-plane representation combined with 3D Gaussian Splatting (3DGS) for efficient 3D generation.
Flex3D is shown to be effective in generating high-quality 3D representations and demonstrates state-of-the-art performance across several metrics in 3D generation and reconstruction tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Cohesion in Core Contributions**: The proposed approach, although effective, seems overly complex and tricky, and doesn’t clearly reflect Flex3D's core innovation. For instance, using two different models to generate two groups of multi-view images and adding noisy inputs during reconstruction make the approach appear fragmented and difficult to generalize.\n2. **Inconsistency Concerns**: The method’s use of two different models for elevation and azimuth views results in overlapping views limited to one view (that is, the view with an elevation of 6), raising questions about cross-model consistency. This single overlap view may not fully capture the complete object appearance, potentially leading to inconsistencies between the two view sets.\n3. **Inadequate Simulation of Multi-View Inconsistencies**: The noisy input augmentation during FlexRM training accounts for view quality but does not adequately model cross-view inconsistencies, due to its operation on the 3DGS.\n4. **Lack of Flexibility Analysis**: The paper lacks a visual ablation study on FlexRM’s performance with varying input views to illustrate the model's robustness to input flexibility." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A two-stage pipeline for generating high-quality 3D assets in a feed-forward manner." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024flexd,\ntitle={Flex3D: Feed-Forward 3D Generation with Flexible Reconstruction Model and Input View Curation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2vaTZH31oR},\nnote={under review}\n}" }, "abstract": { "value": "Generating high-quality 3D content from text, single images, or sparse view images remains a challenging task with broad applications.\nExisting methods typically employ multi-view diffusion models to synthesize multi-view images, followed by a feed-forward process for 3D reconstruction. However, these approaches are often constrained by a small and fixed number of input views, limiting their ability to capture diverse viewpoints and, even worse, leading to suboptimal generation results if the synthesized views are of poor quality.\nTo address these limitations, we propose Flex3D, a novel two-stage framework capable of leveraging a flexible number of input views.\nThe first stage consists of a candidate view generation and curation pipeline. We employ a fine-tuned multi-view image diffusion model and a video diffusion model to generate a pool of candidate views, enabling a rich representation of the target 3D object. Subsequently, a view selection pipeline filters these views based on quality and consistency, ensuring that only the high-quality and reliable views are used for reconstruction. In the second stage, the curated views are fed into a Flexible Reconstruction Model (FlexRM), built upon a transformer architecture that can effectively process an arbitrary number of inputs. FlexRM directly outputs 3D Gaussian points leveraging a tri-plane representation, enabling efficient and detailed 3D generation.
Through extensive exploration of design and training strategies, we optimize FlexRM to achieve superior performance in both reconstruction and generation tasks. Our results demonstrate that Flex3D achieves state-of-the-art performance, with a user study winning rate of over 92% in 3D generation tasks when compared to several of the latest feed-forward 3D generative models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3D Generation", "3D Reconstruction", "Large 3D Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e9f4337495ab063aace50edffae5cc4a1851ea97.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Flex3D: Feed-Forward 3D Generation with Flexible Reconstruction Model and Input View Curation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2veex1oOtc
MQuant: Unleashing the Inference Potential of Multimodal Large Language Models via Full Static Quantization
main
Active
Multimodal Large Language Models;Quantization
foundation or frontier models, including LLMs
3;5;5;6
3;3;3;2
2;3;2;3
2;2;2;3
1;3;3;3
4.75
2.75
2.5
2.25
2.5
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper focuses on a valuable question, i.e. quantization in MLLMs.\n2. Well presented with figures and tables.\n3. Overall performance is superior to some LLM quantization baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the quantization problem in Multi-modal LLMs. Specifically, the authors investigate three aspects that lead to performance degradation when applying the straightforward per-tensor static quantization for prefilling multimodal tokens. To address these challenges, this paper presents MQuant with Modality-specific Quantization (MSQ), Attention-Invariant Flexible Switching (AIFS), LayerNorm-to-RMSNorm transformation and Rotation Magnitude Suppression (RMS)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. MSQ and AIFS are simply trivial adaptions of per-token dynamic quantization to MLLMs. It's better that this serves as a baseline model.\n2. MSQ and MSQ + AIFS exhibit marginal improvement over the per-tensor static baseline in Table 4.\n3. Please discuss the overhead of MSQ, otherwise why don't we use token-specific quantization?\n4. Although MSQ + AIFS is proposed to address the token increase brought by larger resolution of images, the speedup fails to exhibit great advantages over per-token dynamic baseline with resolution scaling.\n5. SliceGPT [1] has already proposed converting LayerNorm to RMSNorm and provides a solution, which you do not mention in the related work. Please discuss the difference between your method in Section 4.2 and the one in SliceGPT.\n6. Lack of sufficient technical contribution. Most of the techniques used are from previous work and adapt to MLLM with trivial modifications.\n7. Typos. e.g. whthin in line 304 and grammatic errors, e.g. 305 (should be \"to show how to transform xxx\")\n\n[1] Ashkboos, Saleh, et al. \"Slicegpt: Compress large language models by deleting rows and columns.\" arXiv preprint arXiv:2401.15024 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For the proposed AIFS scheme, are the positional embeddings adjusted accordingly as the attention mask changes?\n2. What batch sizes were used when evaluating the inference latency?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The modality-specific quantization and Layernorm-to-RMSNorm transformation are well-motivated by the distributional differences of various modality modules and architectural designs.\n3. Comprehensive experimental results are provided on various MLLMs, with comparisons to several popular recent LLM quantization methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces several techniques to enhance the accuracy and reduce the inference latency of Multimodal Large Language Models (MLLMs), which are affected by the additional vision encoder/adaptor. Empirical results demonstrate that the quantized model obtained using the proposed method outperforms other quantization methods in terms of accuracy and inference speed under certain settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Attention-Invariant Flexible Switching (AIFS) Scheme: The authors claim that the proposed AIFS scheme is computationally equivalent to the original attention computation. However, it is unclear whether the corresponding positional embeddings are adjusted accordingly. If not, the equivalence may not be ensured.\n\n2. Experiment Settings: There are concerns regarding the experimental settings. In Section 5.1, the authors conducted experiments under the \"text-image-text\" setting with 15 textual tokens. However, inference settings can be more complex:\n- In a batch, the number of textual tokens varies, resulting in different attention masks after AIFS.\n- There can be interleaved image-text inference with more image-text turns.\n- There can also be multi-image inference with single or multiple turns.\nMore clarifications under these cases are required to further show the efficacy of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In Eq (6), should the denominator of equation $s$ be $2^b-1$? since for b-bit, the value range would be (0, $2^b-1$).\n2. In line 321, \"easier to quantize\". What does easy mean in this context?\n3. In line 287, what do the \"outliers\" mean? Extremely low or high values?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper follows an intuitive approach to study MLLM quantization. The authors identify the issues based on some observations in the experiments and resolve the problem in a step-by-step manner.\n2. The efficacy of the method is supported by extensive experiments. The paper shows the quantization performance of 5 mainstream MLLM models on various multi-modal tasks. The ablation studies demonstrate the usefulness of different components in maintaining the performance near the float-point baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a quantization method which is specifically tailored towards MLLM. Because of the distributional differences between visual tokens and text tokens, the authors intuitively calculate separate quantization scales for two modalities and calibrate the attention mask accordingly. Further, they adapt some techniques from the LLM quantization literature to visual encoders in MLLM. By combining these two, MQuant maintains lower performance degradation under challenging quantization settings on multiple state-of-the-art retrained MLLM models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The delivery of the paper needs significant improvement. The text is highly redundant. \n- Introduction: The content of the second last paragraph mostly overlap the main contribution part. It could be beneficial if these two parts are reorganized or condensed.\n- Methodology: In 4.1, there are abundant words to explain the reason why we need MSQ and AIFS and the benefits brought by these two. To me, these are intuitive and simple operations which only need concise words for explanation. For 4.2 and 4.3, which are the techniques adapted from LLM quantization, it would be better if the authors could emphasize their novel improvements or adaptations rather than putting too many words to explain other people's contributions. \n- Although using separate figures for different components are informative, it could be easier for the readers to follow without reading the algorithm 1 in Appendix first if the authors could add a figure to show the overall quantization pipeline with the novel parts highlighted. \n- For some abbreviations used in the paper, like GEDD and W4A8, it would be friendly to readers not in the area if adding the explanations in the first place.\n\n2. The paper does not demonstrate enough novelty. First, both LayerNorm-to-RMSNorm transformation and Hadamard rotation are borrowed from LLM quantization literature (Ashkboos et al., 2024a, b). Second, although adopting a simple Divide-and-Conquer strategy like paper does to cope with the distribution outliers or differences may be sufficient, it is worth thinking about other systematic alternatives after getting more insights from the observations in the experiments. For now, the paper is more like a technical report. The paper should be concise and highlight the actual novel contributions.\n\n3. Experiments: It would be better to see the latency comparisons among the proposed quantization methods could be added in Table 5. \n\n4. Minor Errors:\n\n- The font size of the legend in Figure 1 (left side) is too small to read.\n- Line 85-87: the meaning of the sentence Is not clear. 
Two \"slightly\" exist.\n- For Table 3/4. the arrow directions showing the relative difference are counter-intuitive. Showing the decrease of latency with down arrows and adding \"lower is better\" could be an alternative.\n- In Table 5, should that be \"MSQ\" rather than \"MDQ\"?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the comments above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strength:\n\n1. Extensive experiments demonstrate the approach's effectiveness in the PTQ of MLLMs.\n2. The motivation is clear and quantization for MLLM is an important topic.\n3. This paper is well-organized and clearly-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes MQuant, an accurate and efficient post-training quantization solution for multimodal large language models (MLLMs). MQuant reduces the time to first token (TTFT) with per-tensor static quantization and introduces modalityspecific quantization (MSQ) to handle distribution discrepancies between visual and textual tokens. Experiments on five mainstream MLLMs demonstrate that MQuant attains state-of-the-art PTQ performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weakness:\n\n1. My only concern is that i'm not familiar with quantization. So i will adjust my rating depending on the other reviewers' opinions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mquant,\ntitle={{MQ}uant: Unleashing the Inference Potential of Multimodal Large Language Models via Full Static Quantization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2veex1oOtc},\nnote={under review}\n}" }, "abstract": { "value": "Recently, multimodal large language models (MLLMs) have garnered widespread attention due to their ability to perceive and understand multimodal signals. However, their large parameter sizes and substantial computational demands severely hinder their practical deployment and application. While quantization is an effective way to reduce model size and inference latency, its application to MLLMs remains underexplored. In this paper, we conduct an in-depth analysis of MLLMs quantization and identify several challenges: slow inference speed of the visual tokens, distributional differences across modalities, and visual outlier clipping degrades performance.\nTo address these challenges, we propose **MQuant**, a quantization framework tailored for MLLMs. 
Specifically, 1) we design Modality-specific Quantization (MSQ) and Attention-Invariant Flexible Switching (AIFS) to support per-tensor static quantization and facilitate efficient inference. 2) we introduce a unified LayerNorm-to-RMSNorm transformation, achieving seamless integration of the MLLM vision encoder with Hadamard rotation. 3) we propose Rotation Magnitude Suppression (RMS) to mitigate outliers introduced by Hadamard rotation. Experiments conducted on five mainstream MLLMs demonstrate the superior performance and broad applicability of MQuant. For example, it maintains around 98\\% of the floating-point accuracy under the W4A8 setting. To the best of our knowledge, **MQuant** is the first quantization solution for MLLMs, paving the way for future advancements in their application." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6190383aa8e72438b63e7027e06b2e815402cd74.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MQuant: Unleashing the Inference Potential of Multimodal Large Language Models via Full Static Quantization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
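The reviews of this record keep returning to the mechanics of per-tensor static quantization, the modality-specific scales of MSQ, and the $2^b-1$ denominator one reviewer questions in Eq. (6). The sketch below illustrates that generic scheme only: the function names, the toy token statistics, and the 8-bit setting are all assumptions made for this illustration, and this is not MQuant's implementation.

```python
# Minimal sketch of b-bit asymmetric uniform quantization with
# modality-specific static scales, in the spirit of the MSQ discussion
# above. Generic illustration only; all names here are hypothetical.
import numpy as np

def calibrate(x: np.ndarray, bits: int = 8):
    """Compute a static scale/zero-point from calibration data.

    For b bits the integer grid has 2**bits - 1 steps, hence the
    (2**bits - 1) denominator that the third reviewer asks about.
    """
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** bits - 1)
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, bits=8):
    q = np.clip(np.round(x / scale) + zero_point, 0, 2 ** bits - 1)
    return q.astype(np.uint8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Modality-specific quantization: one static (scale, zero_point) pair per
# modality, instead of a single per-tensor pair shared by visual and
# textual tokens whose value ranges differ widely.
rng = np.random.default_rng(0)
visual_tokens = rng.normal(0.0, 4.0, (1024, 64))   # wide range (outliers)
text_tokens = rng.normal(0.0, 0.5, (15, 64))       # narrow range

for name, tok in [("visual", visual_tokens), ("text", text_tokens)]:
    s, z = calibrate(tok)
    err = np.abs(dequantize(quantize(tok, s, z), s, z) - tok).mean()
    print(f"{name}: scale={s:.4f}, mean abs error={err:.4f}")
```

Keeping one static pair per modality avoids recomputing scales token by token at inference, which is the efficiency argument the reviews weigh against per-token dynamic quantization.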
2vgcDW2blS
Residual Kernel Policy Network: Enhancing Stability and Robustness in RKHS-Based Reinforcement Learning
main
Active
policy learning;reproducing kernel Hilbert space;representation learning;variance reduction
reinforcement learning
3;5;8;8
4;3;3;4
2;2;3;3
3;2;3;3
2;3;3;3
6
3.5
2.5
2.75
2.75
-0.235702
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Based on the numerical results, it appears that the main improvement stems from the residual design. However, the comparison models are baseline models without any variance reduction techniques, raising questions about the fairness of the comparison. Additionally, variance reduction methods introduced in previous works should be considered.\n2. There is existing literature on combining RKHS with residual networks, and a discussion of these studies would add valuable context.\n3. The numerical section would benefit from testing in more complex environments to strengthen the evaluation." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper introduce a new RKHS policy learning algorithm.\n2. This paper introduces a variance reduction technique by designing a residual layer for the RKHS policy\n3. The numerical results demonstrate the validity of the proposed method" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper applies Reproducing Kernel Hilbert Space (RKHS) methods to policy gradient to enhance sample efficiency and stability in training. Additionally, it introduces a variance reduction technique inspired by residual networks, further improving the stability and effectiveness of the policy training process." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While applying RKHS to reinforcement learning (RL) is not novel, this paper lacks a discussion of existing methods. Relevant references include: \n[1] Mazoure, Bogdan, et al. \"Representation of reinforcement learning policies in reproducing kernel Hilbert spaces.\" arXiv preprint arXiv:2002.02863 (2020). \n[2] Wang, Yiwen, and Jose C. Principe. \"Reinforcement learning in reproducing kernel Hilbert spaces.\" IEEE Signal Processing Magazine 38.4 (2021): 34-45. \nAdditionally, some kernel-based methods, although not specifically RKHS-based, are also relevant to consider. \n2. Existing work, such as reference [2], introduces variance reduction techniques. A comparison or discussion of these approaches with the methods in this paper would provide valuable insights. Although RKHS is rarely applied to RL, there is extensive work on integrating RKHS with general machine learning problems. \n3. The idea of applying RKHS to RL appears straightforward, and the key distinctions from previous approaches remain unclear." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could the authors expand on how ResKPN might handle multi-agent or cooperative environments? Given the scalability challenges, it would be valuable to understand the model's limitations in such settings. How would the approach adapt to environments where action spaces vary significantly in scale or complexity? Neural networks often succeed in settings with large amount of training data, would such a setting be appropriate for a non-parametric method such like RKHS? 106: h is a functional (function -> values), but notation h(s) is used, s is a state, not a function so why do we call h a functional?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The technical claims are well-founded, and the experimental results are robustly supported by rigorous methodology. The integration of residual layers with RKHS gradients appears to reduce gradient variance, as confirmed by extensive empirical evidence on MuJoCo environments. The variance analysis is theoretically grounded, and experimental setups align well with the claims, ensuring soundness across technical aspects.\n\nThe presentation is clear overall, though there are instances where dense technical language or unclear phrasing makes comprehension difficult, especially in theoretical sections. Improved structuring or additional context around complex derivations could enhance readability.\n\nThis work contributes meaningfully to reinforcement learning research by empirically identifying a weakness in a common reinforcement learning approach. It attempts to solve this by introducing a model with enhanced stability and robustness through representation learning and a residual layer. The originality lies in effectively merging RKHS gradient variance reduction with neural network-based feature extraction, a strategy not previously well-addressed. The approach is promising for applications requiring adaptive, high-dimensional policy learning. However, just adding a residual neural network to an existing method has limited originality.\n\n- Significance: Tackling gradient variance in RKHS-based reinforcement learning is critical for real-world applications, and the results demonstrate potential for improved robustness.\n- Experimental Rigor: Extensive tests across six MuJoCo tasks validate ResKPN’s efficacy and its edge over comparable baselines in terms of episodic rewards and convergence rates.\n- Practical Impact: The adaptability of ResKPN to complex, high-dimensional environments shows promise for real-world reinforcement learning scenarios." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, titled \"Residual Kernel Policy Network: Enhancing Stability and Robustness in RKHS-Based Reinforcement Learning,\" addresses the instability and sensitivity in RKHS-based reinforcement learning policies. The authors show significant gradient variance and hyperparameter sensitivity and propose the Residual Kernel Policy Network (ResKPN). This network incorporates representation learning to adaptively align observations with the kernel's structure. The Authors also employ a residual architecture to further stabilize training. Experiments on MuJoCo tasks demonstrate ResKPN's performance, reportedly surpassing baseline algorithms like PPO and DPO by up to 30% in episodic rewards." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Complexity of Variance Analysis: While theoretically thorough, the variance analysis may benefit from simplification or additional visual explanations. This complexity could present a barrier for researchers less familiar with RKHS.\n\nComputational Cost: Given the use of RKHS, the method may face scalability limitations in more extensive settings or when applied to multi-agent environments.\n\nLimited Discussion on Alternative Kernels: While Gaussian kernels are utilized effectively, the paper could explore the feasibility of other kernels or adaptive kernel selection strategies to further broaden the model's applicability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How slow is this? Please provide some training time comparisons with PPO.\n- Do you have any further explanation or intuition for the variance-kills-RKHS-methods argument? Would minibatches mitigate this?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The problem is clearly explained. It is clear what problems with RKHS RL the authors are setting out to fix, and how those problems motivate the produced algorithm.\n- The mathematics and notation are professionally done, and are easy enough to follow (though I didn't go through the derivations in the appendices).\n- The writing is clear.\n- The experiments in the experimental section are comprehensive and convincing.\n- A more effective RL baseline is significant... if it's usable (see weaknesses/questions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper sets out to make a new SOTA in RL policy gradient algorithms, by modifying a method from reproducing kernel Hilbert space (RKHS) reinforcement learning, where policies are represented as Gaussians in a RKHS. 
This allows policies to be learned in a space that captures relationships and correlations in high-dimensional action spaces.\n\nThe paper argues that previous RKHS RL approaches have suffered for two reasons. First, the selection of the kernel is important and difficult, and an improper kernel choice leads to underperformance. Second, RKHS RL is particularly vulnerable to high variance in the gradients, leading to unstable learning.\n\nThe paper addresses these issues by introducing ResKPN. This policy algorithm addresses the representation problem by applying the kernel to a learned representation of the state rather than to the state itself, and the high variance problem by introducing a residual layer in the representation to empirically decrease this variance.\n\nThe paper shows that policies learned via ResKPN outperform or compete closely with benchmark algorithms like PPO on a variety of standard RL problems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- An important argument of the paper is that representation and variance problems cause RKHS RL to fail. Accordingly, something like the illustrations of the representation and variance problems (Figure 1) is probably necessary, but I do not find these particular illustrations very effective. They show that on one problem, some Gaussian kernels are ineffective, and that on another (single) problem, high variance can be seen. I don't think they reinforce the strong causal relationship that the authors intend to convey, particularly when the high variance is itself dependent on the kernel selection. Representation problems are certainly easy enough to believe, but the fact that the fully connected layer is effective *because it diminishes variance*, rather than, for example, just because it augments the representation, is not so clearly argued.\n- How slow is this? It seems like it might be very slow... Is it slow enough to be near-unusable? I think this should be addressed with a table of training times in the appendix.\n\n**Minor things**\n- The system being trained in this algorithm is complex, with lots of different sets of parameters ($\theta, \iota, \delta...$). I think Figure 6 is important enough that it should probably be promoted to the regular paper, as the explanation is not clear enough to stand on its own without it. The critic network should also be integrated into this figure.\n- On line 96, $U(w)$ should be $U(\pi_w)$.\n- On line 191, it should be $\sigma^2=0.3, 0.5, 0.7,$ and 0.9.\n- Wording on lines 262-263 is not correct.\n- On line 299, \"The key idea of residual layer\" should be \"The key idea of the residual layer\" (or \"motivating the residual\")." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents a clear and well-defined contribution by addressing the instability and sensitivity issues in RKHS-based reinforcement learning methods. The introduction of the ResKPN and the integration of representation learning and a residual layer provide a novel solution to these challenges. The contribution is clearly articulated, with a strong emphasis on how the proposed method improves stability and performance in complex environments. The significant 30% improvement in episodic rewards further highlights the effectiveness of the approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenges of achieving optimal performance in RL using policies modeled in reproducing RKHS. While RKHS-based methods offer efficient exploration of local optima, they suffer from significant instability due to high variance in policy gradients and sensitivity to hyperparameters. The authors analyze the causes of instability, particularly highlighting the increased gradient variance with wide-bandwidth kernels. To resolve these issues, they propose the ResKPN, a novel approach that integrates representation learning to process complex observations and introduces a residual layer inspired by advantage functions. This residual layer reduces gradient variance, thereby improving training stability and policy robustness. The ResKPN algorithm achieves state-of-the-art performance, with a 30% increase in episodic rewards across multiple complex environments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "A notable weakness of the paper is the absence of available code for the proposed ResKPN. The lack of code limits reproducibility and hinders other researchers from validating the results or building upon the work. Providing access to the implementation would significantly enhance the paper's impact and facilitate further exploration of the proposed methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce the Residual Kernel Policy Network (ResKPN), which integrates representation learning with a residual layer to mitigate gradient variance and enhance the robustness of the RKHS-based policy." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024residual,\ntitle={Residual Kernel Policy Network: Enhancing Stability and Robustness in {RKHS}-Based Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2vgcDW2blS},\nnote={under review}\n}" }, "abstract": { "value": "Achieving optimal performance in reinforcement learning requires robust policies supported by training processes that ensure both sample efficiency and stability. Modeling the policy in reproducing kernel Hilbert space (RKHS) enables efficient exploration of local optimal solutions. 
However, the stability of existing RKHS-based methods is hindered by significant variance in gradients, while the robustness of the learned policies is often compromised due to the sensitivity of hyperparameters. In this work, we conduct a comprehensive analysis of the significant instability in RKHS policies and reveal that the variance of the policy gradient increases substantially when a wide-bandwidth kernel is employed. To address these challenges, we propose a novel RKHS policy learning method integrated with representation learning to dynamically process observations in complex environments, enhancing the robustness of RKHS policies. Furthermore, inspired by the advantage functions, we introduce a residual layer that further stabilizes the training process by significantly reducing gradient variance in RKHS. Our novel algorithm, the Residual Kernel Policy Network (ResKPN), demonstrates state-of-the-art performance, achieving a 30% improvement in episodic rewards across complex environments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "policy learning", "reproducing kernel Hilbert space", "representation learning", "variance reduction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/af4c11a8b4d875d3c0c5f47e0aa763aa285d743f.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Residual Kernel Policy Network: Enhancing Stability and Robustness in RKHS-Based Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
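Several reviews of this record ask how a policy "modeled in RKHS" is actually parameterized and why wide-bandwidth kernels inflate gradient variance. The sketch below shows the generic construction such papers build on: a Gaussian policy whose mean function lives in an RKHS, updated by a REINFORCE-style functional gradient. The toy one-step environment, the hyperparameters, and the class name are all assumptions for illustration; this is not the paper's ResKPN algorithm.

```python
# Minimal sketch of an RKHS policy-gradient step. The functional gradient
# of log pi(a|s) with respect to the mean h is ((a - h(s)) / sigma^2) * k(s, .),
# so each visited state becomes a new kernel center weighted by its return.
import numpy as np

def k(s1, s2, bandwidth=0.5):
    """Gaussian kernel; the reviews note that a wide bandwidth inflates
    the variance of the resulting policy gradient."""
    d = np.asarray(s1) - np.asarray(s2)
    return np.exp(-np.dot(d, d) / (2.0 * bandwidth ** 2))

class RKHSPolicy:
    """Mean h(s) = sum_i alpha_i * k(c_i, s), with Gaussian exploration."""

    def __init__(self, sigma=0.3):
        self.centers, self.alphas, self.sigma = [], [], sigma

    def mean(self, s):
        return sum(a * k(c, s) for c, a in zip(self.centers, self.alphas))

    def act(self, s, rng):
        return rng.normal(self.mean(s), self.sigma)

    def reinforce_update(self, trajectory, returns, lr=0.05):
        # Append one kernel center per visited state; its coefficient is
        # the step size times the return times the score-function weight.
        for (s, a), G in zip(trajectory, returns):
            coeff = lr * G * (a - self.mean(s)) / self.sigma ** 2
            self.centers.append(s)
            self.alphas.append(coeff)

# Toy 1-D check: reward peaks when the action matches -state[0].
rng = np.random.default_rng(1)
pi = RKHSPolicy()
for _ in range(200):
    s = rng.uniform(-1, 1, size=2)
    a = pi.act(s, rng)
    r = -(a - (-s[0])) ** 2          # return of this one-step episode
    pi.reinforce_update([(s, a)], [r])
```

Note how the number of kernel centers grows with the data; this non-parametric growth is one reason the reviews raise scalability and training-time questions for RKHS methods.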
2vlhdheveh
One Step Diffusion-based Super-Resolution with Time-Aware Distillation
main
Active
Efficient diffusion;Super-resolution;Knowledge distillation
generative models
5;5;5;5;6
4;4;5;4;3
2;2;3;3;3
2;2;3;3;3
3;2;3;2;3
5.2
4
2.6
2.6
2.6
-0.790569
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper targets at an important problem of distillation of SR diffusion models. While diffusion distillation is a popular research area, it is interesting to see some insight particularly designed for SR models\n\n2. The paper introduces a novel technique to reduce the bias of the score estimate of generated samples in SDS, which particularly fits in the insights from SR.\n\n3. Empirical results shows promising improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a method to distill a super-resolution diffusion model into one step, by combining 3 losses: direct regression loss, GAN loss, and a modified score distillation loss. The main contribution is the score distillation part." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest concern is insufficient baselines. The method compare against a large number of non-diffusion based methods or diffusion based iterative methods, but it lacks comparisons against the most closely related methods: other diffusion distillation algorithms. This method distill a pre-trained SR diffusion model into one step with some specific design for SR, but there are many distillation methods designed for general diffusion models, such as consistency model and the family of distribution matching distillation. The authors should run controlled experiment with the same teacher model with different algorithms to emphasize the relative advantage. For example, personally I found CM works well in distilling SR model into one step, and DMD and its variant can distilled the more complicated T2I model into one step. Their relative performance on SR diffusion is what we really care.\n\n2. It seems like the method requires teacher model to generate clean samples, which can be computationally expensive, even if you pre-compute the data off-line. \n\n3. The background of SDS and how to reduce the bias is unclear to readers without prior knowledge." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is well-written.\n* Experimental results demonstrate that the proposed method achieves state-of-the-art performance with high efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a time-aware diffusion distillation method named TAD-SR, which enables the student model to focus on high-frequency image details at smaller time steps and eliminates inherent biases in score distillation sampling. The authors also design a time-aware discriminator that fully leverages the teacher model’s knowledge by injecting time information to differentiate between real and synthetic data. Experimental results demonstrate the effectiveness and efficiency of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The evaluation is not comprehensive. Some image fidelity metrics are lacking, such as PSNR and SSIM on ImageNet-Test, where the competing methods ResShift and SinSR all reported.\n\n* The improvement over the previous single-step distillation method SinSR is minor. Considering that LPIPS—a crucial metric for perceptual quality—is very important, the increase from 0.221 to 0.227 represents a big drop in quality and is not slight.\n\n* The ablation study examines only the presence or absence of the discriminator, neglecting other important aspects—for example, the number of scales used in the discriminator." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tSince this is a distillation method, please compare more diffusion-based distillation SR methods, like OSEDiff [1], quantitatively and qualitatively. (Why are the comparison with diffusion-based distillation SR methods missing in some tables and figures?)\n\n2.\tSince you claim that TAD-SR can achieve better reconstruction of high-frequency information, please present the spectrum images of the LR input, GT, baseline methods’ reconstruction, and TAD-SR’s reconstruction. Examine the differences in the high-frequency patterns around the periphery of the spectrum images.\n\n3.\tPlease compare the inference time of TAD-SR and baseline methods.\n\n4.\tIn Fig. 10 and Fig. 12, TAD-SR’s results appear to contain many fragmented particles, which make the images look sharper at first glance; however, this is actually due to the addition of pseudo-textures or unnatural details. Could you explain the cause of this? 
For instance, could it be due to the adversarial loss?\n\n5.\tFollowing the concern raised in my 4th question, could you please provide more qualitative comparisons that contain fine details or small textures?\n\n[1] Rongyuan Wu, et al. One-Step Effective Diffusion Network for Real-World Image Super-Resolution. \n\n\n(I apologize for my previous review comments, which were not fully aligned with your article due to a heavy review workload. I am providing corrected feedback here, and if your response addresses these points well, I will consider adjusting the score.)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThis paper proposes a time-aware distillation method that accelerates diffusion-based SR models into a single inference step.\n2.\tThe writing of this paper is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a time-aware diffusion distillation method, TAD-SR, to achieve one-step SR inference with competitive performance. It applies a score distillation strategy that aims to eliminate the inherent bias in SDS and to focus more on high-frequency image details by sampling at small time steps. A time-aware discriminator is also designed to differentiate between real and synthetic data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See the questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The motivation is not clear. If the proposed method wants to achieve one-step SR, why is it important for the student model to learn how to deal with the intermediate steps?\n\nWill increasing the inference steps contribute to improving the performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The topic is interesting and meaningful.\n2. Extensive experiments demonstrate that TAD-SR achieves results comparable to or exceeding multi-step diffusion models, especially in some non-reference IQA metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces TAD-SR, a time-aware diffusion distillation method designed to enhance the efficiency and performance of diffusion-based image super-resolution (SR) models. By aligning the student and teacher models with the proposed score distillation strategy and incorporating a time-aware discriminator to distinguish real and synthetic data across varying noise levels, TAD-SR achieves strong performance across several metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
The organization of the paper needs improvement, as it is challenging to clearly understand the core idea. For instance, Fig. 2, which aims to illustrate the paper's motivation, has a caption that provides limited information.\n\n2. The paper lacks essential metrics, such as PSNR and SSIM, to evaluate model fidelity. As shown in previous works, there is a trade-off between PSNR/SSIM and CLIPIQA/MUSIQ. Reporting only LPIPS and non-reference IQA metrics is insufficient to demonstrate performance. Both the main results and ablation studies should include these metrics.\n\n3. Although I understand that StableDiffusionXL also employs adversarial loss, it appears less elegant to me due to the inherent limitations of GANs.\n\n4. In addition to the difficulty of assessing performance without PSNR and SSIM, the reported improvements seem marginal compared to existing methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the Weaknesses part. \nThe authors should carefully describe the details of the method to enhance the readability and clarity of the paper. In addition, a comparison with the most relevant methods (including a complexity comparison) should be added to clarify the innovation and effectiveness of the method, and the advancement of the method should be demonstrated through relevant experiments.\n\nI am inclined to raise the score if the authors can address my concerns." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed distillation strategy is simple and straightforward, which can eliminate the inherent bias in score distillation sampling (SDS) and enable the student models to focus more on high-frequency image details. \n2. The proposed time-aware discriminator can differentiate between real and synthetic data, contributing to the generation of high-quality images.\n3. This work is well written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a time-aware diffusion distillation method, named TAD-SR, where a novel score distillation strategy is introduced to align the score functions between the outputs of the student and teacher models after minor noise perturbation. This distillation strategy eliminates the inherent bias in score distillation sampling (SDS) and enables the student models to focus more on high-frequency image details by sampling at smaller time steps. Furthermore, a time-aware discriminator is designed to mitigate performance limitations stemming from distillation, which distinguishes the diffused distributions of real and generated images under varying noise disturbance levels by injecting time information."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is confusing which is the final output of the model when inference, z_0^{stu} or z ̂_0^{stu}? It is not clearly indicated in Figure 4. Please explicitly state in the text and figure.\n2. The authors should clarify if the teacher model is used at all during inference, or if it is only used during training. If I understand correctly, only the student model samples one step, and then the teacher model is used later to sample multiple steps to get the final clean latent, so the model performance relies heavily on the performance of the teacher model, and is not exactly efficient.\n3. What is the purpose of setting the weighting function (ω = 1/CS )? Please provide intuition for why this weighting function was chosen, and what effect it has on the training process or results. \n4. In order to eliminate the dependence of the proposed method on the teacher model of ResShift, the relevant ablation experiments should be conducted by replacing the different teacher models to validate the effectiveness of the proposed method.\n5. The experiments lack comparisons with the most relevant distillation methods, including DMD, DEQ[1], DFOSD[2], etc. Among them, DMD, a new diffusion model, utilizes similar score distillation techniques to the proposed HSD. DEQ and DFOSD are both efficient and relevant diffusion models, which require one-step diffusion distillation or even no distillation.\n6. In the experimental section, the authors compare many GAN and transformer-related methods. However, the proposed method is a diffusion model and should be compared with the most relevant diffusion models to validate its efficiency, especially accelerated diffusion models, including OSEDiff[3], DPM++[4], Unipc[5], etc. \n7. The authors claim that the method is designed to accomplish effective and efficient image super-resolution, but did not include a complexity comparison of the different methods (including parameters, sampling steps, running time, MACs, etc.), which is crucial for diffusion models. Please provide a Table to compare these computational complexity metrics with the key baselines.\n8. Are there any limit conditions for using the method? The author should discuss and analyze the limitations of the proposed method. It is recommended to add a discussion of a discussion of potential limitations or where the proposed method might not perform as well.\n\nReferences\n\n[1] Geng Z, Pokle A, Kolter J Z. One-step diffusion distillation via deep equilibrium models[C]. Advances in Neural Information Processing Systems, 2024.\n\n[2] Li J, Cao J, Zou Z, et al. Distillation-free one-dtep diffusion for real-world image super-resolution[J]. arxiv preprint arxiv:2410.04224, 2024.\n\n[3] Wu R, Sun L, Ma Z, et al. One-step effective diffusion network for real-world image super-resolution[J]. arxiv preprint arxiv:2406.08177, 2024.\n\n[4] Lu C, Zhou Y, Bao F, et al. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models[J]. arxiv preprint arxiv:2211.01095, 2022.\n\n[5] Zhao W, Bai L, Rao Y, et al. Unipc: A unified predictor-corrector framework for fast sampling of diffusion models[C]. Advances in Neural Information Processing Systems, 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024one,\ntitle={One Step Diffusion-based Super-Resolution with Time-Aware Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2vlhdheveh},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion-based image super-resolution (SR) methods have shown promise in reconstructing high-resolution images with fine details from low-resolution counterparts. However, these approaches typically require tens or even hundreds of iterative samplings, resulting in significant latency. Recently, techniques have been devised to enhance the sampling efficiency of diffusion-based SR models via knowledge distillation. Nonetheless, when aligning the knowledge of student and teacher models, these solutions either solely rely on pixel-level loss constraints or neglect the fact that diffusion models prioritize varying levels of information at different time steps. To accomplish effective and efficient image super-resolution, we propose a time-aware diffusion distillation method, named TAD-SR. Specifically, we introduce a novel score distillation strategy to align the score functions between the outputs of the student and teacher models after minor noise perturbation. This distillation strategy eliminates the inherent bias in score distillation sampling (SDS) and enables the student models to focus more on high-frequency image details by sampling at smaller time steps. Furthermore, to mitigate performance limitations stemming from distillation, we fully leverage the knowledge in the teacher model and design a time-aware discriminator to differentiate between real and synthetic data. This discriminator effectively distinguishes the diffused distributions of real and generated images under varying levels of noise disturbance through the injection of time information. Extensive experiments on SR and blind face restoration (BFR) tasks demonstrate that the proposed method outperforms existing diffusion-based single-step techniques and achieves performance comparable to state-of-the-art diffusion models that rely on multi-step generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Efficient diffusion", "Super-resolution", "Knowledge distillation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/58d77251c8277da573859beab6af505a21c56f65.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "One Step Diffusion-based Super-Resolution with Time-Aware Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
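The TAD-SR reviews center on score distillation sampling (SDS) and its bias: the gradient has the form $w(t)\,(\hat{\epsilon}_\phi(z_t,t)-\epsilon)\,\partial z_t/\partial\theta$, and the abstract claims the paper's strategy removes the inherent bias of this objective. The worked toy example below shows only the vanilla SDS form on a trivial problem: the analytic point-mass "teacher", the one-parameter "student", and the weighting choice are all assumptions made for illustration, not the paper's networks or its debiased variant.

```python
# Toy worked example of the vanilla SDS gradient. The "teacher" is the
# exact epsilon-predictor for a point mass at `target`; the "student" is
# a generator whose output x0 equals its parameter vector theta.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])          # mode the teacher represents

def teacher_eps(x_t, t, alpha_bar):
    # For x_t = sqrt(ab)*x0 + sqrt(1-ab)*eps with deterministic x0=target,
    # the noise is recovered as (x_t - sqrt(ab)*target) / sqrt(1-ab).
    return (x_t - np.sqrt(alpha_bar) * target) / np.sqrt(1.0 - alpha_bar)

theta = np.zeros(2)                     # student output x0 = theta
lr = 0.05
for step in range(2000):
    t = rng.integers(1, 50)             # small time steps: minor noise
    alpha_bar = 1.0 - t / 1000.0        # perturbation, as in the reviews
    eps = rng.standard_normal(2)
    x_t = np.sqrt(alpha_bar) * theta + np.sqrt(1.0 - alpha_bar) * eps
    w = 1.0 - alpha_bar                 # a common SDS weighting choice
    # SDS skips the teacher's Jacobian; dx_t/dtheta = sqrt(alpha_bar)*I.
    grad = w * (teacher_eps(x_t, t, alpha_bar) - eps) * np.sqrt(alpha_bar)
    theta -= lr * grad

print("student output:", theta, "target:", target)  # theta converges to target
```

In this degenerate toy the injected noise cancels exactly inside the residual, so the update is unbiased; with a real learned teacher it does not cancel, and that residual mismatch is the bias and variance the TAD-SR reviews discuss.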
2wDXNF0Gv4
Prompt-Agnostic Erasure for Diffusion Models Using Task Vectors
main
Active
Concept Erasure
generative models
5;5;5;6
4;3;4;4
2;1;2;3
2;2;2;3
1;1;2;4
5.25
3.75
2
2.25
2
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The technical contributions are sound and interesting.\n2. The paper is well written. \n3. The paper included thorough evaluations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an interpretability study focused on understanding the second-order effects of neurons in CLIP. The authors propose a novel \"second-order lens\" to analyze neuron contributions that flow through attention heads to the model output." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Multiple concept erasure - How does the proposed method perform on multi-concept erasure? The baselines considered in this paper (UCE and ESD) evaluate their model on erasing multiple objects simultaneously. Therefore it is fair to compare this method for multi-concept erasure.\n2. Missing baselines - Comparison to Selective Amnesia (SA) (a strong and very similar baseline in my opinion) is missing from the paper. I believe the proposed method lie under a similar umbrella as SA. \n3. Underperforms baselines on NSFW concepts—The authors state that TV only reduces nudity in 52% of images compared to SD1.4, which is worse than the baselines (ESD, UCE, etc.) considered in the paper. This is a major drawback of the method in a real-world setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* What could the prompts look like for a given complexity class L? Does it directly translate to the number of words?\n* Can this method actually remove small parts of the image such as copyright logos? It was used in motivation but seems to not be tested?\n* How well does the method work when using other adversarial techniques such as UnlearnDiffAtk and P4D - quantitative evaluation, not only qualitative that is already provided?\n* Does the approach work well also on other diffusion models than Stable Diffusion v1.4?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* There is a decent initial analysis to motivate the approach and explain why it may be suitable.\n* The method seems to perform well, maintaining the quality of the generated images for non-erased concepts, and successfully erasing the selected concepts.\n* In general the paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a technique for erasing concepts from diffusion models. The method is based on using task vectors to erase the concepts, in combination with diverse inversion, a form of textual inversion. A key feature is that the erasure is prompt-agnostic and is designed to work with diverse prompts, especially adversarial ones." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The evaluation is quite limited, it would be good if quantitative evaluation included diverse adversarial techniques in addition to concept inversion. There are some qualitative results for UnlearnDiffAtk and P4D in the appendix, but the paper would benefit from using these and maybe even others for more extensive quantitative evaluation. Also it would be good to show the method works also on other models than Stable Diffusion v1.4 specifically.\n* The method seems to be primarily a combination of task vector technique and a version of text inversion, applied to the problem of concept erasure, so it may lack significant novelty.\n* There are quite a few issues with the writing and presentation - the font is different than the standard one, this should be corrected; various typos, grammar issues or missing words, e.g. “jailbraking” L145, “might in some cases the usability might degrade” L358, “Fig. 6 demonstrate” L410, “how how” L414, …" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Was the vector from the Diverse Inversion set used in evaluating the robustness of the methods against Concept Inversion? If so, could you please provide information on how the metrics would change if this vector were excluded from the Diverse Inversion set?\n\n2. Could you provide a step-by-step description of the Diverse Inversion Set selection procedure? Additionally, please include details on the number of restarts for the Concept Inversion procedure.\n\n3. Why is the Control Task not utilized for selecting alpha, alongside the Diverse Inversion set?\n\n4. Can you elaborate on the toy example, specifically regarding the embedding grid search procedure?\n\n5. It would be beneficial to include additional visual examples to illustrate the results presented in Table 2." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors clearly identify the problem of “input dependence” associated with previous methods and provide compelling evidence of these issues via the MNIST toy experiment, which emphasizes prompt complexity rather than using a fixed set of prompts. \n\n- They propose a method to address these challenges, which combines an existing concept-forgetting technique Task Vectors with a novel procedure called Diverse Inversion to optimize parameter selection for Task Vectors. \n\n- Although Task Vectors is an already existing technique, the authors unveil its previously unexplored property of Concept Inversion Robustness.\n\n- The Diverse Inversion idea is an interesting approach that could be applied to other research areas, potentially enhancing our understanding of concept learning and erasure processes. \n\n- Overall, the text is straightforward and presents all ideas clearly and concisely." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel method for concept erasure in pre-trained generative models. This method consists of two key components: (1) the development of a Task Vector Method for concept erasure; and (2) the selection of optimal parameters through novel Diverse Inversion procedure. Notably, this approach is input-independent and does not rely on specific pre-defined prompts that contain concepts. As a result, it demonstrates enhanced robustness against concept inversion when compared to previous methods, while maintaining comparable results on unrelated concepts generation tasks and within the \"given prompt generation\" setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Certain aspects of the experimental workflow are not sufficiently detailed. For instance, the setup of the toy experiment on MNIST lacks information regarding the embedding grid search procedure. Additionally, the Diverse Inversion Set selection procedure may need more clarification, particularly regarding the number of restarts of the Concept Inversion procedure and a comprehensive step-by-step description.\n\n- Furthermore, it appears that the vector from the Diverse Inversion set, which is utilized for selecting the parameter alpha, was also employed in evaluating the robustness of the methods against Concept Inversion. If this is the case, it would be helpful to report how the metrics would be affected if this vector were removed from the Diverse Inversion set.\n\n- It would be beneficial to include additional visual examples to illustrate the results presented in Table 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Edit Block Selection: What was the rationale for choosing to edit only the first three blocks in the model? Would the authors consider expanding on why these specific blocks were selected for editing?\n- Alpha Parameter Choice: The choice of the α parameter remains somewhat unclear, with few details provided outside of Figure 7. Could the authors specify the α values used throughout the experiments and clarify whether they evaluated multiple α values to determine the optimal edit strength?\n- Figure Placement: Would the authors consider moving Figure 1 closer to its first reference on page 4 to improve readability and flow?\n- Table Clarity: Could the authors clarify the meaning of “SLD-Med” in Table 2 (page 10) and confirm if it is the same as “UCE” mentioned briefly in the related work section? Including these definitions would improve comprehension.\n- Equation Definition: In Equation 4, the terms and are not clearly defined. Could the authors provide explicit definitions for each variable, or alternatively, replace the equation with a detailed textual description if that would improve clarity?\n- Typos and Formatting: There are minor typos and formatting inconsistencies (e.g., “Sec.3.2” instead of “Sec. 3.2”). Would the authors consider addressing these issues to enhance overall readability?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Clarity and Structure: The paper is well-organized and clearly written, making it accessible and easy to follow, even for readers less familiar with the technical aspects of concept erasure and Task Vectors.\n- Visualization Quality: The visualizations of generated images are well-crafted, effectively illustrating the model’s concept erasure capabilities and supporting the clarity of experimental results.\n- Clear Literature Review: The related work section thoroughly covers relevant research on concept erasure and on jailbreaking generative models. This strong contextual foundation helps to situate the authors’ contributions within the broader field and underscores the necessity of robust model editing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of preventing style mimicry in text-to-image models by proposing an unconditioned approach to concept erasure, independent of user prompts. This approach uses Task Vectors (TV) for concept erasure, offering greater robustness to unexpected user inputs. Also, the authors introduce Diverse Inversion, a technique that estimates the required TV edit strength by identifying a broad set of word embeddings within the model’s input space, each capable of generating the target concept." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Edit Block Selection: The rationale for editing the first three blocks is not fully explained. A discussion on why these specific blocks were chosen would strengthen the methodological foundation. 
I suggest that the authors provide a brief explanation of the model architecture and how the blocks relate to different levels of abstraction or functionality.\n- Alpha Parameter Choice: The choice of α is not well-clarified. While Figure 4 mentions α, no figure or table apart from Figure 7 details the specific α values used. Since Diverse Inversion is intended to estimate the optimal strength of the Task Vector (TV) edit, it would be beneficial to provide explicit α values and clarify if the authors tested a range of α values to identify the best-performing option. I suggest that the authors include a table or figure to illustrate how they arrived at the optimal strength (a minimal sketch of the underlying task-vector edit arithmetic follows this record).\n- Figure Placement: Figure 1 appears on page 2, yet it is first referenced on page 4. Moving the figure closer to its initial mention, or adding an earlier reference to it in the text, would improve readability and flow.\n- Table Clarity: In Table 2 (page 10), the acronym “SLD-Med” lacks explanation, and the term “UCE” is only briefly mentioned in the related work section (page 3). It’s unclear if SLD-Med and UCE refer to the same concept; clearer definitions would enhance comprehension. I suggest that the authors include a brief explanation of these terms in a footnote or in the table caption.\n- Equation Definition: In Equation 4, the variables [a, b] and [c, d] are not clearly defined. While the meaning can be inferred from the surrounding text (Lines 341-343), each variable in the equation should be explicitly defined. I suggest that the authors consider adding a brief explanation of these variables immediately following the equation, which would maintain the mathematical formalism while improving readability. Alternatively, consider replacing the equation with a detailed textual description if it enhances clarity.\n- Typos and Formatting Issues:\n - Line 285: \"Sec.3.2\" should be \"Sec. 3.2\".\n - Line 343: \"e.g. Van Gogh\" should be \"e.g., Van Gogh\".\n - Line 354: \"I.e.\" should be formatted as \"I.e.,\" or, for clarity, replaced with \"For example,\".\n - Line 355-356: The sentence lacks a verb; it currently reads “we can the value of the edit strength α.” Please revise for clarity.\n - Line 360: \"i.e. setting\" should be \"i.e., setting\". \n - Line 400: \"In Figs\" should be \"In Fig\"." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024promptagnostic,\ntitle={Prompt-Agnostic Erasure for Diffusion Models Using Task Vectors},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2wDXNF0Gv4},\nnote={under review}\n}" }, "abstract": { "value": "With the rapid growth of text-to-image models, a variety of techniques have been suggested to prevent undesirable image generations. Yet, these methods often only protect against specific user prompts and have been shown to allow undesirable generations with other inputs. Here we focus on \textit{unconditionally} erasing a concept from a text-to-image model rather than conditioning the erasure on the user's prompt. We first show that compared to input-dependent erasure methods, concept erasure that uses Task Vectors (TV) is more robust to unexpected user inputs, not seen during training. However, TV-based erasure can also affect the core performance of the edited model, particularly when the required edit strength is unknown. 
To this end, we propose a method called \\textit{Diverse Inversion}, which we use to estimate the required strength of the TV edit. Diverse Inversion finds within the model input space a large set of word embeddings, each of which induces the generation of the target concept. We find that encouraging diversity in the set makes our estimation more robust to unexpected prompts. Finally, we show that Diverse Inversion enables us to apply a TV edit only to a subset of the model weights, enhancing the erasure capabilities while better maintaining model utility." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Concept Erasure" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/922f3dd0c204b56e090f7af56cfb1044f744a368.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/cccac54e130dc5fdc662465bce3b4955a68cf8d3.zip" }, "title": { "value": "Prompt-Agnostic Erasure for Diffusion Models Using Task Vectors" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
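Context for the task-vector (TV) edit that the reviews above repeatedly question: under the standard task-vector formulation, the core operation is a single weight-space update, sketched below. The function name, the use of PyTorch state dicts, and the application to every weight are illustrative assumptions; the paper itself applies the edit only to a subset of blocks and estimates the strength α via Diverse Inversion.

```python
import torch

def tv_erase(theta_base: dict, theta_concept: dict, alpha: float) -> dict:
    """Hypothetical sketch of an unconditional task-vector concept edit.

    theta_base:    pretrained weights (a state_dict of tensors)
    theta_concept: weights after fine-tuning ON the concept to be erased
    alpha:         edit strength (the quantity the reviews ask about)

    The task vector (theta_concept - theta_base) points toward the concept;
    subtracting alpha times it moves the model away from the concept
    independently of any user prompt.
    """
    return {
        name: theta_base[name] - alpha * (theta_concept[name] - theta_base[name])
        for name in theta_base
    }
```

Because such an edit acts on the weights rather than being conditioned on an input, there is no prompt for an adversary to circumvent, which is the robustness property the paper claims.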
2whSvqwemU
FM-TS: Flow Matching for Time Series Generation
main
Active
Time Series Generation;Flow Matching;Generative AI
generative models
1;3;5;5
5;3;2;4
2;1;2;3
1;1;2;3
2;1;3;2
3.5
3.5
2
1.75
2
-0.6742
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Could you clarify how the drift function $ v(Z_t, t) $ is modeled, and why you chose a linear interpolation $ Z_t = t \\cdot Z_1 + (1 - t) \\cdot Z_0 $? How does this linear interpolation impact the model’s ability to capture complex time dependencies in non-linear time series data?\n\n- The authors claim that this work represents a novel contribution to the field of Flow Matching. However, how does it build on or differ from the existing work presented in [2]?\n\n- The authors assert that the unconditional model can be directly applied to conditional generation without retraining. Could you elaborate on the mechanisms or transformations that enable this adaptation? Does this adaptation require additional architectural components, or is conditional information handled implicitly by the model?\n\n**References**\n\n[2] Kerrigan, G., Migliorini, G., & Smyth, P. (2023). Functional flow matching. arXiv preprint arXiv:2305.17209." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper makes an interesting attempt to apply Flow Matching, a technique that has shown promise in image generation, to the complex domain of time series generation. \n\n- The paper claims substantial efficiency gains over diffusion-based methods. Diffusion models, while powerful, suffer from high computational costs due to their iterative nature. Flow Matching theoretically offers a more straightforward ODE-based path, which could reduce the number of forward passes required for inference and training." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes FM-TS, a framework for time series generation based on Flow Matching (FM), as an alternative to diffusion models. The authors argue that FM-TS addresses the computational inefficiency and complexity of diffusion models by simplifying the generation process through continuous trajectory optimization. FM-TS is presented as being able to support both conditional and unconditional time series generation without retraining. However, significant gaps in the paper’s theoretical foundation, experimental validation, and clarity raise questions about the viability and originality of the approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper’s flow and organization present significant challenges in readability, largely due to unclear transitions between key concepts such as computational efficiency and generalization. It seems that the authors do not consistently differentiate these concepts in their approach. 
The dense and complex sections lack clear explanations, detracting from the paper’s overall coherence.\n\n- There is an unusual conflation of generalization and computational requirements, resulting in ambiguity. For instance, the authors assert that diffusion contributes to generalization on lines 056–057, yet they appear to refute this on lines 057–058, leading to further confusion.\n\n- The paper also lacks reproducibility support (**no available code or scripts**), as no implementation details are provided. Including code would facilitate verification of the results and support a broader understanding. Moreover, the appendix overall consists of only a few lines of explanation. It seems that the paper is not ready for publication at this stage.\n\n- The claim that FM-TS can generalize to conditional generation tasks without retraining is intriguing but underdeveloped. The paper lacks comparisons with models specifically designed for conditional tasks, and no compelling evidence is presented to validate FM-TS’s performance in such scenarios. A deeper exploration of FM-TS’s generalization capability would strengthen this claim.\n\n- The authors assert that their model outperforms the current state-of-the-art (SOTA); however, the results in Table 2 do not support this claim, as high standard deviation values suggest inconsistent performance. It would be valuable for the authors to discuss these variations and integrate them into their analysis.\n\n- The paper suggests that imputation and forecasting tasks are nearly identical, differing only in the choice of point masking $M$. This assumption oversimplifies the nature of imputation, which often requires bidirectional information to accurately infer missing points. In contrast, forecasting typically operates with unidirectional data. Recognizing these differences is essential for model design and performance.\n\n- The implementation details of \"t power sampling\" are missing. Without an explanation of how this method improves results, it is difficult to assess its functional role. Providing a detailed description of the sampling process would enhance transparency and reproducibility, offering insight into whether this is an optimization layer or a refinement in sampling for conditionality.\n\n- The paper concludes with a vague claim that the unconditional model can be “directly used for conditional generation.” However, no details or references are given to substantiate how the model adapts to conditional tasks without retraining. A brief explanation or citation would clarify this point.\n\nAt this stage, it is challenging to recommend acceptance of this paper, primarily due to concerns regarding reproducibility. Without access to code, it remains unclear how to replicate the authors' results. Furthermore, ***the improvements in the paper's tables do not align well with the contributions claimed in the introduction.***\n\n**References**\n\n[1] Qi, M., Qin, J., Wu, Y., & Yang, Y. (2020). Imitative non-autoregressive modeling for trajectory forecasting and imputation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12736-12745)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Given that the performance of the logit-normal distribution appears comparable or inferior to uniform sampling in the ablation study, can you clarify its advantages?\n2. What’s the underlying mechanism that t-power sampling enables the direct application of unconditional models for conditional tasks? Are there any trade-offs?\n3. What’s the runtime of the training phase and inference phase of FM-TS? How does this efficiency compare to other generative approaches, such as GANs and traditional diffusion-based models?\n4. What are the primary limitations of the FM-TS model?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. FM-TS effectively utilizes rectified flow to address the high computational demands and slow inference with traditional diffusion models, offering a more efficient generation.\n2. The introduction of the t-power sampling method is innovative, generalizing the generative models trained in unconditional setting to conditional scenarios without retraining.\n3. Experiments for unconditional generation are well-designed and the results are solid across various metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes FM-TS, a new approach for time series generation, based on the rectified flow. Leveraging the features of the rectified flow, FM-TS reduces the computational cost during training and handles the slow inference observed in traditional diffusion-based models. In addition, several proposed methods enable the direct use of models trained on unconditional generation for conditional tasks like forecasting and imputation without retraining. Experimental results show that FM-TS achieves better performance than existing methods in both effectiveness and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The overall writing quality is good, but some statements are confusing or misleading. For example, the descriptions about the capability of diffusion models in handling long-term dependency are contradictory in line 058 and line 061. The former states that diffusion models can capture long-range dependencies and generate diverse, high-quality samples, while the latter asserts that diffusion models struggle to preserve long-range dependencies and intricate patterns in time series data. \n2. There is insufficient discussion on the conditional generation, particularly on why the unconditional models are adapted for conditional tasks by Algorithm 1. The introduction of concepts like t-power sampling lacks sufficient context and explanation, which makes it challenging for readers unfamiliar with them to understand their implication. 
Can the authors provide a brief example of how Algorithm 1 adapts unconditional models for conditional tasks? Can you also expand on the intuition behind t-power sampling and its role in this adaptation?\n3. The ablation study on the logit-normal distribution does not convincingly demonstrate its superiority; its performance is comparable or inferior to uniform sampling. Could the authors provide more analysis on why the logit-normal distribution is beneficial despite the results shown in Table 4? Are there any qualitative differences or theoretical advantages not captured by the metrics used?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "**S1** Time series generation, though highly significant, remains less explored compared to image generation. The authors have made a commendable effort in addressing this challenging task.\n\n**S2** The paper demonstrates strong experimental results in both unconditional and conditional time series generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the task of generating both conditional and unconditional time series data. Diffusion models have proven effective for this purpose but are computationally expensive. To address this, the authors propose a model called FM-TS, which leverages rectified flow for efficient time series generation. A key advantage of FM-TS is its ability to generate conditional time series data without requiring retraining after being initially trained on unconditional generation tasks. The model is evaluated across multiple tasks, including unconditional generation, forecasting, and imputation, demonstrating superior performance compared to existing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1** The paper lacks clarity in presenting the core model illustrated in Figure 2. Although rectified flow is explained thoroughly as a preliminary concept, the main components of the model are only briefly introduced in the final paragraph of Section 3, without adequate explanation. The authors should explain the model components and clarify the rationale behind the chosen architecture and its relevance to time series.\n\n**W2** In its current form, the paper appears to be an application of rectified flow to time series without addressing the specific challenges in adapting rectified flow from image data to time series, such as causality, seasonality, and trends; the novelty is therefore limited.\n\n**W3** The experimental evaluation is insufficient.\n- **W3.1** What is the motivation behind choosing squared error as the evaluation metric? 
Squared error is an evaluation metric for point predictions. For a generative time series model, evaluating solely on squared error for forecasting and imputation is inadequate. A more suitable evaluation would be based on metrics like the Continuous Ranked Probability Score (CRPS) for univariate distributions or the Negative Log-Likelihood for both univariate and multivariate distributions.\n- **W3.2** What is the reason for choosing only the current set of baselines for forecasting and imputation? It would be beneficial to compare FM-TS against point-estimation models since the chosen evaluation metric is squared error, e.g., PatchTST and TS-Mixer for forecasting tasks.\n- **W3.3** For the imputation task, the choice of only 70% and 80% missing data rates should be justified. For direct comparison with existing work, consider the settings from Tashiro et al. (10%, 50%, and 90%) and Alcaraz et al. (70%, 80%, and 90%).\n- **W3.4** Since the main motivation for FM-TS is the computational inefficiency of diffusion models, the authors should report the training runtime of FM-TS and other baseline models (runtime per epoch, number of epochs until convergence, and/or total training time). Figure 3 attempts this only in terms of inference speed.\n\n**Minor:**\n\n- **M1** In line 242, should the function \( v: \mathbb{R}^{l \times d} \times [0,1] \to \mathbb{R}^{l \times d} \) include \( t \in [0,1] \) as an argument?\n- **M2** Please increase the font size of legends and labels in Figures 4 and 5 for readability.\n- **M3** Enhance the captions for Tables 3 and 4 to clarify the evaluation metrics used." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the possible reasons that diffusion models show a bar-like synthetic PCA plot in Figure 5? It is strange to have a bar-like shape in a PCA plot.\n\n2. Figure 4 does not seem to suggest good performance of FM-TS. Why is that, and why present it in the paper? \n\n3. When you do conditional time series generation, how do you run the diffusion model baseline? Is that also tuned to be conditional, or does it still learn the entire density of the time series?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Clear presentation.\n2. Proper literature review." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors incorporate Flow Matching into the diffusion model for time series modeling. They test it on multiple datasets with multiple metrics, with ablation studies and efficiency tests." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper does not make enough effort to distinguish itself from similar works in the literature, such as CFM-TS in ICML 2024.\n\n2. 
The experiments do not compare with ODE-based time series works, which could also be used for time series generation. What is unique in flow matching that benefits time series modeling? (A minimal sketch of the rectified-flow objective follows this record.)\n\n3. It seems to me that the predictive score is the most important metric, unless the authors can suggest other uses of generated synthetic time series beyond privacy-protected learning. However, the proposed model does not seem significantly better than the baselines on the predictive score.\n\n4. Figure 1 does not seem to be a comprehensive efficiency comparison; it only compares FID against one baseline.\n\n5. There is very limited interpretation of the experimental results. The paper mostly lists the numbers, so readers can hardly understand why the results look the way they do or what they imply." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce FM-TS, a groundbreaking flow matching framework for time series generation that achieves state-of-the-art performance in both conditional and unconditional settings." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024fmts,\ntitle={{FM}-{TS}: Flow Matching for Time Series Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2whSvqwemU},\nnote={under review}\n}" }, "abstract": { "value": "Time series generation has emerged as an essential tool for analyzing temporal data across numerous fields. \nWhile diffusion models have recently gained significant attention in generating high-quality time series, they tend to be computationally demanding and reliant on complex stochastic processes. \nTo address these limitations, we introduce FM-TS, a rectified Flow Matching-based framework for Time Series generation, which simplifies the time series generation process by directly optimizing continuous trajectories. This approach avoids the need for iterative sampling or complex noise schedules typically required in diffusion-based models. \nFM-TS is more efficient in terms of training and inference.\nMoreover, FM-TS is highly adaptive, supporting both conditional and unconditional time series generation. \nNotably, through our novel inference design, the model trained in an unconditional setting can seamlessly generalize to conditional tasks without the need for retraining. Extensive benchmarking across both settings demonstrates that FM-TS consistently delivers superior performance compared to existing approaches while being more efficient in terms of training and inference. \nFor instance, in terms of discriminative score, FM-TS achieves $0.005$, $0.019$, $0.011$, $0.005$, $0.053$, and $0.106$ on the Sines, Stocks, ETTh, MuJoCo, Energy, and fMRI unconditional time series datasets, respectively, significantly outperforming the second-best method which achieves $0.006$, $0.067$, $0.061$, $0.008$, $0.122$, and $0.167$ on the same datasets.\nWe have achieved superior performance in solar forecasting and MuJoCo imputation tasks, significantly enhanced by our innovative $t$ power sampling method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time Series Generation", "Flow Matching", "Generative AI" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4157c9fc038fa48eeb8b6df93425522886eac07f.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/db9308f382bec338356a62d23a6b81e7bfb157a8.pdf" }, "title": { "value": "FM-TS: Flow Matching for Time Series Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2wkjYEYoss
Gamma: Toward Generic Image Assessment with Mixture of Assessment Experts
main
Active
Image assessment;Mixture of Experts (MoE);Mixed training
applications to computer vision, audio, language, and other modalities
5;5;5;5
5;4;5;4
3;3;2;2
2;2;2;2
3;3;3;2
5
4.5
2.5
2
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the above comments" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The experiments show the better performance of the proposed method" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission propose a generic image assessment model using mixture of assessment experts, named Gamma. To deal with the problem of applying the image assessment model across various scenarios, Gamma proposes two techniques: 1) proposing a Mixture of Assessment Experts (MoAE) module, which employs shared and adaptive experts to dynamically learn common and specific knowledge for different datasets; 2) introducing a Scene-based Differential Prompt (SDP) strategy, which uses scene-specific prompts to provide prior knowledge and guidance during the learning process. Although the experiments shows the better performance of the proposed method, there are still some concerns for its acceptance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1)\tThe relationship between the adaptive experts and the scene-based differential prompt is unclear. The adaptive experts is also a type of prompt engineering to capture the specific knowledge of the datasets, which is much similar to the scene-based prompt with scene-specific priors. Much analysis on their inner mechanism is suggested to be added.\n2)\tFurthermore, rather than the statistical results on the datasets, I would like to see the analysis and experiment results to prove that the adaptive experts indeed capture the specific knowledge of different datasets and how the specific knowledge is reflected in the adaptive experts.\n3)\tAblation studies include experiments on the number of experts. What is the relationship between their number with the number of the datasets? As shown in Table 2, there are five datasets used for ablation, but three experts get the best performance. Could you help to analyze how the three experts capture the knowledge of the five datasets?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of using MoAE to overcome the dataset distribution gap is reasonable. \n2. The performance is better than baselines (but may be unfair, see Weaknesses).\n3. The paper is presented clearly and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript introduces a mixture of assessment experts and scene-based prompts to achieve high-performing, unified image quality and aesthetics assessment across diverse datasets and scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **The comparison with baseline methods is unfair**. Table 1 contains some blanks for baseline methods like UNIQUE and LIQE, which raises concerns about the experimental setup. I have carefully checked the results of UNIQUE and LIQE and ensured that these numbers are directly copied from previous papers. The training datasets of UNIQUE and LIQE differ from this manuscript, which is unfair.\n2. **The generalization experiments are not enough**. Though this manuscript mentions that 12 datasets are involved, most of them are used in training. Only two of them are used to evaluate the generalization ability. More results from cross-dataset experiments are needed. For example, for the seven datasets in Table 1, how about the results of training with three datasets and evaluating with another four datasets?\n3. **The manuscript does not compare with an important baseline**, Q-Align, which also proposes a unified framework that can co-train multiple datasets. Moreover, only training on three datasets, Q-Align’s results on some datasets have surpassed this manuscript. \n4. **There is no analysis of efficiency**, though this manuscript claims the proposed method is both effective and efficient. Please report the comparison of the number of parameters, FLOPs, and training / inference time to support the claim. \n5. **There is no sensitivity analysis of prompts**. This manuscript uses scene-based differential prompts to improve the ability across multiple datasets and scenes. However, it is risky that the model will be highly reliant on such prompts. During testing, if the prompts are changed, the performance may significantly drop. Therefore, a detailed analysis of the sensitivity to prompts should be included." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I hope the author can prove that the results after data tuning by the hired experts are closer to the real situation than the previous results.\n2. I would like the author to explain the basis on which the model was chosen.\n3. 
I hope the authors can add an experimental result that follows the paper's model theory without relying on the data of the several employed experts.\n4. How is the scene-based differential prompt implemented?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors take into account the annotation bias of different datasets and creatively propose a hybrid evaluation expert module. This work will help establish a unified evaluation standard across datasets in the future, which is commendable. The authors' experiments on multiple datasets effectively demonstrate the universality of their model, and these experiments also prove the effectiveness of the introduced scene-based differential prompt strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a generic image evaluation model, Gamma, based on hybrid evaluation experts, which is trained to efficiently evaluate images from different scenes through mixed datasets. Taking into account the annotation bias of different datasets, the authors propose a hybrid evaluation expert module that uses shared and adaptive experts to dynamically learn common and specific knowledge of different datasets, respectively. At the same time, a scene-based differential prompt strategy is introduced to enhance the adaptability to various scenarios. They conducted an empirical study on 12 datasets, compared against existing models, and achieved state-of-the-art results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper aims to solve the annotation bias of different datasets, but doing so by adding data raises the suspicion of fitting the answer to the question. The individual tendencies of the employed experts are not addressed, and it is difficult to say that the results after tuning the data with these experts are closer to the real situation than the previous results.\nAt the same time, the authors choose to adapt only the rear modules of the model instead of all modules, claiming that this reduces the model's computational requirements, which is obvious; but regarding why this choice was made, does the benefit of reduced computation really outweigh the potential loss in model quality? Is it really worth it? The authors do not offer a convincing explanation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "This paper has a reasonable motivation but, in my opinion, fails to solve key problems in IQA. My main concerns are as follows:\n\n1. 
In Line 079, the author mentions that 'the primary challenge in mixed-dataset training is the mean opinion score (MOS) bias'. However, I do not find a reasonable solution to this challenge. If my understanding is correct, the authors simply adopt different experts and directly train them on the labeled data. This does not convince me that the bias across different datasets is resolved, considering that the labels themselves are inherently biased.\n\n2. The proposed approach still heavily relies on training data; though the bias can be alleviated by more data, it may still fail in practical scenarios. The proposed approach is trained and tested on the same data sources, i.e., the 12 benchmarks mentioned in the paper. This cannot validate the generalization ability of the proposed approach, which is the key point that existing IQA approaches cannot overcome. The authors may consider testing on other sources that were never used during training for further validation.\n\n3. The proposed approach lacks interpretability, which is another problem that existing approaches commonly have, i.e., what does each expert actually learn? Though it has some reasonable designs, the proposed approach is still a black box. The authors may consider explaining what each expert learns after training. Moreover, the improvement of 5 experts compared with 3 experts in Table 2 is very marginal and sometimes even worse. This contradicts the claim in Line 413. The authors should provide more explanation.\n\n4. There should be model complexity verification, including parameters, FLOPs, and inference time compared with other baselines.\n\n5. I am a little confused about why a frozen CLIP can be directly adopted as a generic expert without fine-tuning on IQA datasets, since it was never trained to do so. The authors may provide a more detailed motivation for this." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ The motivation to combine multiple experts for different IQA tasks is reasonable.\n+ The paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an MoE approach for generic IQA. The main idea is to adopt a frozen CLIP as a general expert combined with trainable experts for different IQA tasks. The proposed approach demonstrates superiority on different IQA tasks compared with several existing baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed approach still heavily relies on training data; though the bias can be alleviated by more data, it may still fail in practical scenarios, and training and testing on data from the same sources cannot validate the generalization ability of the proposed approach, which is the key point that existing IQA approaches cannot overcome.\n- The proposed approach lacks interpretability, which is another problem that existing approaches commonly have, i.e., what does each expert actually learn? Though it has some reasonable designs, the proposed approach is still a black box." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024gamma,\ntitle={Gamma: Toward Generic Image Assessment with Mixture of Assessment Experts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2wkjYEYoss},\nnote={under review}\n}" }, "abstract": { "value": "Image assessment aims to evaluate the quality and aesthetics of images and has been applied across various scenarios, such as natural and AIGC scenes. Existing methods mostly address these sub-tasks or scenes individually. While some works attempt to develop unified image assessment models, they have struggled to achieve satisfactory performance or cover a broad spectrum of assessment scenarios. In this paper, we present \\textbf{Gamma}, a \\textbf{G}eneric im\\textbf{A}ge assess\\textbf{M}ent model using \\textbf{M}ixture of \\textbf{A}ssessment Experts, which can effectively assess images from diverse scenes through mixed-dataset training. Achieving unified training in image assessment presents significant challenges due to annotation biases across different datasets. To address this issue, we first propose a Mixture of Assessment Experts (MoAE) module, which employs shared and adaptive experts to dynamically learn common and specific knowledge for different datasets, respectively. In addition, we introduce a Scene-based Differential Prompt (SDP) strategy, which uses scene-specific prompts to provide prior knowledge and guidance during the learning process, further boosting adaptation for various scenes. Our Gamma model is trained and evaluated on 12 datasets spanning 6 image assessment scenarios. Extensive experiments show that our unified Gamma outperforms other state-of-the-art mixed-training methods by significant margins while covering more scenes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Image assessment", "Mixture of Experts (MoE)", "Mixed training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/575bb545765933d2b2be99b039ed517192a795c5.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Gamma: Toward Generic Image Assessment with Mixture of Assessment Experts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2wmxxYxVF0
SEE: See Everything Every Time - Broader Light Range Image Enhancement via Events
main
Active
Event Camera;Image Brightness Enhancement;Brightness Adjustment Dataset
datasets and benchmarks
3;5;6;6
5;4;4;4
2;2;3;3
2;2;3;3
2;3;3;3
5
4.25
2.5
2.5
2.75
-0.942809
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why is broader brightness adjustment using event cameras necessary when exposure control can be achieved through established techniques? How does SEE-Net theoretically outperform these approaches?\n \n2. What specific performance gains justify the choice of cross-attention over simpler fusion techniques in the context of this problem?\n \n3. Could the authors provide quantitative metrics or examples to verify the SEE-600K dataset’s consistency and quality, addressing the observed artifacts?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The SEE-600K dataset expands upon previous datasets, offering more diverse lighting scenarios, which could be useful for broader experimentation in event-based imaging.\n\n- The lightweight architecture of SEE-Net (1.9M parameters) suggests computational efficiency, which may be beneficial in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SEE-Net, a framework for image brightness adjustment using event cameras across a broad range of lighting conditions. It also presents the SEE-600K dataset, containing event-image pairs under various lighting scenarios. The model employs cross-attention to fuse event and image data, enabling prompt-based brightness control. Experimental results suggest SEE-Net’s improved performance over baseline methods on the SDE and SEE-600K datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed problem of enhancement with event cameras across  broader brightness is not particularly novel. Prior works on event-based HDR (Cui et al., 2024; Yang et al., 2023; Messikommer et al., 2022) have already explored similar concepts, partially addressing the needs this paper claims as unique. The distinction in this paper’s approach does not clearly add new knowledge to the field.\n \n\n- Also, the problem’s importance is unclear, especially given that established techniques can already perform exposure adjustments during enhancement. Techniques like [1, 2] allow exposure control with brightness factor as prompts. The paper does not demonstrate how SEE-Net outperforms these approaches when combined with event-based imaging theoretically and empirically.\n \n- The core methodology of using cross-attention to merge event and image data is not new and has been applied extensively in similar tasks [3, 4]. Furthermore, the proposed cross-attention module and prompt mechanism are insufficiently justified. There is no clear rationale for why these choices improve performance over simpler fusion techniques, such as concatenation, or why they surpass existing multi-modal enhancement frameworks. 
The theoretical foundations for the encoding and decoding processes are limited, leaving the importance of each component unclear.\n \n- The SEE-600K dataset is primarily an expanded version of SDE (Liang et al., 2024), constructed with similar strategies and devices, and addressing a similar problem. Although it extends certain aspects through refined engineering techniques, these modifications alone do not constitute a significant novelty or research contribution.\n \n- The SEE-600K dataset shows quality issues, particularly in the normal-light images. Figures 6 and 12 exhibit noticeable artifacts, such as blurriness (e.g., the tree textures in Row 3 of Figure 12, toy contours in Row 1), saturation (e.g., the toys in Row 1), noise (e.g., grass behind bicycles in Row 4), and other visual defects (e.g., ground in Row 1 of Figure 13). These issues detract from the dataset’s value as a high-standard resource and raise questions about its suitability for rigorous research.\n \n\n[1] Kindling the darkness: A practical low-light image enhancer, ACM MM, 2019\n\n[2] Learning to See in the Dark, CVPR, 2018\n\n[3] Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields, CVPR, 2023\n\n[4] Event-Based Fusion for Motion Deblurring with Cross-modal Attention, ECCV 2022" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Table 2, the proposed method shows the worst results when trained on SDE for both high light and normal light but achieves the best results when trained on SEE. Which part of the proposed method contributes to this significant improvement for high light and normal light? \nAdditionally, I noticed that some methods trained on SDE are missing when trained on SEE. What is the reason for removing these methods?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The SEE-600K dataset is carefully designed and captured, which also makes it suitable for testing other low-light enhancement and HDR reconstruction methods.\n2. The brightness adjustment method takes the brightness prompt into consideration, which reduces the difficulty of recovering the actual brightness level without prior knowledge." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an image enhancement and brightness adjustment method using SEE-600K, a carefully captured dataset spanning different brightness levels. \nThe SEE-600K dataset is significantly larger than existing datasets and was captured under diverse lighting conditions, making it well-suited for both low-light enhancement and HDR imaging applications. 
\nThe proposed enhancement method uses cross-attention to fuse events and images, while the brightness adjustment method leverages brightness prompts to produce results tailored to different brightness levels. \nThe proposed approach achieves superior results compared to previous methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The results of the proposed method are not good enough for over-exposed areas. Some details are missing in saturated areas, e.g., Figure 19 and Figure 20. They are also not good enough for under-exposed areas, e.g., Figure 5.\n2. The results of different methods in Figure 5 are not well-aligned. If these results are from different frames, the comparison may not be fair." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "•\tSince B controls the brightness of the output image, it is not related to the input images. Consider a case: if I want to reconstruct a bright image (set B=0.8) from two different input images (one bright, one dark), what will the resulting images look like?" }, "rating": { "value": 5 
This limitation is visible in the normal-light images presented in Figure 13 (c), where details captured by the event sensor are underrepresented.\n\n•\tThe motivation for designing specific position and Bayer pattern embeddings within the network architecture is not adequately justified. The authors introduce these components, but it remains unclear how they enhance the model’s performance or if they address particular challenges within the task. Clarifying their role and potential benefits would improve understanding and transparency.\n\n•\tThe proposed method’s loop function may result in long processing times, which could hinder its usability, particularly in real-time or low-latency applications. Without a detailed analysis of the computational demands and latency, it is challenging to assess the network’s practicality in deployment scenarios. Although the size of the proposed network is small (1.9M), the FLOPs count is quite high (405.72).\n\n•\tIn Figure 5, the output of the proposed method appears visibly blurred, especially when compared to the sharpness of baseline methods like EvLowLight (Liang et al., ICCV23) and EvLight (Liang et al., CVPR24). This blurring is particularly noticeable around edges, such as those of the box under the desk, which could impair the network’s effectiveness in applications requiring high-detail preservation.\n\n•\tTable 3, case #6, reveals that disabling the prompt merge component results in a slight PSNR decrease but a corresponding SSIM increase. This discrepancy suggests that while prompt merging contributes to maintaining overall pixel-level fidelity (PSNR), it may slightly compromise structural similarity (SSIM). Further analysis of this trade-off could provide insights into the optimal configuration for different scenarios." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* The proposed dataset contains some artifacts, such as defocus blur (the normal light one in the first group of Fig12), false color (the normal light one in the first group of Fig13), etc. I wonder why the authors did not consider removing them. In addition, please analyze the influence of such kinds of artifacts on the performance of the proposed method and the compared methods.\n* Does the proposed method consider dynamic scenes? Does the proposed dataset contain frames with motion blur? Please analyze the influence of motion blur on the performance of the proposed method and the compared methods.\n* Could you please show some examples with different prompts (i.e., for each example, set multiple different B values and check the results) and compare with other methods?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The dataset is the first event-based dataset covering a broader luminance range. 
\n* The proposed method achieves state-of-the-art performance. I like the idea of adjusting the brightness of images across a broader range of lighting conditions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper collects a dataset named SEE-600K, consisting of 610,126 images and corresponding events across 202 scenarios, each featuring an average of four lighting conditions with over a 1000-fold variation in illumination. Besides, it proposes a framework that effectively utilizes events to smoothly adjust image brightness through the use of prompts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* It seems that the proposed method cannot reconstruct HDR images, i.e., the output images are still LDR. However, in Line52, the authors mention the weakness of event-based HDR reconstruction, but do not provide a solution. I think since the event camera could have some HDR properties, the output image should also have some HDR properties.\n\n* The comparisons may not be comprehensive enough. Please compare with more methods designed for event-based low-light enhancement such as [a,b,c]. Besides, it seems that the compared methods are not trained with the same loss function used in this paper, which may not be entirely fair. In addition, please also evaluate the results on the dataset used in EvLowLight.\n\n* The writing quality can be further improved. There are some typos (e.g., line 234, cna --> can) that need to be fixed, and the conference names in Ref should be unified (e.g., for CVPR, the authors use both \" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition\" and \" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition\").\n\n [a] Event-Guided Attention Network for Low Light Image Enhancement\n\n [b] Low-light video enhancement with synthetic event guidance\n\n [c] Exploring in Extremely Dark: Low-Light Video Enhancement with Real Events" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a novel framework using event cameras and the SEE-0.6M dataset to enhance and adjust image brightness across a wide range of lighting conditions, enabling robust high dynamic range image restoration from day to night." 
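The TLDR above describes adjusting brightness through a prompt, and the review question earlier asks what happens when the same B is paired with very different inputs. The sketch below shows one generic way a scalar prompt can condition decoder features (FiLM-style modulation). It is an assumption-laden illustration, not the paper's actual decoder: the sinusoidal embedding, layer sizes, and module name are all hypothetical.

```python
# Hedged sketch of scalar-prompt conditioning (FiLM-style), assuming a
# brightness prompt B in [0, 1]. This is an illustration, not the paper's
# actual decoder: layer sizes and the sinusoidal embedding are assumptions.
import math
import torch
import torch.nn as nn

class BrightnessFiLM(nn.Module):
    def __init__(self, dim=64, emb=32):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Linear(emb, dim), nn.SiLU(), nn.Linear(dim, 2 * dim))
        self.emb = emb

    def embed(self, b):
        # Sinusoidal embedding of the scalar prompt, as in positional encodings.
        half = self.emb // 2
        freqs = torch.exp(torch.arange(half) * (-math.log(10000.0) / half))
        ang = b[:, None] * freqs[None, :]
        return torch.cat([ang.sin(), ang.cos()], dim=-1)

    def forward(self, feats, b):
        # feats: (batch, tokens, dim); b: (batch,) target brightness.
        scale, shift = self.to_scale_shift(self.embed(b)).chunk(2, dim=-1)
        return feats * (1 + scale[:, None, :]) + shift[:, None, :]

if __name__ == "__main__":
    film = BrightnessFiLM()
    feats = torch.randn(2, 196, 64)
    b = torch.tensor([0.8, 0.2])  # same features, two brightness targets
    print(film(feats, b).shape)   # torch.Size([2, 196, 64])
```

Under this kind of conditioning, a bright and a dark input given the same B=0.8 would be modulated toward the same target brightness, but whatever content the features carry (noise, saturation) still differs, which is presumably what the reviewer's question probes.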
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024see,\ntitle={{SEE}: See Everything Every Time - Broader Light Range Image Enhancement via Events},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2wmxxYxVF0},\nnote={under review}\n}" }, "abstract": { "value": "Event cameras, with a high dynamic range exceeding $120dB$, which significantly surpasses traditional cameras,demonstrate superior robustness under various lighting conditions, including both low-light and high-light situations.\nHowever, recent event-vision research only consider low-light image enhancement and neglected image enhancement and brightness adjustment under a broader range of lighting conditions, \\eg, normal or high illumination.\nBase on this, we propose a novel research question: how to employ events to enhance and adjust brightness of images captured under \\textbf{broader lighting conditions} —including low light, normal light, and high light — aiming to restore clear images from day to night.\nTo investigate this question, we first collected a new dataset, SEE-0.6M, comprising 610,126 images and corresponding events across 202 scenarios spanning from day to night, each with an average of four lighting conditions exhibiting more than a 1000-fold differences in illumination from low-light to high-light.\nSubsequently, we propose a framework that effectively employ the high dynamic range information from events to smoothly adjusts brightness of the image through prompts.\nOur framework considers the camera sensor's patterns to capture color, utilizes sparse learning to represent events as a brightness dictionary, and adjust dynamic range of images through cross-attention to form a broader light range representation (BLR).\nFinally, the BLR is decoded at the pixel level into an image of corresponding brightness via prompts.\nExperimental results demonstrate that our method not only performs well on the low-light enhancement dataset but also shows robust performance on wide light-range enhancement using SEE-0.6M dataset.\nAdditionally, our method allows for pixel-level brightness adjustment, providing flexibility for post-processing, which may inspire more imaging applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Event Camera", "Image Brightness Enhancement", "Brightness Adjustment Dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b72b03baa5f0de1b6d587d5a738b9fbe2e727693.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/480d1f43c21625d7b0b275615787120d86251f09.zip" }, "title": { "value": "SEE: See Everything Every Time - Broader Light Range Image Enhancement via Events" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2x1U8a3s7G
Prompt Diffusion Robustifies Any-Modality Prompt Learning
main
Active
Prompt learning;Diffusion model;Vision-language models
generative models
3;5;6
4;4;3
2;3;3
2;3;3
3;3;3
4.666667
3.666667
2.666667
2.666667
3
-0.755929
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is the author's approach a two-stage process, starting with a prompt study followed by a prompt proliferation.\n2. Diffusion models incorporate randomness in the generation process, which may lead to uncontrollable fluctuations in the generated prompts and thus affect the robustness of the model. How to cope with the randomness of the generated prompts and avoid the instability of prediction caused by it?\n3. The authors' approach seems to be applicable only to VPT-shallow prompt types, and whether the authors' approach can be migrated to the VPT-deep prompt learning paradigm." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The method in this paper generates customized prompts for each sample by gradually optimizing the prompts through diffusion, which enhances the accuracy of prediction and generalization across downstream tasks.\n2. The diffusion prompting method in this paper is a plug-and-play module that can be seamlessly integrated into existing textual, visual, or multimodal prompt learning methods.\n3. The method in this paper improves the prompt learning process by efficiently extracting unique domain details from test images without mixing them with class labels." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method called Prompt Diffusion, which employs a diffusion model to progressively refine prompts, enabling customized prompts for each sample. By introducing a technique for creating tailored prompts for individual test samples, this method addresses the limitations of fixed prompts, enhancing the model's robustness to distribution shifts. Empirical results on extensive datasets validate the effectiveness of this approach, demonstrating its robustness in generalization tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors' method requires stepwise optimization of the prompts and may require several iterations to obtain optimal results, in addition, the introduction of a diffusion model increases the complexity of the system, and therefore whether the training time is likely to be relatively long.\n2. Whether the authors' approach is a two-stage process, where prompt learning is performed first, followed by diffusion of the prompts, and the final model performance relies on the goodness of the previously learned prompts. In addition, the diffusion process relies on random noise vectors to generate the prompts and therefore may be sensitive to noise, which may affect the stability of the final performance." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I am curious about the setting of the two loss weights β in Equation (8). Can further experimental analysis be provided?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Experiments have shown that the proposed method outperforms baseline methods.​\n\n2. The overall idea is intuitive and straightforward, addressing the limitations of fixed prompts by leveraging diffusion models to generate over-fitted prompts per sample, which enhances model robustness against distribution shifts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors introduce prompt diffusion, which utilizes a diffusion model to refine prompts for each input image, thereby enhancing the model's ability to generalize across different distributions. The proposed prompt diffusion is a straightforward plug-and-play module that can be seamlessly integrated into existing prompt learning frameworks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Considering that the proposed method is conducted on per sample. during training, does it introduce a significantly larger computational load compared to conventional prompt learning methods? Can a comparative analysis be provided to address this concern?\n\n2. While the proposed method is plug-and-play and the pipeline figure demonstrations are based on CoCoOp, it would be beneficial to include sections addressing visual prompt tuning and multi-modal prompt tuning. Additionally, the method emphasizes the meta-net π within CoCoOp, but it is unclear how it handles other prompt learning methods that do not involve π, such as VPT and MaPLe.\n\n3. The length of prompts in prompt learning methods can affect the final performance. Does the proposed method also encounter similar situations? It is encouraged for the authors to supplement relevant ablation studies to address this concern.\n\n4. There are also some works in the field of prompt learning that address the limitations of fixed prompts by generating instance-level prompts (e.g. [1]). It is recommended that the authors supplement the related work to make the paper more comprehensive.\n\n[1] Xinyang Liu ,et al. Patch-Token Aligned Bayesian Prompt Learning for Vision-Language Models. 
UAI 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the key motivations behind using diffusion models for prompt learning, and how does it address the limitations of fixed prompts?\n2. How does Prompt Diffusion leverage the diffusion model to gradually transition from a random to a sample-specific prompt?\n3. In what ways does Prompt Diffusion enhance generalization capabilities across base-to-new, cross-dataset, and domain generalization tasks?\n4. How does Prompt Diffusion ensure compatibility with existing prompt learning models across textual, visual, and multimodal prompts?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tIntroduces an innovative, modality-agnostic diffusion process that significantly enhances robustness in prompt-based learning.\n2.\tDemonstrates consistent empirical improvements across various prompt learning tasks, supporting the efficacy of diffusion models.\n3.\tEfficient design reduces inference time, making it suitable for diverse real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel framework, Prompt Diffusion, which aims to improve the generalizability and robustness of prompt-based learning across various modalities (e.g., visual, textual, multimodal). In prompt-based learning, especially for foundation models in zero-shot and few-shot learning, fixed prompts often suffer from distributional shifts, impacting performance on unseen data. Prompt Diffusion leverages a diffusion model to refine prompts gradually, transforming them from a generic to a sample-specific prompt. This process enhances the robustness and adaptability of prompts across datasets with distinct distributions, providing a plug-and-play solution compatible with existing prompt-learning methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe paper does not fully articulate the specific limitations of the SOTA prompt mehthods in adapting to distributional shifts in data, which creates ambiguity around the critical nature of these issues within broader prompt-learning applications. To make this critique more actionable, the authors could quantify the performance degradation caused by these shifts in existing methods to better contextualize the importance of their contribution. Specific examples are not enough to illustrate the problem.\n2.\tAlthough the diffusion model is proposed to generate sample-specific, customized prompts, the paper does not clearly explain why diffusion was chosen over other, potentially simpler methods. This raises questions about the model's unique contributions and practical effectiveness. 
For instance, if simpler statistical methods like ProDA[1] are available, what advantages does the complex diffusion model offer? Moreover, there are already several statistical approaches for prompt learning, such as Bayesian Prompt Learning[2], which the authors could consider referencing.\n3.\tThe approach has limited empirical exploration outside the image-text domain, raising questions about its generalizability to other modalities. To strengthen this point, the authors could discuss the potential challenges and adaptations needed to apply their method to other modalities, such as audio or video. \n4.\tThe high resource demands of diffusion models, including substantial GPU and training time requirements, make them impractical for parameter-efficient methods such as prompt learning. The complexity and cost of implementing diffusion models in this context undermine their accessibility and practicality. \n\n[1] Lu, Yuning, et al. \"Prompt distribution learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[2] Derakhshani, Mohammad Mahdi, et al. \"Bayesian prompt learning for image-language model generalization.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper introduces prompt diffusion, which uses a diffusion model to gradually refine prompts to obtain a customized prompt for each sample." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024prompt,\ntitle={Prompt Diffusion Robustifies Any-Modality Prompt Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2x1U8a3s7G},\nnote={under review}\n}" }, "abstract": { "value": "Foundation models enable prompt-based classifiers for zero-shot and few-shot learning. Nonetheless, the conventional method of employing fixed prompts suffers from distributional shifts that negatively impact generalizability to unseen samples. This paper introduces prompt diffusion, which uses a diffusion model to gradually refine prompts to obtain a customized prompt for each sample. \nSpecifically, we first optimize a collection of prompts to obtain over-fitted prompts per sample. Then, we propose a prompt diffusion model within the prompt space, enabling the training of a generative transition process from a random prompt to its overfitted prompt. As we cannot access the label of a test image during inference, our model gradually generates customized prompts solely from random prompts using our trained prompt diffusion. Our prompt diffusion is generic, flexible, and modality-agnostic, making it a simple plug-and-play module seamlessly embedded into existing prompt learning methods for textual, visual, or multi-modal prompt learning.\nOur diffusion model uses a fast ODE-based sampling strategy to optimize test sample prompts in just five steps, offering a good trade-off between performance improvement and computational efficiency.\nFor all prompt learning methods tested, adding prompt diffusion yields more robust results for base-to-new generalization, cross-dataset generalization, and domain generalization in classification tasks evaluated over 15 diverse datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
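To make the abstract's two-stage description concrete, the following is a hedged sketch of a standard epsilon-prediction objective applied in prompt space: a denoiser is trained to recover the noise mixed into a per-sample over-fitted prompt, conditioned on image features. The conditioning scheme, noise schedule, and denoiser signature are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a DDPM-style training step applied in prompt space,
# matching the abstract's description at a high level only. The schedule
# and the denoiser interface are assumptions.
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, overfit_prompt, image_feat, T=1000):
    """Teach `denoiser` to predict the noise mixed into an over-fitted prompt."""
    b = overfit_prompt.shape[0]
    t = torch.randint(0, T, (b,))
    alpha_bar = torch.cos(t.float() / T * torch.pi / 2) ** 2  # cosine-like schedule
    noise = torch.randn_like(overfit_prompt)
    x_t = (alpha_bar.sqrt()[:, None] * overfit_prompt
           + (1 - alpha_bar).sqrt()[:, None] * noise)
    pred = denoiser(x_t, t, image_feat)  # hypothetical signature
    return F.mse_loss(pred, noise)

if __name__ == "__main__":
    dummy = lambda x, t, f: torch.zeros_like(x)  # stand-in denoiser
    print(diffusion_training_step(dummy, torch.randn(8, 512), None))
```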
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Prompt learning", "Diffusion model", "Vision-language models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/056756f56239effc7b9d64d580cd8f6620882956.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Prompt Diffusion Robustifies Any-Modality Prompt Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2xRTdzmQ6C
Concepts' Information Bottleneck Models
main
Active
Concept bottleneck models;Information bottleneck
interpretability and explainable AI
1;3;5;6;6
5;4;3;4;4
1;2;2;3;3
1;2;3;3;3
2;2;3;3;3
4.2
4
2.2
2.4
2.6
-0.652328
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In Table 2, the improvements in prediction accuracy on most datasets are very limited compared to the baseline models. Could you provide more explanation on this? What are your thoughts on these limited improvements, and given this, how can we conclude the effectiveness of the proposed CIB method?\n\nAdditionally, since the CIBM_B model in Section 3.1 performs worse than almost all baselines, is it still necessary to devote so many pages to this method? More explanation on this could be helpful to understand the contribution of this section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea is clear: incorporating the IB into CBMs addresses the concept leakage issue.\n\nThe experiment is extensive, evaluating the proposed method across three dimensions: accuracy, interventions, and interpretability.\n\nAdditionally, a novel metric is proposed to assess the quality of concept sets based on intervention performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes enhancing Concept Bottleneck Models (CBMs) using the Information Bottleneck (IB) framework, addressing the issue of concept leakage, where concept activations contain irrelevant data, compromising model interpretability and performance. This enhancement, termed Concepts’ Information Bottleneck (CIB), constrains mutual information between inputs and concepts, optimizing concept relevance. Experiments on datasets such as CUB, AwA2, and aPY demonstrate improved prediction accuracy and interpretable concept representations. Additionally, the authors introduce a novel metric to assess concept set quality by evaluating intervention performance, offering a direct measure for interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The improvement of the proposed method compared to existing methods is marginal (Table 2), especially given that prediction accuracy is a primary evaluation metric, making the experimental results less compelling.\n\nThe variational inference derivation is relatively straightforward and could be moved to the appendix.\n\nThe process of incorporating the IB into CBMs is not clearly explained; adding a diagram to illustrate this process would improve clarity.\n\nThe core idea of applying the established IB framework to CBMs limits the novelty of this work." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How is the ground truth probability p(c|z) in the conditional entropy-based implementation computed, is it available from the data?\n- Regarding the estimator-based implementation mentioned in Sec 3.2, what is the exact routine for optimizing I(X; C)? Do you employ an approach similar to adversarial training, where you first estimate I(X; C) before each gradient step for optimizing C? \n- Is the results for CBM in Table 2 corresponding to the case where you use hard (i.e. binary) concept label? If so, it would be beneficial to explicitly mention this;\n- The proposed IB-based CBM framework for controlling information leakage appears quite general. While the method mainly used Kawaguchi’s method [1] for estimating I(X; C), could alternative methods, such as variational approximation to densities [2] and slice mutual information [3], also be applicable? These methods may be more effective in removing information from the learned concept representation. I feel the paper could benefit from a discussion on the generality of their framework.\n\n\n\n*References:*\n\n[1] How does information bottleneck help deep learning? ICML 2023\n\n[2] CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information, ICML 2020\n\n[3] Scalable Infomin Learning, NeurIPS 2022" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work, to the best of my knowledge, is the first one who explicitly marries IB with CBMs, and is the first one that analyzes the info-plane in CBM learning;\n- The proposed IB-based idea for mitigating information leakage is both natural and elegant. The IB-based framework proposed in this work seems also highly general and can potentially be implemented by a wide range of methods beyond the two suggested by the authors;\n- The paper is overall well written and is easy-to-follow;\n- The work has been compared against state-of-the-art methods in the field, including CEM and PCBM. Notably, it does not require additional modules (as in PCBM) or additional regularization techniques (as in CEM), being simple and easy-to-use;\n- The paper also proposed a novel, general-purpose metric for evaluating the quality of the learned concepts, marking the first instance of assessing the quality of the concept set rather than individual concepts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of information leakage in concept bottleneck models (CBMs), a significant challenge that impacts CBMs' interpretability and intervenability. The key idea is to apply Tishby’s Information Bottleneck (IB) principle in concept representation learning. 
Specifically, the authors propose to compress task-irrelevant information about the data X from the learned concept representation C, while making C maximally predictive of the label Y. This information compression is believed to be useful for controlling information leakage. The authors further develop two methods to implement their IB-based framework and evaluate their efficacy on three different datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- (Major) Despite the elegant framework proposed, some implementation details may lack clarity and require further justification; please see the “questions” section below;\n- (Major) The technical method for minimizing mutual information (MI) in the proposed IB-based CBM method is actually not so novel and largely relies on existing methods such as [1];\n- (Major) The comparison between the two IB implementations appears somewhat simplistic and may provide only limited insights. What makes the estimator-based implementation more useful than the other?\n- (Minor) While the presentation is generally good, some content could be more concise and structured. For instance, the derivation in Section 3.1 could be streamlined to present only the essential final estimator used in practice, relegating the full derivation to the appendix;\n- (Minor) The main experimental results are based on only three runs. While I appreciate the author’s transparency in reporting this, more runs could be considered for better robustness of the results;\n- (Minor) When assessing intervenability, a comparison between the proposed CIBM method and the original CBM is lacking. How exactly CIBM improves intervenability is not apparent.\n- (Minor) Reproducibility: despite the very interesting and elegant proposal, no code repo is shared. Together with the missing technical details mentioned above, this weakens the reproducibility of the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Methodology Questions**\n1. Line 278: Which CBM training scheme (joint, sequential, or independent) is used for comparison? Given that sequential training is known to reduce concept leakage (as per the pitfalls paper https://arxiv.org/pdf/2106.13314), why wasn't a comparison made against CBM using hard concept representations and independent training?\n2. Line 149: It's not clear where Z is coming from under your formulation, presumably some layer before the concept bottleneck?\n3. Line 300: \"We use ResNet18 embeddings provided by the dataset authors and train FCN on top of them.\" For this dataset and the others, are the backbone networks further tuned during training? \n\n**Results and Comparisons**\n\n4. Line 324-377 (Table 2): Why are baseline comparisons inconsistent across datasets?\n - PCBM comparisons only appear for some datasets. 
Furthermore, comparing against PCBM is neither necessary nor useful, as PCBMs are not trained to be susceptible to interventions. \n - CEM results only shown for CUB (where it outperforms the proposed methods)\n - ProbCBM results only shown for CUB\n\n**Experimental Design**\n\n5. Line 431: The claim about CIBM_E's training stability needs validation loss curves for support.\n\n6. Line 522: Why are concept interventions varied for CUB but not for AWA2?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "**Novel Research Direction** \nThe paper introduces an innovative approach by studying and directly addressing the memorization-compression pattern in concept bottleneck models.\n\n**Technical Writing Quality** \nThe paper demonstrates good clarity in its presentation:\n- Clear and logical flow of ideas throughout the manuscript\n- Concise and grammatically sound writing\n- Well-designed figures and tables that effectively complement the text\n- Abstract and title that accurately capture the paper's core contributions" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an enhancement to Concept Bottleneck Models (CBMs) through the integration of the Information Bottleneck framework, attempting to address the problem of concept leakage in CBMs. The authors propose a Concepts' Information Bottleneck Model (CIBM) that minimizes mutual information between inputs and concepts while maximizing expressivity between concepts and labels, introducing two variants: a bounded CIB (CIBM_B) and an estimator-based CIB (CIBM_E). Additionally, the authors propose a novel intervention scheme based on a measure of 'uncertainty', and propose two metrics to assess concept set quality based on intervention performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Experimental Limitations** \nThe experimental evaluation is insufficient, primarily relying on comparisons against a vanilla CBM with unspecified training parameters. The results are not compelling, as CEM appears to either outperform or match the proposed methods on the CUB dataset.\n\n**Unreproducible** \nThe experimental section is not comprehensive enough to be reproducible, and no code is provided. \n\n**Intervention Strategy** \nThe Uncertainty Based (UB) concept interventions fail to demonstrate meaningful improvements. The method's performance is comparable to or worse than random baselines. 
The paper lacks crucial comparisons with contemporary intervention strategies from recent literature.\n\n## Clarity and Novelty Issues\n\n**Metric Formulation** \nThe proposed metrics lack novelty and present existing concepts in a potentially misleading way:\n\n- The concept intervention trends (positive/negative) have been extensively documented in previous work, including the CEM paper\n- AUC_TTI reduces to a simple mean, obscuring nonlinear trends that are more effectively visualized in graphical form (as evident in Figure 3)\n- NAUC_TTI's formulation is problematic:\n - It simplifies to the difference between positive intervention and baseline performance\n - This comparison is standard practice in modern concept bottleneck model papers\n - The metric can paradoxically penalize superior models (e.g., CEMs would score worse despite improving baseline accuracy while maintaining intervention performance)\n\n**Visualization Recommendation** \nRather than introducing potentially confusing metrics, intervention results would be better presented through graphs showing performance across multiple concept groups, providing clearer and more interpretable results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Q1**: Is $z$ simply a hidden representation extracted from a neural network (e.g., the output of a ResNet)? Does your model follow the structure: $x \\rightarrow z \\rightarrow c \\rightarrow y$? Clarifying this would help improve understanding of the overall architecture.\n\n**Q2**: Why did you drop certain baselines in some experiments, retaining only a few (e.g., dropping CEM in all experiments except CUB)? I would prefer a comparison with the strongest model, such as CEM, instead of weaker models like PCBM, to ensure a fair performance evaluation.\n\n**Q3**: Could you clarify whether the trend or the higher value of $I(C;Y)$ is more significant, and explain why this matters? Additionally, what does a lower $I(X;C)$ represent in practical terms? Moreover, please standardize the x-axis range across all plots to avoid misleading comparisons between methods.\n\n**Q4**: The plots in Figure 3 all appear quite similar, and it’s unclear what specific differences I should focus on. Could you explain your claims more clearly and point out the key takeaways?\n\n**Q5**: Why was CBM not included as a baseline in Figure 4? Given that CBM likely exhibits a similar trend to CIBM, the statement that “CIBM does not suffer from concept leakage” feels unsupported. Could you strengthen this claim with further evidence or comparative results?\n\n**Q6**: Why did you choose not to compare your model with other approaches specifically designed to reduce leakage, such as “Addressing Leakage in Concept Bottleneck Models”? \n\n**Q7**: Regarding Table 3, why is the performance on CUB so low when there are no corrupted concepts? I would expect it to be at least higher than the accuracy. 
Furthermore, do you have any insights into why your model’s AUC drops more than CBM’s as the number of corrupted concepts increases (at some point, CBM even surpasses CIBM)? Additionally, why did you choose to corrupt only one concept in AwA2, using a different evaluation setup compared to CUB? Please also specify the strategy used for intervention (uncertainty or random).\n\n**Q8**: At L524, what do you mean by “trained with different concept annotation”? Were CBM and CIBM trained using the same concept annotations, or were there differences in the annotations used?\n\n**Curiosity**: Did you happen to evaluate CEM in the experimental setup used for Figure 2? It would be interesting to observe the trend of a concept-based model with higher expressive power, such as CEM, in comparison to the models you presented." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and easy to follow. It provides a solid motivation for the problem, offering sufficient context on concept leakage and how it has been addressed by existing methods.\n- Employing Mutual Information is a novel and intriguing approach to mitigate concept leakage, a critical issue in Concept Bottleneck Models (CBMs).\n- The authors effectively guide the reader through the solution’s formulation, offering enough theoretical insights to understand why they arrived at the two proposed solutions: $CIBM_E$ and $\\text{CIBM}_{\\text{B}}$.\n- The newly introduced metric is a clever addition, as it provides an automatic evaluation of what prior works have mostly assessed graphically. While the concept itself is not entirely new (as CBMs are often evaluated through plots showing model performance with increasing interventions), the metric encapsulates this idea into a single value that assesses the overall trend." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses a significant issue in Concept Bottleneck Models (CBMs): concept leakage. This occurs when the model encodes additional information in the concept values beyond what is necessary to solve a task. To mitigate this, the authors propose Concept Information Bottleneck Models (CIBMs), a novel training approach for CBMs that utilizes Information Bottleneck and Mutual Information techniques. By minimizing the information bottleneck between concepts, inputs, and outputs, they effectively limit the information flow, thereby reducing leakage. The framing of this approach is intriguing, and the experimental results provide promising insights into its effectiveness. Additionally, the paper introduces a new metric and its variation for evaluating how well a CBM handles interventions, which is closely related to measuring concept leakage." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In the practical scenario, the architecture they employ is not entirely clear to me. I understand that the $q(\\cdot)$ functions are distributions parameterized by neural networks, but the details regarding the rest of the model, particularly $z$, are unclear (see Q1). Are the concepts being supervised, and is the same set of concepts used as in traditional CBMs? 
A simple visual representation of the model, highlighting the differences introduced compared to CBMs, would be very helpful.\n- The experimental section also raises some concerns:\n\t1.\tThe rationale behind dropping certain baselines (as seen in Table 2) is not well explained. For instance, I would have expected to see all baselines, particularly CEM, as it is one of the most powerful CBM-like models in terms of accuracy (see Q2).\n\t2.\tSeveral claims are either missing supporting information (Figure 1), lack proper motivation (L426-431), or are somewhat misleading (L467-469). Regarding Figure 1, there is no discussion about $I(X;C)$, which, as far as I understood, should exhibit a lower value for CIBM later in the training compared to CBM, but this doesn’t seem to happen and isn’t discussed. Both CBM and CIBM display a similar trend in $I(C;Y)$, though the effect is less pronounced for CBM (as expected) (see Q3). Additionally, the explanation in L426-431 is unclear, especially since Figure 3 shows CBM and CIBM behaving similarly, leaving it unclear what insight the reader is supposed to take away (see Q4). Lastly, L467-469 are somewhat misleading, as there is no baseline comparison. Even a comparison with CBM would be fine here. Since CBM might also exhibit a similar trend in responsiveness to interventions while suffering from leakage, the statement “does not suffer from concept leakage” seems too strong or not well motivated (see Q5).\n\t3.\tIf the goal of the model is to reduce leakage, why isn’t it compared against other models that tackle the same issue, such as those cited in the paper (e.g., “Addressing Leakage in Concept Bottleneck Models”)? Including a comparison with at least one of these models would strengthen the experimental validation (see Q6).\n\nAddressing these issues would significantly improve the clarity and strength of the paper, and I would be inclined to raise my score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "You claim to achieve significant improvement in performance compared to vanilla CBMs and related advanced architectures. How do you support this claim of significance? Is this meant to be statistically significant?" 
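On the x → z → c → y structure raised in Q1 and in the request above for a simple representation of the model: the sketch below shows a generic concept-bottleneck layout with CUB-like sizes (112 concepts, 200 classes). It is a point of reference only, not the paper's CIBM architecture; the backbone and heads are placeholders.

```python
# Hedged sketch of the x -> z -> c -> y pipeline discussed in Q1:
# a generic CBM layout, not the paper's exact CIBM architecture.
import torch
import torch.nn as nn

class TinyCBM(nn.Module):
    def __init__(self, in_dim=512, z_dim=128, n_concepts=112, n_classes=200):
        super().__init__()
        self.backbone = nn.Linear(in_dim, z_dim)             # x -> z (placeholder)
        self.concept_head = nn.Linear(z_dim, n_concepts)     # z -> c (supervised concepts)
        self.label_head = nn.Linear(n_concepts, n_classes)   # c -> y

    def forward(self, x):
        z = torch.relu(self.backbone(x))
        c = torch.sigmoid(self.concept_head(z))  # concept activations in [0, 1]
        y = self.label_head(c)                   # label predicted *only* from c
        return c, y

if __name__ == "__main__":
    c, y = TinyCBM()(torch.randn(4, 512))
    print(c.shape, y.shape)  # torch.Size([4, 112]) torch.Size([4, 200])
```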
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-\tThe paper introduces a novel integration of the Information Bottleneck framework into CBMs, which is an interesting theoretical contribution to the area of explainable AI.\n\n-\tThe paper provides sufficient experimental results on multiple datasets, demonstrating the performance of the proposed method in both concept and target prediction accuracy being on par or slightly better than current approaches\n\n-\tThe introduction of a novel metric to assess the quality of concept sets based on intervention performance is a valuable addition. This metric offers a direct and interpretable evaluation of concept set goodness, addressing a gap in the current literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes an enhancement to Concept Bottleneck Models (CBMs) by integrating the Information Bottleneck (IB) framework, aimed at addressing issues of concept leakage and reduced performance. Further, a model-based metric is introduced to measure concept set goodness. Experiments conducted on CUB, AwA2, and aPY datasets demonstrate that IB-augmented CBMs improve both concept and target prediction accuracy while increasing intervenability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The integration of the Information Bottleneck framework adds complexity to the CBMs. A more detailed discussion of the computational overhead and implementation challenges associated with the proposed method would improve the paper.\n-\tThe performance of the proposed method may be sensitive to the choice of hyperparameters, such as the Lagrangian multiplier β. A more systematic approach to hyperparameter tuning could be explored to optimize performance." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Enhances Concept Bottleneck Models by integrating the Information Bottleneck principle to reduce concept leakage and improve performance" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024concepts,\ntitle={Concepts' Information Bottleneck Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2xRTdzmQ6C},\nnote={under review}\n}" }, "abstract": { "value": "Concept Bottleneck Models (CBMs) offer a self-explainable AI framework by predicting targets based on human-understandable concepts, but they often fail to achieve optimal performance and interpretability due to leakage of irrelevant information into the concept activations. This paper presents an information-theoretic enhancement of CBMs through the integration of the Information Bottleneck (IB) framework, aimed at addressing their issues of concept leakage and reduced performance. Our approach reshapes the way CBMs process and utilize concepts by constraining mutual information between input data and concepts, ensuring that only the most relevant information is preserved for decision-making. This introduces a new paradigm for CBMs that not only enhances performance but also enforces a tighter connection between latent representations and human-understandable concepts, ensuring a more robust and interpretable model. 
Our experiments on datasets such as CUB, AwA2, and aPY demonstrate that IB-augmented CBMs improve both concept and target prediction accuracy, while also increasing intervenability. Additionally, we propose a novel metric to assess the quality of concept sets based on intervention performance. Unlike traditional task performance metrics, which may obscure the effects of concept leakage, the new metric offers a direct, interpretable evaluation of concept set goodness." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Concept bottleneck models", "Information bottleneck" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2577e442675156a0814ba1014f5b6be5268a4721.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Concepts' Information Bottleneck Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2xljvcYOLm
First-Step Inference in Diffusion Models Learns Image De-whitening
main
Active
Diffusion models;ZCA Whitening
generative models
3;3;5;5;5;6
5;4;4;3;4;4
2;2;3;3;4;3
2;1;2;2;2;2
3;2;2;3;3;3
4.5
4
2.833333
1.833333
2.666667
-0.516398
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weakness.\n\nIn addition, author insists \"efficient optimization algorithm\". What is the actual computation cost, or searching time compared to the other baseline?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well-written, and easy to follow.\n- The authors extend their analysis to real-world applications, such as image editing, and shows promising results.\n- The analyzation of noise and image with image whitening operator is quite novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores how the initial noise-to-image mapping of diffusion models, particularly with deterministic DDIM sampling and ZCA image whitening. \nThrough optimizing the noise with a fixed-point iteration and simulated annealing approach, the method preserves the structure of the original image at noise levels. The author further apply the proposed method to improving image editing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Authors show the correlation of image and noise, however it is not quite novel. Since DDIM is deterministic, same noise initialization to any score based generative model with same training objective will yields same image. (For instance, fig1 with DDIM inversion will yield similar results.)\n- With respect to the analyzation, the author empirically found that image whitening operation to noise space. It would be better if there was a more mathematically proven explanation, since the main concern of the paper is related to the analysis of the strictly mathematical model. For example, what is the mathematical reason why hypothesis 1 in the diffusion model actually holds? This should be thoroughly explained in section 3 or in the appendix.\n- With respect to the application, where is the quantitive results? I understand that it is not easy to quantitively evaluate in the image editing, however author can evaluate quantitively through experiments in SDEdit. \n\nIn summary, the analysis by image whitening is novel, but the paper contains only empirical motivation and quantitative results. It would be a better paper if the above weakness were addressed.\n\n1. Su, Xuan, et al. \"Dual Diffusion Implicit Bridges for Image-to-Image Translation.\" The Eleventh International Conference on Learning Representations.\n2. Hur, Jiwan, et al. \"Expanding Expressiveness of Diffusion Models with Limited Data via Self-Distillation based Fine-Tuning.\" Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written, clear, and easy to follow. The proposed idea is well-motivated, simple, and effective. It begins by introducing the observed phenomenon that noise and images generated by DDIM are correlated, followed by a well-supported hypothesis, demonstrated through detailed analysis.\n2. The simulated annealing algorithm for correlated noise proves useful for image variation generation and editing tasks, yielding decent generation quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper analyzes the correlation between noise and the images generated through DDIM sampling, showing that the one-step approximation of the DDIM inversion noise for any given image closely relates to the Zero-phase Component Analysis (ZCA) inverse whitening transform applied to that image. Based on this observation, the paper proposes a simple yet effective simulated annealing method to identify correlated noises, demonstrating its utility in tasks such as image variation generation and editing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of the paper lies in the lack of quantitative comparisons and discussions regarding existing baseline methods, making it challenging to objectively assess the performance advantages of the proposed approach. Specifically:\n\n1. There is no performance and efficiency comparison between the proposed model-agnostic method and other commonly used DDIM inversion techniques, leaving a gap in understanding the practical advantages in real-world applications.\n\n2. While SDEdit with correlated noise visually preserves more structural similarity compared to random noise, the paper only provides qualitative results. Although the method appears effective, the absence of comprehensive quantitative comparisons hinders a full evaluation of its performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Not sure I understand Figure 1. 
I don't see any correlation between the different rows. In SD1.5 all the cats look to the right. In SD Turbo you see head rotation, which is not present in the previous two rows. \n2. You write \"We hypothetize that the gap between the fitted one (Diff) and (ZCA) might be partly due to the fact that the ZCA whitening matrix was only estimated on a subset of ImageNet, while the fitted one would reflect the entire training distribution of the diffusion model\". You can easily check this hypothesis by simply both increasing and decreasing the size of the data used to calculate ZCA and seeing whether the gap shrinks and grows, respectively. \n3. Why do you show the inversion experiment on just a few images? It feels like strong cherry-picking.\n4. The fact that the learned noise is correlated is not surprising. There are many works showing that the diffusion process learns different levels of detail throughout the diffusion and that it is not just learning Gaussian noise. The claim in the paper that we would expect to learn Gaussian noise from the optimization in (6) is not well justified.\n5. Any real theory for why ZCA?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The correlation with ZCA is demonstrated while comparing against other alternatives.\nThe editing experiment is nice.\nThe various demonstrations shown throughout the paper are quite nice.\nThe paper is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper suggests that the first step of a diffusion model is similar to de-whitening using ZCA. It shows that it is much more correlated with ZCA than other de-whitening approaches such as PCA. Then it searches for the best noise to use to generate images similar to a given one, i.e., tries to perform noise inversion, by simply correlating the de-whitened noise with the target image. They show it can be useful for performing editing on one example." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The first part of the work is nice, and the experiments are quite rigorous in showing why ZCA and not other options. Yet, the second part on editing and simulated annealing is quite trivial and not really convincing. Basically, checking each time what happens after one step of denoising and whether it is similar to the original image is expected to lead to the results shown. Moreover, the fact that the results are demonstrated on only a few images is very limiting. It feels like strong cherry-picking. Also, there are many other inversion methods. In addition, one may apply the same correlation with just simple denoising." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.
Can you provide additional applications for your discovery?\n2. Can you offer stronger evidence to demonstrate that ZCA is the best approximation? Perhaps comparing it with more commonly used linear transformations would be better." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This phenomenon (the initial inference closely resembles ZCA) is interesting.\n2. This phenomenon is observed in many diffusion models\n3. The authors conduct multiple experiments to investigate this phenomenon.\n4. This paper demonstrates that the first-step inference approximates a linear transformation and does not depend on the model. Consequently, it proposes a model-agnostic method.\n5. The paper identifies two applications for this finding, where the prompt-based image editing is useful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the intricate relationship between input noise and generated images. Specifically, it finds that the initial denoising step performed by the network can be approximated as image de-whitening (ZCA). Consequently, the paper proposes a model-agnostic method for sampling correlated noises. Finally, it discusses two applications of this phenomenon." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While this phenomenon is interesting, its potential applications may be quite limited, as it only holds true for the first step. Although you have identified two applications, one of them—image variation generation—is not widely discussed.\n2. Although this phenomenon is interesting, it may not be particularly amazing, as any non-linear function can be approximated by a linear function within a small interval.\n3. Focusing solely on linear operations related to whitening is too narrow in scope. Although you provide a motivation in Figure 4 indicating that the results of Equation 6 bear a striking resemblance to the effects of ZCA whitening, this does not imply that only whitening should be considered. I believe there are many other linear transformations worth discussing. For instance, the identity transformation may also yield good performance, as suggested by the experiments in Section 4." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Are there advantages to using simulated annealing (SA) over gradient-based (GD) optimization? I want to know the qualitative difference between using SA and GD." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The qualitative results applied to SDEdit are somewhat promising." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper makes two main contributions:\n\n1. The first step of the sampling process of diffusion models (or single-step approximation of the full sampling trajectory) can be modeled using image de-whitening techniques, particularly ZCA.\n2. Through fixed-point iteration, it is possible to find noise corresponding to an image, which shows the ability to generate similar images given different diffusion models, and improvements in image editing methods, specifically SDEdit." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Replication of Previous Work**\n\nThe first downside of this paper is that its contributions, especially those validated through experiments, have already been claimed in existing literatures. The paper experimentally demonstrates that via fixed-point iteration, we can identify noise corresponding to an image, which can be used to (1) generate similar images across different models and (2) assist with image editing using SDEdit.\n\nHowever, regarding point (1), there are already results showing that if the noise is the same, similar images are generated even with different models.\n- *The Emergence of Reproducibility and Generalizability in Diffusion Models* ([ICML24](https://arxiv.org/abs/2310.05264))\n\nMoreover, research has already proposed finding noise through fixed-point iteration and using it for editing in various ways. In particular, this approach has also been applied to image editing. Besides the papers I listed, I recall other examples using fixed-point techniques.\n- *On Exact Inversion of DPM-Solvers* ([CVPR24](https://arxiv.org/abs/2311.18387))\n- *ReNoise: Real Image Inversion Through Iterative Noising* ([ECCV24](https://arxiv.org/abs/2403.14602))\n- *Lightning-Fast Image Inversion and Editing for Text-to-Image Diffusion Models* ([Arxiv23](https://arxiv.org/abs/2312.12540))\n\nAdditionally, the lack of any quantitative metrics for the experimental results is also an issue.\n\n**2. Overclaim**\n\nThe second issue with this paper is overclaiming their argument with insufficient experimental results, especially when they claim that ZCA de-whitening and the first step of diffusion models are similar. The key to verifying this claim lies in choosing a de-whitening method that resembles the diffusion model. However, in my opinion, the notion that ZCA is the most similar among de-whitening methods is quite different from the claim that the first step of the diffusion model can be understood as ZCA de-whitening. For example, we already understand the latent code of diffusion model as an optimal transport solution [1]. Why do you think the framework of ZCA de-whitening gives us better understanding of the diffusion models? Can you validate that ZCA de-whitening is better theory to understand the diffusion models?\n\n[1] : Understanding DDPM Latent Codes Through Optimal Transport ([ICLR23](https://openreview.net/forum?id=6PIrhAx1j4i))\n\nIf the theoretical contribution were significant, the paper could still be evaluated positively even if the empirical contribution is small (Weakness #1), but this does not seem to be the case here." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. About Hypothesis 1:\n- Equation (8) is the most critical finding (or core contribution) of this paper. Is this discovery purely based on observation? What is the motivation? \n- Since it only holds approximately when $ T $ is large, it seems that there exists an upper bound for the gap that is independent of $ \\epsilon_{\\theta} $? \n- Why was $ t = 0.98T $ chosen for the experiment? Could $ t = T $ be useful instead? \n- For the different models $ \\epsilon_{\\theta_1}, \\epsilon_{\\theta_2}, \\dots, \\epsilon_{\\theta_n}$, if the assumption holds, the optimal solution should be $ \\epsilon^*_{\\theta_1} \\approx \\epsilon^*_{\\theta_2} \\approx \\dots \\approx \\epsilon^*_{\\theta_n} $ for the same $z_0$, which seems counterintuitive. This suggests that different models yield the same solution for the same $ x_t $ regarding Equation (6), even though $ \\epsilon_{\\theta_1}(x_t, t), \\epsilon_{\\theta_2}(x_t, t), \\dots, \\epsilon_{\\theta_n}(x_t, t) $ are expected to differ.\n\n2. SDEdit performs best at 40%-60% timesteps, which seems to contradict the hypothesis in the paper that $ t $ needs to be very large. Does this pose a conflict?\n\n3. For inversion-based methods, this paper significantly improves upon the original DDIM inversion method. However, some existing approaches [1,2,3] have already improved the inversion reconstruction loss, achieving more precise consistency. What are the advantages of this method compared to those? However, it seems that this method cannot achieve complete reconstruction, although it can maintain consistency within a certain range. Therefore, I would like to know how effective this method is for image editing in complex scenarios, such as those with rich backgrounds or multiple objects—specifically, whether it can maintain background consistency.\n\n4. Could you compare the results of directly adding random noise to $z_0$ to obtain $z_t $, then denoising back to $z_0$? Perhaps randomly adding noise and then denoising might also achieve good results, as $z_t$ would still contain information from $z_0$ in this case.\n\n\n\n### Reference:\n\n[1] Cho H, Lee J, Kim S B, et al. Noise map guidance: Inversion with spatial context for real image editing. ICLR 2024.\n\n[2] Xu S, Huang Y, Pan J, et al. Inversion-free image editing with natural language. CVPR 2024.\n\n[3] Ju X, Zeng A, Bian Y, et al. Pnp inversion: Boosting diffusion-based editing with 3 lines of code. ICLR 2024." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- It was discovered that the initial denoising operation of diffusion models can be approximated by the ZCA de-whitening transform, revealing the global structure that associates noise with images in the model.\n- Noise optimization was achieved using the simulated annealing algorithm, enabling the ability to generate similar images across multiple models.\n- The optimized noise was shown to improve the performance of image editing methods such as SDEdit, better preserving image structure at high noise levels.\n- The optimized noise can be applied across different diffusion models, enhancing the generalizability of the approach.\n- This method outperforms traditional approaches in preserving image structure at high noise levels, increasing the flexibility of image editing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the correlation between input noise and generated images in diffusion models, aiming to reveal how diffusion models maintain the relationship between noise and images during the denoising process. Specifically, the study proposes an approximation of a single-step mapping through fixed-point inference (first-step inference) and finds that this mapping closely aligns with the ZCA de-whitening transform. The experimental results demonstrate that the single-step inference achieved through noise optimization closely aligns with the ZCA de-whitening transform. The effectiveness of this linear mapping was validated on the ImageNet dataset, showing that the optimized noise can generate consistent image variations across different models. Additionally, the optimized noise improved structural preservation in image editing tasks, maintaining the overall content of the image even at high noise levels, outperforming traditional methods such as SDEdit." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Using simulated annealing for noise optimization requires multiple iterations, affecting efficiency.\n- The effectiveness of ZCA de-whitening depends on the data distribution, which may limit the model's performance on unseen datasets.\n- The approach is based on observed assumptions without providing a rigorous analysis." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We discover that the diffusion model learns to do ZCA image de-whitening in the initial step." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024firststep,\ntitle={First-Step Inference in Diffusion Models Learns Image De-whitening},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2xljvcYOLm},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have emerged as powerful generative models for image synthesis, yet the intricate relationship between input noise and generated images remains not fully understood. In this paper, we investigate the correlation between noise and images generated through deterministic DDIM sampling, uncovering fundamental elements that are present across different diffusion models. 
More specifically, we demonstrate that a one-step approximation of the mapping learned by these models closely relates to Zero-phase Component Analysis (ZCA) inverse whitening transform, which maximizes the correlation between source and target distributions. We leverage this insight to develop a simple and yet effective model-agnostic method for sampling correlated noises and showcase applications for image variation generation and editing." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models", "ZCA Whitening" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/11ae9c61ba598d3ef3620c3e75a108bb9ac8186d.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "First-Step Inference in Diffusion Models Learns Image De-whitening" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2xvisNIfdw
Unlocking Global Optimality in Bilevel Optimization: A Pilot Study
main
Active
Bilevel optimization;nonconvex optimization;global convergence;linear neural network
optimization
3;5;5;6;8;8
2;3;4;3;3;3
4;3;3;3;4;3
2;2;2;3;3;3
3;3;3;2;3;3
5.833333
3
3.333333
2.5
2.833333
0.325875
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Line 1226, why is the blockwise PL condition of $L_\\gamma$ over $u$ sufficient to ensure the PL condition for $L^*_\\gamma$?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe study of convergence of bilevel algorithms to global solutions is an interesting topic, and this paper offers an approach.\n2.\tThe paper includes concrete application examples that validate the assumptions necessary for establishing global convergence results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the convergence properties of a penalized bilevel gradient descent (PBGD) algorithm, aiming to obtain global optimal solutions of bilevel optimization problems under the joint and blockwise Polyak-Łojasiewicz (PL) conditions. The joint and blockwise PL conditions are validated in the context of two specific applications: representation learning and data hyper-cleaning. Numerical experiments are provided to substantiate the theoretical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tWhile the topic of global optimal convergence in bilevel optimization is engaging, the approach presented in this work does not appear as innovative as suggested by the title. The main idea relies on the joint/blockwise PL condition of the penalized objective $L_\\gamma$. However, it is well known that when the PL condition holds, any stationary point is globally optimal, and the proximal-gradient method can achieve linear convergence to this global optimum (see, e.g., Hamed Karimi, Julie Nutini, and Mark Schmidt, Linear Convergence of Gradient and Proximal-Gradient Methods under the Polyak-Łojasiewicz Condition, ECML PKDD 2016). Furthermore, the convergence of PBGD to a stationary point of $L_\\gamma$ under the PL condition has been well studied in existing literature (e.g., Bo Liu, Mao Ye, Stephen Wright, Peter Stone, BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach, NeurIPS 2022, and Shen, Han, and Tianyi Chen, On Penalty-Based Bilevel Gradient Descent Method, ICML 2023). Thus, the approach in this work may lack novelty, and the contribution seems somewhat incremental.\n2.\tAlthough the authors have put considerable effort into verifying that the joint/blockwise PL condition can be satisfied in specific applications, such as representation learning and data hyper-cleaning, only very restricted cases are analyzed, with strong assumptions imposed. For instance, Assumption 2 in the representation learning setting and the assumption $X_{trn}X_{trn}^{\\dagger}$ is a diagonal matrix in data hyper-cleaning narrow the applicability of the results and limit their general applicability. 
The theoretical analysis appears heavily dependent on these assumptions, raising doubts about whether the joint/blockwise PL condition would hold in broader or more practical cases, or even in other bilevel optimization applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How should one choose between joint and blockwise PL conditions for a given application?\n1. Could you please clarify which aspects of the convergence results would generalize to more complex settings like non-linear models?\n1. What practical takeaways does this work provide for achieving global convergence in more complex bilevel applications?\n1. How robust are the convergence results if the PL conditions are only approximately met?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The main strength is that it is a pioneering work that studies the challenging and important problem of global convergence in bilevel optimization, a topic with substantial real-world relevance. The proposed analysis extends PL to both joint and blockwise PL conditions and verifies them on two application cases. Overall, the paper is well-organized and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a theoretical framework for achieving global convergence in bilevel optimization. The authors propose that a constrained reformulation generally yields a benign landscape, and they analyze the global convergence of the penalized bilevel gradient descent (PBGD) algorithm for bilevel objectives under the proposed joint and blockwise PL conditions. The paper illustrates that the specific applications of representation learning and data hyper-cleaning can satisfy these PL conditions. Theoretical results are then supported by experiments conducted on these applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have several concerns and comments on the submission (please correct me if I am wrong):\n\n1. The applicability of the developed theorem seems unclear. The proof depends closely on, and follows, existing convergence theorems for PBGD, and it's unclear whether the analysis could extend to other bilevel algorithms. The non-additivity of PL conditions poses a great challenge for applying the developed theorem, and no practical solutions are provided. The two applications studied rely on linear models and strong convexity of the loss, which is overly idealized and simplified.\n\n1.
Moreover, in line 228 (Section 3), the authors mention that convergence analysis may need “fine-tuning per application,” but it remains unclear which parts of the analysis hold in general, such as whether the iteration complexity $O(\log(\epsilon^{-1}))$ generalizes to other settings that satisfy PL conditions. It also mentions that \"This may also shed light on a broader range of bilevel problems involving sophisticated neural network architectures in machine learning\", but the paper does not clearly summarize the practical takeaways of the developed theorem for achieving global convergence in such complex applications with modern non-linear deep models.\n\n1. The numerical analysis lacks depth and discussion on robustness. I suggest thoroughly evaluating how the values of the parameters $\alpha$, $\beta$, and $\gamma$ are set, both theoretically and practically, and whether the observed results match theoretical expectations on the convergence rate. Also, exploring how slight violations of PL conditions affect convergence would help clarify the robustness.\n\n1. Section 2 provides an example to illustrate the complexity of the nested objective $F(u)$ caused by the lower-level mapping $S(u)$, but it lacks rigorous analysis of how, and to what extent, the constrained formulation reliably produces a more benign landscape. A precise definition of a benign landscape in the context of bilevel optimization would also be helpful. The conclusion that the constrained reformulation yields a benign landscape relies heavily on prior literature (lines 211-215) rather than on in-depth analysis in this paper.\n\n1. In line 373 (page 7), the matrix $W_3$ is introduced without a clear explanation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors expand Section 1.1 with detailed theorems? The sentence following C3, “The joint and blockwise PL condition… are not assumptions, but the properties of the penalty reformulation,” is confusing. The authors should clarify the assumptions needed to establish global convergence rigorously.\n\n2. In what specific way is “global optimality” used in the paper?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper offers conditions that ensure global convergence in bilevel optimization by generalizing the Polyak-Lojasiewicz (PL) condition." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores global convergence in bilevel optimization, a crucial yet challenging objective due to the non-convexity and potential for multiple local solutions in bilevel problems. To address this, the authors propose sufficient conditions for global convergence and illustrate these in bilevel learning applications such as representation learning and data hyper-cleaning."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While global optimality is underscored as essential, the precise definition or context of “global optimality” within this framework is unclear. A clear explanation of how this term is specifically applied in their method would strengthen the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can it be applied to a more general bi-level optimization with constraints in (1)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Achieving Global optimality is an important property." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed two PL conditions, by satisfying which the global optimality of bi-level optimization can be achieved using simple algorithms like Gauss-Seidel." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper's assumptions are very restrictive. For most bilevel optimization problems, the Joint and blockwise PL conditions cannot be guaranteed, and even checking these conditions can be challenging. The representative problems illustrated in the paper are very specific simple cases. For example, only linear models can satisfy the assumption for representation learning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What does the a pilot mean in the title?\n\n2. Line 057, a benign landscape, is there a direct meaning for that?\n\n3. Line 53, the goal of this paper, this sentence is not important. Do not need to emp{}. \n\n4. The numerical results seem too little? Does the proposed method outperform SOTA bi-level methods?\n\n5. What are the best convergence results for bi-level optimization method before this paper?\n\n6. line 414, what does \\gamma to be O(\\epsilon^{-0.5}) mean? If gamma is very very large (with a very large constant), can the algorithm still converge? What is the meaning of O(xx) here?\n\n7. Will PL condition a bit too strong?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "good. This paper is overall well written and provide plenty of theoretical results.\n\nThe proposed method also solves the neural network cases. That's especially good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper propose a new bilevel optimization algorithm. This paper is generally very well written and provide plenty of theoretical results. Overall this paper is clear a good paper. If all these results are correct, this paper should be clearly accepted (However, I am inadequate to go through all proofs)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experiments are not adequate. \n\n2. Some fonts seem strange." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tMajor Concerns:\n\n(a) In line 261, Danskin theorem is mentioned, then the gradient is calculated. Also, the variable $\\omega$ is introduced later. I think it would be better to explain the connection and point out that the using Danskin theorem, the auxiliary variable $\\omega$ will help us to find a good estimation of the gradient with respect to $u$.\n\n(b) It may be better to put Algorithm 1 and 2 on Page 6 after the authors have summary these algorithms. It will give the readers a smooth reading experience.\n\n(c) In section 6, you may want to specific the choice of $\\alpha$ and $\\beta$ and make sure that they satisfied the conditions stated in Theorem 2 and 3.\n\n(d) If possible, adding more baseline methods would help readers better understand the convergence rate of the PBGD method. This is not necessary given the limited time.\n\n2.\tMinor Concerns:\n\n(a) The sentence in line 199 is not very clear, please double check.\n\n(b) There’s a “?” in line 309, please make sure it is correct.\n\n(c) Misspelling in Line 973 and Line 2189. “invertiable” to “invertible”." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Clear Problem Statement: The authors articulate the limitations of existing methods, particularly those that only guarantee convergence to local minima or stationary points, which motivate them for pursuing global convergence.\n\nTimeliness and Relevance: The paper proof the global convergent rate for a certain type of bilevel optimization problems. Given the increasing application of bilevel optimization in machine learning and high-stakes fields, this work has substantial relevance.\n\nTheoretical Contribution: The authors provide sufficient conditions for achieving global optimality. 
By leveraging the penalty reformulation approach, the paper establishes an almost linear global convergence rate for some linear bilevel optimization problems.\n\nExperimental Validation: The empirical results are tested on bilevel learning problems such as representation learning and data hyper-cleaning. The preliminary computational results support the almost linear convergence theorem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the global convergence rate of bilevel optimization. The main result is that if the penalized objective satisfies the PL condition, then the bilevel problem has an almost linear global convergence rate when the PBGD method is used to solve it. Then the authors give two applications: representation learning and data hyper-cleaning. These problems can be formulated as bilevel optimization problems, and their penalized objectives satisfy the PL condition. Thus, when the PBGD algorithm is applied, they should converge almost linearly. The preliminary computational results also support the theorem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Assumptions and Limitations: While the paper claims global convergence for bilevel problems, it focuses primarily on linear models. Expanding the theoretical foundation to nonlinear models or other loss functions would improve the paper’s generalizability.\n\nComparative Analysis: While the paper mentions other approaches, a direct empirical comparison with state-of-the-art methods for bilevel optimization would strengthen its validation.\n\nConnection between Theory and Experiment: the authors should clearly specify the connections between theory and experiment so that the experimental results can support the theory. For example, in Section 6, the authors should specify the choice of step lengths and make sure that they satisfy the conditions stated in Theorems 2 and 3." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unlocking,\ntitle={Unlocking Global Optimality in Bilevel Optimization: A Pilot Study},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2xvisNIfdw},\nnote={under review}\n}" }, "abstract": { "value": "Bilevel optimization has witnessed a resurgence of interest, driven by its critical role in advanced machine learning applications such as hyperparameter optimization, meta-learning, and reinforcement learning. Recent research has focused on proposing efficient methods with provable convergence guarantees. However, while many prior works have established convergence to stationary points or local minima, obtaining the global optimum of bilevel optimization remains an important yet open problem. Arguably, attaining the global optimum is indispensable for ensuring reliability, safety, and cost-effectiveness, particularly in high-stakes engineering applications that rely on bilevel optimization. In this paper, we first explore the challenges of establishing a global convergence theory for generic bilevel optimization, and present two sufficient conditions for global convergence, inspired by contemporary machine learning applications.
\nWe provide algorithm-specific proofs to rigorously substantiate these sufficient conditions along the optimization trajectory, focusing on two specific bilevel learning scenarios: representation learning and data hypercleaning (a.k.a. reweighting). Numerical results corroborate the theoretical findings, demonstrating convergence to global minimum in both cases." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Bilevel optimization", "nonconvex optimization", "global convergence", "linear neural network" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/354bd270a524a7b4c5d316407b712859bcb8d614.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/01d48ef763b4782034c87979b33dd6c82a897402.pdf" }, "title": { "value": "Unlocking Global Optimality in Bilevel Optimization: A Pilot Study" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ySt3cdGfJ
Distribution Backtracking Builds A Faster Convergence Trajectory for Diffusion Distillation
main
Active
Diffusion Model;Diffusion Distillation;One-step Generation
generative models
3;5;5;6
5;3;4;4
3;2;2;2
3;2;2;3
3;3;2;3
4.75
4
2.25
2.5
2.75
-0.648886
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The original motivation suggests that the teacher training trajectory could be used, but \"the convergence trajectory of most teacher models is inaccessible\". It's not clear to me that the training trajectory of the teacher would be useful for distillation, as it may not align well with the student distribution either. Did you explore DisBack using the training trajectory for trained diffusion models?\n\nDid you explore distillation into smaller student models? Mismatched architectures could be a useful application of DisBack too.\n\nDo you have any samples from the models along the intermediate teacher trajectory? Do these produce sensible samples at all?\n\nOverall, I like the proposed DisBack algorithm and feel that it is sufficiently novel and performant to justify publication. I would give the paper a higher score if the authors provided some more experimental investigation into why their method is successful.\n\n\nMinor:\n\nFig 2. includes the same sample twice (top middle, and middle)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper proposes a novel technique that is intuitive and is shown to work well. The experiments show a clear improvement in the convergence speed of the DisBack method and the qualitative results show high-quality single-step samples. The authors apply their method to a variety of teacher models over multiple datasets and demonstrate success and failure cases (in the appendix).\n\nThe paper is easy to follow. Technical content is presented clearly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach to improve diffusion distillation. The key idea is to include a trajectory of teacher distributions for the student to match. This improves convergence and the final quality of the student model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I felt that the paper writing in places suggested more than was provided. For example, the authors claim that they \"identified this issue arises because existing score distillation methods focus on using the endpoint\". However, the authors provide no such identification in their own work. They provided sufficient evidence that utilizing a trajectory of distributions improves the student but this has not been shown to be necessary. There may be alternative approaches that work well while using only the endpoint. This is present elsewhere in the work, e.g. 
\"the fat convergence speed is because constraining the convergence trajectory of the generator provides a clear optimization direction\", this is a strong technical claim that has not been adequately explored.\n\nThe authors could do more to explain why DisBack is successful. Some theoretical analysis could help to explain the improved convergence speed, or experiments designed to show the convergence more carefully. For instance, I'd be interested to see how quickly the student model converges to each point on the trajectory, and how closely it matches the distribution. Presumably, this behaviour must be better than linear for DisBack to succeed, which is interesting. Overall, I felt that the idea worked well and was intuitive, but I didn't understand why it worked so well after reading the paper.\n\nThe ablation study is minimal (and I would argue does not qualify as an ablation study). The authors only explore DisBack and the original method side-by-side. Instead, they could also investigate the effect of the number of degradation path checkpoints, different student initializations, different degradation schemes (like using the training trajectory), etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper has a clear motivation: to address the initial score mismatch problem in diffusion distillation. Specifically, the authors first identify the suboptimal performance of existing diffusion distillation methods as being due to the score mismatch issue at the beginning of student model training, and then propose a novel approach to resolve it. \n2. The proposed approach is intuitive and effective. It makes sense to follow the degradation path from the teacher model to the student model in reverse during distillation, providing a progressive learning signal for the student and mitigating the initial score mismatch problem. In practice, by introducing this backtracking strategy, the distillation process is shown to be significantly faster than its variant without this technique. \n3. The proposed approach is versatile, as it is orthogonal to other diffusion distillation approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a novel approach is proposed to further improve existing diffusion distillation methods. The proposed approach leverages the convergence trajectory from teacher model to the initial state of student model to guide the training of student model backwards, which mitigates the score mismatching problem at the beginning of the distillation. Empirical results have demonstrated the superior performance of the proposed approach." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The state-of-the-art claim for the proposed method is misleading. According to the experimental setup in Appendix B.2, the proposed method trains for 500,000 **epochs** ($\\approx500K \\times \\frac{|D|}{|B|}$ **iterations or steps**, where $|D|$ is the data size and $|B|$ is the batch size). This number is significantly higher than for the baselines. For example, Diff-Instruct only trains for ${\\color{red}50K}$ **iterations** on ImageNet $64\\times 64$, while DisBack (this paper) uses about ${\\color{red}500K\\times 40K=20G}$ **iterations** ($|D|=1,281,167$ and $|B|=32$), which is approximately $40,0000$ times larger. Even if \"epochs\" actually refers to steps (if it is a typo), it still represents 10 times the training length compared with the Diff-Instruct baseline. Additionally, the result (${\\color{green}1.51}$) of DMD2 on ImageNet $64\\times 64$ is achieved by training for ${\\color{red}200K}$ **iterations**. With the extended training setup (${550K}$ **iterations** in total), DMD2 could achieve an FID of ${\\color{green}1.28}$, which is lower than DisBack's ${\\color{green}1.38}$. This raises concerns that the proposed strategy may not fully account for the state-of-the-art performance showcased. This is also supported by their ablation study and Table 3. The variant (\"w/o convergence trajectory\") is essentially Diff-Instruct, as noted in the main text on Lines 391-392. However, even this variant, when trained under the same setting, shows better performance on FFHQ (12.26) versus the original Diff-Instruct (19.93).\n\n2. The speedup shown in Figure 1 is only plotted for epochs 0 to 2000, which covers only the early stage of the distillation. More epochs, either until convergence or until training budgets are exhausted, are needed to better understand how the backtracking strategy behaves throughout training.\n\n3. Although the entire concept revolves around backtracking the degradation path, in actual training, each intermediate checkpoint is only trained for as few as $1000$ steps (for FFHQ, AFHQv2, and ImageNet at $64\\times64$ resolution), while the remaining steps are trained with the original teacher model. This means that the proposed backtracking is used for only a small fraction of the student model's training, which makes it even harder to attribute the superior performance to the proposed strategy." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My major worry is the reliability of performance and the laborious algorithm. Please respond to my worries." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The idea is simple but effective. 
It makes sense that distilling from a degraded teacher makes the distillation faster.\n- The Degradation Recording algorithm looks reasonable. The degraded teacher finally converges to the initialized student distribution, which makes the student easy to train in the early stage.\n- The results compared to Diff-Instruct suggest the algorithm is effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DisBack, a new distillation method for diffusion models. On top of Diff-Instruct, DisBack proposes a better training algorithm. While Diff-Instruct only uses the pre-trained diffusion teacher, DisBack builds a series of degraded teachers and uses those teachers iteratively. This makes it easier for the student model to learn the teacher distribution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My major worry is that I cannot trust the performance. The paper distills from EDM on ImageNet 64, which has 2.44 FID, but the distilled student achieves 1.38 FID. In my understanding, the teacher should be an upper bound on the student's performance.\n\n- I also cannot believe the user preference study compared to the SDXL teacher. How can it be better than the teacher? \n\n- The ablation on the number of degraded teachers N is missing. I want to see progressive performance gains from N=1 (equivalent to Diff-Instruct) to large N.\n\n- Is there any scheduling algorithm that changes the teacher in stage 2? It may require many trials to find the schedule that determines when to switch the target teacher from degraded to original.\n\n- Figure 1 is a little over-claimed. This comparison should include the training costs of stage 1." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The score estimation (7) is not general for all noising methods. For example, the score estimation of ddpm has a mean scale $\\alpha_t$. When doing distillation, should the teacher and student noising methods be kept consistent?\n- Compared with Diff-Instruct, which only trains the student model to fit one teacher model, Algorithm 2 needs to fit $N-1$ intermediate checkpoints; what is the training overhead of this part? In Fig.1, are the epochs for DisBack on the x-axis counted from the degradation recording stage, or from some other point?\n- Are there any experiments showing the influence of the number of degradation checkpoints and the number of degradation epochs? Would more checkpoints and epochs mitigate the mismatch better?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. DisBack addresses the common “score mismatch” issue in score distillation by incorporating the entire convergence trajectory.
DisBack enables the student generator to align more accurately with the teacher model, leading to faster convergence and better optimization paths.\n2. DisBack is designed to be easily integrated into current distillation frameworks, providing a versatile tool to further boost performance in generative model distillation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Distribution Backtracking Distillation (DisBack), a method to accelerate sampling in diffusion models by addressing the “score mismatch” issue common in traditional score distillation approaches. Unlike existing methods that rely solely on the endpoint of a pre-trained teacher model, DisBack captures the full convergence path between the teacher and student models. It does this through two stages: Degradation Recording, which records a degradation path from the teacher to the untrained student model, and Distribution Backtracking, where the student generator retraces this path to improve alignment with the teacher model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Major**:\n- While the authors claim DisBack is orthogonal to other distillation methods, there is no evidence to support this point. It would be valuable if the authors could provide further experiments to show it can be incorporated into other distillation methods, like consistency distillation or adversarial score distillation. \n- The paper aims to mitigate the score mismatch issue by employing degradation recording as the convergence trajectory for distillation. Along the degradation path, the mismatch between the predicted score of generated samples and the intermediate model's prediction is reduced, but the mismatch between the intermediate model's prediction and the teacher's score prediction becomes larger. This suggests a potential tradeoff between these two types of mismatches, which could impact the final model’s performance. Providing further analysis or empirical results on this point would strengthen the motivation and effectiveness of this approach.\n\n**Minor**:\n- In Eq.(6), $\\partial x_t/\\partial \\eta$ should be included in the expectation, same as (8). \n- It would be better to use bold $\\epsilon$ for the noise and to show the relationship between $\\epsilon$ and $x_t$. \n- In Algorithms 1 & 2, since the loss includes the expectation w.r.t. $t$ and $\\epsilon$, the line to calculate $x_t = x_0 + \\sigma_t \\epsilon$ is unnecessary and misleading. \n- Labels in Fig.7 are wrong." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes an efficient and fast distillation method for diffusion models by introducing the convergence trajectory." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024distribution,\ntitle={Distribution Backtracking Builds A Faster Convergence Trajectory for Diffusion Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ySt3cdGfJ},\nnote={under review}\n}" }, "abstract": { "value": "Accelerating the sampling speed of diffusion models remains a significant challenge.
Recent score distillation methods distill a heavy teacher model into a student generator to achieve one-step generation, which is optimized by calculating the difference between two score functions on the samples generated by the student model.\nHowever, there is a score mismatch issue in the early stage of the score distillation process, since existing methods mainly focus on using the endpoint of pre-trained diffusion models as teacher models, overlooking the importance of the convergence trajectory between the student generator and the teacher model.\nTo address this issue, we extend the score distillation process by introducing the entire convergence trajectory of the teacher model and propose $\\textbf{Dis}$tribution $\\textbf{Back}$tracking Distillation ($\\textbf{DisBack}$). DisBack is composed of two stages: $\\textit{Degradation Recording}$ and $\\textit{Distribution Backtracking}$. \n$\\textit{Degradation Recording}$ is designed to obtain the convergence trajectory by recording the degradation path from the pre-trained teacher model to the untrained student generator.\nThe degradation path implicitly represents the intermediate distributions between the teacher and the student, and its reverse can be viewed as the convergence trajectory from the student generator to the teacher model.\nThen $\\textit{Distribution Backtracking}$ trains the student generator to backtrack the intermediate distributions along the path to approximate the convergence trajectory of the teacher model.\nExtensive experiments show that DisBack achieves faster and better convergence than the existing distillation method and achieves comparable or better generation performance, with an FID score of 1.38 on the ImageNet 64$\\times$64 dataset.\nDisBack is easy to implement and can be generalized to existing distillation methods to boost performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Model", "Diffusion Distillation", "One-step Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/04031f43fff495e38be88811499cec8250cad33e.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/82df8ac7c8a9e9f535672fe85a1bf0b46f7a274d.zip" }, "title": { "value": "Distribution Backtracking Builds A Faster Convergence Trajectory for Diffusion Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2yqAzFPT4F
Zer0-Jack: A memory-efficient gradient-based jailbreaking method for black box Multi-modal Large Language Models
main
Active
Jailbreaking attacks;Black-box MLLMs;Zeroth-order optimization
alignment, fairness, safety, privacy, and societal considerations
3;5;5;5
4;4;3;3
2;3;3;3
2;3;2;2
2;2;1;3
4.5
3.5
2.75
2.25
2
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is well written.\n* The method is sound.\n* The performance shows that the proposed method can improve the performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method that introduces the zero-order black-box attack into the jailbreak attacks against Multi-modal Large Language Models. Experimental results demonstrate it outperforms several recent methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* My main concern is that the proposed method lacks novelty. Many similar methods have already been proposed to perform adversarial attacks against vision models, e.g., [1]. The authors should discuss these related works in detail and highlight the differences between the proposed method and existing ones.\n\n* It would be beneficial to provide a more detailed discussion on why Zer0-Jack outperforms \"WB\" in Tables 2 and 3.\n\n* The paper lacks comparisons with many previous works.\n\n\n[1] Chen, Pin-Yu, et al. \"Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models.\" Proceedings of the 10th ACM workshop on artificial intelligence and security. 2017." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How do you estimate the value of the loss function of the black-box MLLM? Do you need to access the output scores of the MLLM? \n2. The experiments show that only around 50 iterations for each attack. Will this be influenced by the scale of model parameters and image size?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper achieved a high attack success rate on MiniGPT-4.\n2. This paper proposed a patch-based method to reduce memory usage." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new black-box attack on MLLM. 
Moreover, it proposes attacking only part of the image to reduce the computational complexity. However, this paper seems to be just an application of the zeroth-order optimization attack to MLLMs with few modifications. Zeroth-order optimization is a widely used black-box attack method, and I think the contribution of this paper is small." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper just applies the zeroth-order optimization attack to MLLMs with very few modifications. There are already some papers that have applied the ZOO to black-box attacks, such as \n[1] Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent, AAAI2020\n[2] Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization, NIPS2019\nIt would be helpful if the authors could compare their method with these or other recent black-box attack baselines.\n2. In Equation 4, you estimate the gradient according to the value of the loss function. But how do you estimate the value of the loss function of the black-box MLLM? Do you need to access the output scores of the MLLM? More details should be provided.\n3. More ablation studies should be conducted, such as the influence of MLLM size and image size on the ASR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- How many update steps were used for Zer0-Jack in the experiments? Is it consistent with other baselines? If not, why are they different?\n- For results presented in Table 1, are they based on a single image or a batch of images? It would be great to present both a single image and a batch of images.\n- Line 226, why is a patch of 32 by 32 normally used for a 224 by 224 image? And how does this become 0.02% of the updated dimensions in lines 281-283?\n- Line 100, \"a single 4090 without any quantization\", does this mean a single NVIDIA RTX 4090 GPU?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed method is memory-efficient and operates within a black-box threat model, making it practical for real-world applications. Notably, this work highlights a safety vulnerability related to exposing logit probabilities in API responses—a finding that could significantly impact current LLM service practices. This insight into potential risks may prompt further consideration of security measures in API design for LLMs.\n- The proposed method is technically sound and has been rigorously validated using MMSafetyBench, where it achieved a significantly higher attack success rate than several baseline methods and demonstrated performance comparable to white-box attacks.
Additionally, evaluations of commercial models like GPT-4o further showcase its effectiveness.\n- The approach of iteratively estimating the gradient over image patches is a creative and technically sound idea to address estimation errors in high-dimensional space inherent to zeroth-order optimization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Zer0-Jack, a method designed to jailbreak Multi-modal Large Language Models (MLLMs) without requiring gradient access, enabling it to function in a black-box threat model. Zer0-Jack employs zeroth-order optimization to approximate gradients using logits, though such an approach can introduce estimation errors in high-dimensional spaces. To address this, Zer0-Jack iteratively optimizes patches of the image, mitigating these errors. Compared to other methods, Zer0-Jack demonstrates improved memory efficiency in constructing the attack. Experimental results on MMSafetyBench confirm its effectiveness, achieving performance comparable to white-box attacks and significantly surpassing existing black-box attack methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed method relies on access to the logit output from the victim model, which aligns more closely with a grey-box rather than a fully black-box threat model. In API services, a potential defense could involve disabling logits or probability outputs in responses, effectively countering this type of attack. While identifying the vulnerability associated with logits/probability exposure is an insightful contribution, it is worth noting that the method’s success depends on this information being completely or partially accessible. \n- The paper lacks evaluations of detection methods, which are particularly relevant for query-based attacks. Repeated or suspicious query patterns could potentially alert defenders. Including experiments that test Zer0-Jack against detection mechanisms, such as those proposed in [1, 2], would be helpful to improve the contribution of the paper.\n- The paper lacks evaluations with prompt-based defense. For example, methods in [3, 4].\n- The evaluation setup for text-based attacks lacks clarity. Specifically, it’s unclear whether the experiments with GCG, AutoDAN, and PAIR combine adversarial text prompts with random images. This setup may not fairly represent these methods, as random images could interfere with the effectiveness of the text prompts. A fairer comparison would assess the ASR of these methods without image inputs. Additionally, the statement suggesting that MLLMs cannot accept text-only input appears misleading; most MLLMs can process text-only queries. Some models, such as LLaVA-1.5 and MiniGPT-4, employ frozen language models like Vicuna and Llama-2, and using the corresponding LLMs for text-only attack evaluations would provide a more accurate assessment.\n- The paper has a few confusing parts that would benefit from further clarification. Please refer to the questions section.\n- Minor typos: lines 130-131 ”Do-AnythingNow” (DAN).\n\n\n--- \n\n[1] Chen, S., Carlini, N., & Wagner, D. (2020, October). Stateful detection of black-box adversarial attacks. In Proceedings of the 1st ACM Workshop on Security and Privacy on Artificial Intelligence (pp. 30-39).\\\n[2] Li, H., Shan, S., Wenger, E., Zhang, J., Zheng, H., & Zhao, B. Y. (2022). Blacklight: Scalable defense for neural networks against {Query-Based}{Black-Box} attacks. 
In 31st USENIX Security Symposium (USENIX Security 22) (pp. 2117-2134).\\\n[3] Zhang, Y., Ding, L., Zhang, L., & Tao, D. (2024). Intention analysis prompting makes large language models a good jailbreak defender. arXiv preprint arXiv:2401.06561.\\\n[4] Robey, A., Wong, E., Hassani, H., & Pappas, G. J. (2023). Smoothllm: Defending large language models against jailbreaking attacks. arXiv preprint arXiv:2310.03684.\\" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I rewrite some of my questions from the previous section here more concisely:\n\nQ1) Could the authors provide additional experiments exploring what aspects of the Zer0-Jack algorithm make it so effective (e.g. varying patch sizes, number of gradient samples, and smoothing parameter)?\n\nQ2) Could the authors please address the issues raised in the **Clarity of writing and presentation** weaknesses section?\n\nQ3) Could the authors please address my concern relating to the memory consumption calculation that I raised in the **Focus on memory consumption** weaknesses section?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "#### Originality\n\nThe paper's use of zeroth-order optimization to create jailbreaking images against black-box models is, to my knowledge, novel. In addition, their results showing jailbreaking image attacks on GPT-4o are also novel and very impressive.\n\n#### Quality and Clarity\n\nThe Zer0-jack method is explained well and is easy to follow. For the most part, results are also explained well and back up the main claims made in the paper. \n\n#### Significance\n\nThe most significant adversarial attack algorithms are those that can be applied in a black-box setting, and are able to successfully attack state-of-the-art models. The method and results in this paper clearly fit into this category, giving the paper good significance.\n\nI found myself being surprised that the Zer0-Jack method was so effective. Especially that using black-box gradient estimations could be almost as sample efficient as white-box attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces Zer0-Jack, a method to create adversarial images for MLLMs that jailbreak said models. \n\nThe Authors' method uses zeroth-order gradient estimation to apply edits to patches of images in series with the goal of maximizing the model's probability of responding to a harmful request in the affirmative.\n\nThe Authors' results show that Zer0-Jack is very effective, achieving a jailbreaking attack success rate comparable to white-box gradient-based methods.
What's more, due to the gradient-free nature of Zer0-Jack, it achieves these results with a comparatively lower memory requirement.\n\nFinally, the Authors show that their method can be applied to jailbreak GPT-4o, accessing the required logit information using the logit_bias feature of the GPT-4o API." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am going to split my critique up into two sections. The first will be a high-level critique of the paper, and the second will be specifics about sections. Whilst the critique is long, this is only because I believe the paper has interesting results that could be improved, not because I think there are any fundamental failings in the paper. To the contrary, I think the paper contains valuable insights for the broader adversarial attack community.\n\n## High Level\n\n**Algorithmic Insights**\n\nThe biggest impact of this paper would come from other practitioners being able to apply Zer0-Jack to novel situations, or apply insights gained from reading this paper to novel situations. Zer0-Jack shows effectiveness in an area that prior works have struggled with (black-box jailbreaking of language models). For this reason, I think the paper would have a far greater impact if it could provide more insight on why the Zer0-Jack algorithm is effective. Some specific examples of experiments that would be useful in achieving this include:\n- How adjusting the size of patches affects performance.\n- How adjusting the updating of patches affects performance (is sequential optimal?).\n- How the smoothing parameter affects performance.\n- We can decrease the variance of the zeroth-order gradient estimator by sampling many random vectors from the unit ball, and averaging. An experiment exploring the number of samples and convergence rate would be valuable, as well as comparisons between the gradient estimator and the true gradient. It may be the case that in this setting, very few samples are needed to get an accurate estimate for the gradient. This kind of information could be very valuable for future works.\n\n**Clarity of writing and presentation**\n\nThe clarity of writing and presentation of the paper could be improved. I found myself confused at times trying to understand the exact experiments that the Authors ran. Some examples include:\n1) In section 4.6, the Authors provide a single example of jailbreaking GPT-4o. There needs to be more explanation of a) how the Authors used the logit bias, and b) the exact experiment that was run. For example, what was the attack success rate against GPT-4o? By only providing a single qualitative example, it would suggest the attack success rate was low. This is not a problem, it simply should be presented to the reader.\n2) In section 4.4, I was not sure what the definition of an iteration was in the case of Zer0-Jack vs WB attack. For an apples to apples comparison, this should probably be number of required forward passes, but we could equally define 1 iteration of Zer0-Jack as providing updates to all patches (which would require num_patches number of forward passes).\n\n\n**Focus on memory consumption**\n\nThe Authors present the lower memory consumption of Zer0-Jack as a benefit to the algorithm over gradient-based alternatives. This is certainly a benefit, but I do not think it is a hugely significant one. This does not mean this analysis should be removed from the paper, simply that I do not think it adds significance to the method.
\n\nIn addition, on line 460, the Authors state \"WB Attack, applied to MLLMs like MiniGPT-4, use about 19GB each due to the need for gradient retention, while Zer0-Jack significantly reduces memory usage without sacrificing performance, uses only 10GB of memory.\" I am slightly confused by this. When running the WB attack, if all of the parameters of the model are frozen (in pytorch language, `parameter.requires_grad == False`) then there should be very little additional memory overhead when training? Did the authors set `requires_grad` to `False` for this evaluation or is my understanding of memory consumption incorrect? \n\nConcretely, when setting `requires_grad==False`, WB attack should only have to store gradients over the input image (and some intermediate gradients during the backward pass, but critically NOT the gradient for every model parameter) and so I do not expect the memory consumption to be ~double of that of a black-box \"forward only\" method.\n\n\n## Section Specific\n\nHere I include some smaller concerns with individual sections.\n\nSection 3\n- Writing is not succinct. Equation (8) is unnecessary, as is equation (9). The algorithm does a good job of explaining the method though.\n- Line 282, Authors claim the dimension is 0.02% of the total image as a whole. I may be incorrect here, but should the ratio not be (32 * 32)/(224 * 224) = 0.02 = 2%\n\nSection 4\n- It would be good to include examples from Harmful Behaviors Multi-modal Dataset and MM-SafetyBench-T in the Appendix.\n- Nit - On line 316, Authors state \"Since the selected MLLMs do not support text-only input, we pair the P-text with a plain black image containing no semantic information.\" From my experience working with these models, they can accept text only inputs, you simply input the text only through the language model backbone?\n- The GCG transfer baseline is somewhat unfair. In their paper they get the best transfer by using GCG against an ensemble of models, whereas my understanding is the Authors only attack one model? The baseline could be made stronger by attacking an ensemble of surrogate models. \n- On line 323, Authors state \"We will pair the malicious text prompts with corresponding images to evaluate their performance on Multi-modal LLMs.\" What are these images?\n- Line 346, the Authors state \"To our knowledge, few approaches specifically optimize the image component of an image-text pair for jailbreak attacks on MLLMs.\" This is incorrect, in fact the Authors cite some papers that do this (Qi et al. and Bailey et al. for example). Given that the WB baseline is using these techniques, I am guessing this sentence can just be removed?\n- The WB attack should be explained in more detail.\n- Lines 369-371 are not needed.\n- Nit - In the caption of Table 2, Authors should state that the blank entries are due to OOM (this is only stated in the main text currently).\n- I would recommend creating Table 4 but for the Harmful Behaviors dataset. I expect GPT-4o to have 0% attack success rate without an attack present.\n\n\nWhilst I raise a number of weaknesses, I think the Zer0-Jack method is highly interesting, and thank the Authors for their work! Because the core idea is so interesting, I simply think the work could be improved with more detailed experimentation (Algorithmic Insights mentioned above) and better presentation. The core ideas presented in the paper are strong and constitute valuable research, in my opinion.
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Zer0-Jack uses zeroth-order optimization to directly attack black-box models, addressing the high memory consumption and performance degradation issues of previous methods." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024zerjack,\ntitle={Zer0-Jack: A memory-efficient gradient-based jailbreaking method for black box Multi-modal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2yqAzFPT4F},\nnote={under review}\n}" }, "abstract": { "value": "Jailbreaking methods, which induce Multi-modal Large Language Models (MLLMs) to output harmful responses, raise significant safety concerns. Among these methods, gradient-based approaches, which use gradients to generate malicious prompts, have been widely studied due to their high success rates in white-box settings, where full access to the model is available. However, these methods have notable limitations: they require white-box access, which is not always feasible, and involve high memory usage. To address scenarios where white-box access is unavailable, attackers often resort to transfer attacks. In transfer attacks, malicious inputs generated using white-box models are applied to black-box models, but this typically results in reduced attack performance.\nTo overcome these challenges, we propose Zer0-Jack, a method that bypasses the need for white-box access by leveraging zeroth-order optimization. We propose patch coordinate descent to efficiently generate malicious image inputs to directly attack black-box MLLMs, which significantly reduces memory usage further. Through extensive experiments, Zer0-Jack achieves a high attack success rate across various models, surpassing previous transfer-based methods and performing comparably with existing white-box jailbreak techniques. Notably, Zer0-Jack achieves a 95\\% attack success rate on MiniGPT-4 with the Harmful Behaviors Multi-modal Dataset, demonstrating its effectiveness. Additionally, we show that Zer0-Jack can directly attack commercial MLLMs such as GPT-4o. Codes are provided in the supplement." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Jailbreaking attacks", "Black-box MLLMs", "Zeroth-order optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c6c3109ff743c8d65be5313397db0bcb6db2c31c.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c5983d9153bb0a480baca4fe40c736b96684d767.zip" }, "title": { "value": "Zer0-Jack: A memory-efficient gradient-based jailbreaking method for black box Multi-modal Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
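Several of the Zer0-Jack reviews ask the same thing: how gradients are estimated from loss values alone (the paper's Equation 4) and what "patch coordinate descent" buys. The sketch below is a minimal stand-in, not the authors' code: `loss_fn` is a hypothetical scalar loss replacing queries to a black-box MLLM's logits, and the image size, patch size, smoothing parameter `mu`, and step size are illustrative choices.

```python
# Minimal sketch of a two-point zeroth-order gradient estimate applied one
# image patch at a time. Only forward (loss-value) queries are used, so no
# gradients are ever stored -- the memory property the reviews discuss.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
patch = 16
mu, lr, sweeps = 0.01, 0.5, 20          # smoothing parameter, step size

target = rng.normal(size=(H, W))        # toy stand-in: a real attack would
def loss_fn(img):                       # query the black-box model instead
    return float(np.sum((img - target) ** 2))

img = np.zeros((H, W))
print(f"initial loss: {loss_fn(img):.1f}")
for _ in range(sweeps):
    for i in range(0, H, patch):        # sweep patches sequentially
        for j in range(0, W, patch):
            u = rng.normal(size=(patch, patch))
            u /= np.linalg.norm(u)      # random unit direction on a patch
            pert = np.zeros_like(img)
            pert[i:i + patch, j:j + patch] = u
            # two-point estimator of the directional derivative along u
            # (a dimension-dependent scale is absorbed by the step size)
            g = (loss_fn(img + mu * pert) - loss_fn(img - mu * pert)) / (2 * mu)
            img[i:i + patch, j:j + patch] -= lr * g * u
print(f"loss after {sweeps} sweeps: {loss_fn(img):.1f}")
```

Restricting each perturbation to a patch keeps the estimated dimension small (here 256 instead of 4096), which is the stated reason a single-direction estimator stays usable; averaging several random directions per step, as one review suggests, would reduce its variance further.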
2z1HT5lw5M
Trajectory attention for fine-grained video motion control
main
Active
Trajectory attention;video generation;motion control
generative models
5;6;6;6;6
3;5;4;4;4
2;3;3;3;3
3;3;3;3;3
2;3;2;3;3
5.8
4
2.8
3
2.6
0.790569
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I suggest to review and correct the mathematical formulations and notation to enhance the paper's clarity and reliability." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method is lightweight, requiring low training costs, making it practical and efficient for real-world applications without the need for extensive computational resources.\n2. The method demonstrates strong transferability, showing effectiveness with different architectures such as DiT. \n3. The paper conducts thorough exploration at application level, showcasing the method's effectiveness in multiple tasks, including camera motion control and video editing. Abalation studies are sufficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel approach called Trajectory Attention for fine-grained video motion control, particularly aiming to enhance camera motion control in video generation tasks. By modeling trajectory attention as an auxiliary branch alongside traditional temporal attention, the method leverages available pixel trajectories to inject precise motion information into the video generation process. This design allows the original temporal attention and the trajectory attention to work synergistically. The proposed method demonstrates strong adaptability, e.g., being transferable to architectures like DiT. Experiments across various tasks show significant improvements in control precision and content consistency while maintaining high-quality generation. Extensive ablation studies validate the effectiveness of each module." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method heavily relies on dense optical flow information, as shown in Figure 3 of the supplementary material. This dependency can significantly increase inference time due to the computational cost of processing dense optical flow, especially in real-time applications. \n2. The reliance on dense optical flow makes it challenging to adapt the method to user inputs of sparse trajectories. As noted in DragNUWA, it's difficult for users to input precise trajectories at key points in practical applications, leading to a gap between training and inference. This limitation reduces the method's practicality in scenarios where only sparse motion cues are available.\n3. In line 158, H and W represent the dimensions of the latent features, but in Algorithm 3, H and W are used for image dimensions, which is confusing.\n4. Some examples in Fig.6 and Fig.9 are not significant, like the second example in Fig.6." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper introduces a novel concept of Trajectory Attention for fine-grained motion control in video generation. This auxiliary attention mechanism enhances the existing temporal attention in video diffusion models by explicitly incorporating trajectory information, which is a significant advancement in the field.\n2. By modeling trajectory attention as an auxiliary branch that works alongside the original temporal attention, the approach allows for seamless integration without modifying the original model parameters. This design choice is both practical and efficient, leveraging pre-trained models and enabling efficient fine-tuning.\n3. The proposed method demonstrates significant improvements in motion control precision and long-range consistency over existing methods. The experimental results, including quantitative metrics like Absolute Trajectory Error (ATE) and Relative Pose Error (RPE), validate the effectiveness of the approach.\n4. The paper includes thorough experiments and ablation studies that not only demonstrate the superior performance of the proposed method but also validate the design choices. This strengthens the credibility of the findings and provides valuable insights into the method's effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Trajectory Attention, a novel approach designed to enhance fine-grained motion control in video generation, particularly focusing on precise camera motion control within video diffusion models. Traditional methods often struggle with imprecise outputs and neglect temporal correlations, leading to inconsistencies in generated videos. This work addresses these challenges by explicitly modeling trajectory attention as an auxiliary branch alongside the standard temporal attention mechanism. By modeling trajectory attention as an auxiliary branch alongside the standard temporal attention, the method explicitly injects available pixel trajectory information into the video generation process. This design allows the temporal attention to focus on motion synthesis and short-range dynamics, while the trajectory attention ensures long-range consistency along specified paths. The approach efficiently integrates trajectory information without modifying the original model parameters and supports sparse trajectories, meaning it can handle partial trajectory data. Experiments demonstrate that this method significantly improves motion control precision and video quality across various tasks, including camera motion control on images and videos, as well as first-frame-guided video editing." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method is primarily designed for video diffusion models that use decomposed spatial-temporal attention. It is less clear how well the approach generalizes to models with integrated spatial-temporal attention (e.g. 3D DiTs) or other architectures. Expanding the evaluation to include such models would strengthen the contribution.\n2. The paper compares the proposed method with a limited set of existing approaches. Including discussions with more recent or state-of-the-art methods, especially those that have emerged concurrently, would provide a more comprehensive evaluation of the method's relative performance. For example, Collaborative Video Diffusion [1] uses epipolar attention to align contents of different camera trajectories, and Camco [2] also uses epipolar, but to enhance the 3D consistency of generated contents.\n3. The experimental evaluations are primarily conducted on the MiraData dataset. While this dataset may offer certain advantages, relying on a single dataset limits the ability to generalize the findings. Evaluating the method on additional, diverse datasets would strengthen the claims about its general applicability.\n4. While the method supports sparse trajectories, the paper does not extensively explore how it performs when the trajectory information is highly sparse, incomplete, or noisy. Real-world applications often involve imperfect data, so robustness to such conditions is important. Going back to my point 2, this is especially concerning since the model is trained on MiraData, which mostly consists of synthetic videos.\n\n[1] Kuang et al. Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control, in NeurIPS, 2024.\n\n[2] Xu et al. CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation, in arXiv, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) How do you ensure that when attention is applied along the trajectory, the generated pixel also follows the trajectory? Have you observed any cases where control fails?\n\n2) In Algorithm 3, are you feeding \\{I_r\\} to the model in any particular format? The same question applies for Algorithm 4 with \\{V_r\\}.\n\n3) Is the comparison to other work (motion control/camera control) fair? They are trained on different datasets, and they may have some issue generalizing to the evaluation dataset used here. How did you select the evaluation set? Were you able to evaluate on the test set of other papers? \n\n4) In training, optical flow is used as a trajectory, but in inference, the model takes the camera trajectory as input. Could this cause a mismatch between training and inference? Why not use the camera trajectory as guidance during training as well?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Metric-wise, it seems the model achieves better camera control.\n- The model can be used for first-edited-frame + original-video-guided editing, though how this is achieved is not very clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes injecting a new attention layer along the trajectory into the model to support camera motion control in video generation. During training, optical flow is used as the trajectory, and the new attention operation is performed only along this trajectory. The trained model achieves good results in camera control for image-to-video and video-to-video tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Figure-1 is confusing. It takes some time to understand the input and output of each task. It would be better to reorganize this figure to make it clearer. Each task could be separated into a small sub-figure with a clear indication of the input and output.\n\n2) In Figure-3, it’s unclear what the model’s input is in two scenarios: (1) when you have multiple frames as input, i.e., ‘camera motion control on videos’ in Figure-1, and (2) when you have multiple frames plus edited frames as input, i.e., ‘first-frame-guided video editing’ in Figure-1.\n\n3) The trajectory attention mechanism operates only in 2D space, making it challenging to distinguish motion orthogonal to the image plane—for example, whether an centered object is moving towards or away from the camera. In such cases, the coordinates remain the same across frames. Could this be a limitation of this method?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "According to the weaknesses, you can take the suggestions below to make your paper more convincing:\n1. In Algorithm 3, please elaborate on which depth estimation method you take in step 1 and how you render a set of views $I_{r}$ and get the translation of pixels $T$ in step 2. In Algorithm 4, please elaborate on which point trajectory estimation method you take in step 2. Meanwhile, could you provide the visual results of the trajectory extraction from a single image and a video to demonstrate the correctness of Algorithms 3 and 4? \n2. Provide results of your method on videos with occlusions, rapid camera movements, and multiple moving objects, respectively.\n3. Provide a comparison with more related works, such as [MotionBooth](https://arxiv.org/abs/2406.17758) and [CamTrol](https://arxiv.org/abs/2406.10126). More comparisons to concurrent work are also encouraged but not mandatory.\n\nIf the authors could solve my problems, I would raise the score." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Originality: The paper demonstrates originality in its approach to video motion control. The concept of trajectory attention, modeled as an auxiliary branch to traditional temporal attention, is a novel way to incorporate pixel trajectories for fine-grained camera motion control. This approach differs from existing methods that either rely on high-level constraints or neglect temporal correlations.\n2. Quality: The experimental setup is comprehensive, using a large-scale dataset and multiple evaluation metrics. The results are presented in a clear and organized manner, with both quantitative comparisons and qualitative visualizations. The ablation study further validates the effectiveness of the proposed components, indicating a high level of quality in the research design and execution.\n3. Significance: The significance of the paper lies in its potential impact on the field of video generation and motion control. The proposed method shows improved performance in camera motion control for both images and videos, which is crucial for creating high-quality and customized visual content." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on fine-grained camera motion control in video generation. It has the following contributions:\n1. Trajectory Attention Mechanism: Proposes a novel trajectory attention branch alongside the original temporal attention branch. It models attention along available pixel trajectories for camera motion control.\n2. Improved Performance: Demonstrates significant improvements in precision and long-range consistency for camera motion control in both images and videos while maintaining high-quality generation.\n3. Extension to Other Tasks: Shows that the approach can be extended to other video motion control tasks, such as first-frame-guided video editing, where it excels in maintaining content consistency over large spatial and temporal ranges." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper does discuss trajectory extraction for different tasks such as camera motion control on images and videos, and video editing. However, the description of the extraction process could be more detailed and clear. For example, in Algorithm 3 for trajectory extraction from a single image, some steps might require further clarification for a reader who is not familiar with the underlying concepts. The estimation of the depth map and the rendering of views are steps that could be explained in more detail, including the methods and algorithms used. Similarly, in Algorithm 4 for video trajectory extraction, the point trajectory estimation and the combination with camera motion could be more clearly described.\n2. While the proposed trajectory attention method shows promising results in the presented experiments, there is a lack of exploration of more complex scenarios. For example, in real-world video data, there may be occlusions, rapid camera movements, or multiple moving objects, and it is not clear how the method would perform in such situations.\n3. The comparison with existing methods, although extensive to some extent, could be more comprehensive. 
Several other video motion control techniques were not included in the comparison; some of them may have distinctive features or advantages, and comparing against them would give a more nuanced picture of where the proposed method is superior." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How can one obtain or customize the appropriate intrinsic and extrinsic parameters when performing trajectory extraction for a single image or video? Does the camera always need to be directed at the center of the image?\n\n- Is it necessary to adjust the camera's intrinsic and extrinsic parameters based on the depth information available?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The Trajectory Attention module is intuitive and offers flexibility in capturing temporal correlations in camera motion. This innovative approach effectively addresses the challenges associated with fine-grained control of camera motion.\n\n- The experiments on camera motion control for both images and videos are impressive. They demonstrate significant improvements in precision and long-range consistency, all while maintaining high-quality generation. These results underscore the effectiveness of the proposed method in handling complex camera motion scenarios.\n\n- The paper effectively shows that the approach can be extended to other video motion control tasks. For instance, in first-frame-guided video editing, the method excels at maintaining content consistency over large spatial and temporal ranges. This versatility is a testament to the robustness and general applicability of the Trajectory Attention framework." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Trajectory Attention, an innovative method for fine-grained camera motion control that attends to available pixel trajectories. The authors identify conflicts between the original temporal attention modules in diffusion models and supplementary trajectory-conditioned temporal modules. To resolve these conflicts, the paper employs optical-flow data to define trajectories, samples the most correlated points along them, and applies a copy attention mechanism to enhance trajectory precision. The original temporal module is retained for consistency. Comprehensive experiments on camera motion control for both images and videos demonstrate significant improvements in precision and long-range consistency without compromising high-quality generation. 
Furthermore, the approach is shown to be extensible to other video motion control tasks, including first-frame-guided video editing, where it maintains content consistency over extensive spatial and temporal dimensions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The image-to-video examples presented in the supplementary material raise concerns about object dynamics. The examples, such as the dog and the cat, lack additional motion, which could be a limitation. It would be beneficial to see how objects with more complex dynamics are handled by the method.\n\n- There is a concern regarding the generalization of camera pose. In the Image-to-Video (first-frame) scenario, the trajectory module is trained with optical-flow data from only 10K video clips. It's unclear how the method would perform under challenging motions, such as clockwise rotation, high-speed zooming in and out, or 360-degree rotations like those shown in the NVS-Solver GitHub repository. In these extreme trajectories, points visible in the first frame may become invisible, potentially leading to anti-aliasing issues. Additional results, or a discussion of the relevant limitations, would aid in a more comprehensive assessment of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024trajectory,\ntitle={Trajectory attention for fine-grained video motion control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2z1HT5lw5M},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in video generation have been greatly driven by video diffusion models, with camera motion control emerging as a crucial challenge in creating view-customized visual content. This paper introduces trajectory attention, a novel approach that performs attention along available pixel trajectories for fine-grained camera motion control. Unlike existing methods that often yield imprecise outputs or neglect temporal correlations, our approach possesses a stronger inductive bias that seamlessly injects trajectory information into the video generation process. Importantly, our approach models trajectory attention as an auxiliary branch alongside traditional temporal attention. This design enables the original temporal attention and the trajectory attention to work in synergy, ensuring both\nprecise motion control and new content generation capability, which is critical when the trajectory is only partially available. Experiments on camera motion control for images and videos demonstrate significant improvements in precision and long-range consistency while maintaining high-quality generation. Furthermore, we show that our approach can be extended to other video motion control tasks, such as first-frame-guided video editing, where it excels in maintaining content consistency over large spatial and temporal ranges." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Trajectory attention", "video generation", "motion control" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4ad4590f425a441603a23ca0d6284af651253116.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1e0bf90f6d4060bddc070f7fbc08658252e3334c.zip" }, "title": { "value": "Trajectory attention for fine-grained video motion control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2z340YQdvJ
Revisiting the Relation Between Robustness and Universality
main
Withdraw
similarity;representational similarity;functional similarity;adversarial robustness;universality
interpretability and explainable AI
Laura Caspari;Max Klabunde;Florian Lemmerich
~Laura_Caspari1;~Max_Klabunde1;~Florian_Lemmerich2
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "We have discovered a bug that invalidates some of our observations. The manuscript needs to be revised before publication." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": { "value": "@misc{\ncaspari2024revisiting,\ntitle={Revisiting the Relation Between Robustness and Universality},\nauthor={Laura Caspari and Max Klabunde and Florian Lemmerich},\nyear={2024},\nurl={https://openreview.net/forum?id=2z340YQdvJ}\n}" }, "abstract": { "value": "The *modified universality hypothesis* proposed by Jones et al. (2022) suggests that adversarially robust models trained for a given task are highly similar. We revisit the hypothesis and test its generality. We find that predictive behavior does not converge with increasing robustness and thus is not universal. Further, with additional similarity measures, we uncover differences in the representations that were invisible with the measures used in prior work. While robust models tend to be more similar than standard models, robust models remain distinct in important aspects. Moreover, the importance of similarity measures when comparing representations is highlighted as the absolute level of similarity---and thus the assessment of universality---is heavily dependent on the measure used." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Laura_Caspari1", "~Max_Klabunde1", "~Florian_Lemmerich2" ] }, "authors": { "value": [ "Laura Caspari", "Max Klabunde", "Florian Lemmerich" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "similarity", "representational similarity", "functional similarity", "adversarial robustness", "universality" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "caspari|revisiting_the_relation_between_robustness_and_universality" }, "pdf": { "value": "/pdf/497b65e4946184545de6975f588c19b4ecc02175.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f6807554a963cf0837456f5520eea1e7a7e0e416.zip" }, "title": { "value": "Revisiting the Relation Between Robustness and Universality" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2zMHHZ569S
Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks
main
Active
vector compression;large-scale retrieval;neural compression;quantization
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;6;6;6;6
4;4;2;4;3
2;3;2;4;3
2;3;2;3;3
2;3;3;4;3
5.4
3.4
2.8
2.6
3
-0.375
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear authors & reviewers,\n\nThe reviews for the paper should be now visible to both authors and reviewers. The discussion is open until November 26 at 11:59pm AoE.\n\nYour AC" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "authors - reviewers discussion open until November 26 at 11:59pm AoE" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Figure 6 demonstrates the retrieval accuracy/efficiency trade-off, but only R@1 is considered. How would the QPS/task accuracy trade-off be affected if a re-rank stage is added to RQ and PQ with relaxed settings such as R@10?\n- Figure 4 only demonstrates the encoding/decoding speed of QINCov2. It is recommended to provide a more comprehensive comparison with QINCo, etc., similar to Table 3 in [1].\n- It is advised to add a latency comparison of the full retrieval pipeline with other methods.\n\n[1] Residual Quantization with Implicit Neural Codebooks" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method achieves state-of-the-art performance on several benchmarks\n- Extensive experiments demonstrate the effectiveness of each component." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a variant of QINCo which predicts codebooks per step according to the previous encode part. QINCov2 develops many tricks such as a better training procedure, beam search, etc., to improve its performance. Extensive experiments across multiple benchmark datasets demonstrate its superior performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The task scenarios are not convincing. Previous work shows that QINCo [1] has significantly lower encoding and decoding speeds than PQ and RQ, and there is no obvious improvement in the paper. 
Figure 6 also shows nearly an order of magnitude lower QPS than PQ/RQ in the low-recall region. The authors should provide more explanation of why improving accuracy at the cost of QPS is necessary.\n- A latency comparison with other methods is not included in the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. QINCO2’s use of beam search for vector encoding and codeword pre-selection represents a significant advancement over previous methods, optimizing both encoding time and quantization accuracy.\n2. The introduction of a fast, approximate decoder based on codeword pairs offers a novel solution to the computational challenges of large-scale vector search, enhancing speed without a major sacrifice in accuracy.\n3. The paper conducts thorough empirical evaluations across multiple datasets, showing substantial reductions in mean squared error (MSE) for vector compression and improvements in search accuracy compared to the original QINCO and other state-of-the-art models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents QINCO2, an advanced method for vector compression and large-scale nearest neighbor search, building on the QINCO framework. QINCO2 introduces several key enhancements to improve the efficiency and accuracy of vector quantization, including: (i) codeword pre-selection and beam search, which improve encoding precision without exhaustively evaluating all codebook options; (ii) an approximate decoder based on codeword pairs; (iii) an optimized training approach. The paper validates QINCO2's performance on datasets such as BigANN and Deep1M, demonstrating substantial improvements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It would be beneficial to compare QINCO2 with other non-uniform quantization methods. Can QINCO or QINCO2 be extended to work with other large language models (LLMs), such as the LLaMA family?\n2. The inference time remains high, especially in large-scale applications.\n3. This method requires multiple heuristics and iterative steps to reach an optimal solution, which makes it appear more like a refinement than a groundbreaking improvement over QINCo.\n4. In line 205, you mention that \"$g$ uses the same architecture as $f$.\" Did you experiment with alternative architectures for $g$?\n5. In Figure 2, you note \"Keep A candidates for each beam.\" Did you consider keeping a single candidate set for multiple beams?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. A detailed description of an ANN use case that clearly benefits from QINCo2 would strengthen this paper. This paper currently shows that QINCo2 outperforms other quantizers at iso-bitrate in terms of quantization error, but pays more in terms of decoding cost. It could perhaps be argued that using other quantization methods to compress the vectors, and storing such compressed data on a cheaper storage medium (ex. flash) could perhaps beat QINCo2 in both storage cost and decoding cost. Quantifying whether or not this is the case would be very useful.\n1. Source code?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Figures are well-crafted and make the paper easy to understand\n1. Extensive empirical results that break down the effect on quantization quality and encode/decode time for each adjustment relative to QINCo" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "QINCo2 is a deep-learning based vector quantizer that improves off of QINCo. The basic idea of both is to extend the idea of residual quantization (RQ) via deep learning. RQ is a greedy approach that quantizes a vector by doing each successive codeword selection to minimize the assignment loss so far. The QINCo family of quantizers adds a neural network that adapts the current codeword depending on the quantized representation so far, i.e. if $\\hat{x}_i$ is the quantized representation of $x$ after $i$ codes, RQ does $\\hat{x}_i=\\hat{x}\\_{i-1}+c_i$ while QINCo does $\\hat{x}_i=\\hat{x}\\_{i-1}+f(c_i,\\hat{x}\\_{i-1})$ with learned $f$.\n\nThe main improvements from the original QINCo are:\n1. Faster encoding by leveraging a faster, approximate $f$ to generate initial quantization candidates, and only re-ranking the top candidates with the full $f$.\n1. Beam search during encoding, to make up for quality loss from approximate $f$ above.\n1. Slight tweaks to model architecture and training hyperparameters.\n1. Using a pairwise codebook procedure during decoding so that the vanilla additive decoder more closely resembles QINCo's implicit codebook results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of source code release: considering these are fairly small models trained on open datasets, releasing code for reproducibility shouldn't have been difficult.\n1. Limited novelty: this work only only suggests a minor change to the QINCo idea." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Proposed method significantly improves quantization error and retrieval accuracy\n- It is faster for retrieval tasks, which is important for industry scale applications" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "QINCO2 is an improved version of the original QINCO model for residual MCQ. It improves search efficiency in large datasets and reconstruction error. Both methods use neural network to dynamically adapt codebooks after each step of residual quantization. Instead of static codebook (conventional RQ), QINCO2 (and QINCO) uses neural network to adjust the codebook based on the current approximation and base codebook values. The network inputs the residual vector and partial reconstruction and produces centroids that more accurately encode the residuals. The original QINCO dramatically increased computational complexity of the quantization process and memory usage.\nQINCO2 improves encoding speed by introducing codeword pre-selection which narrows down the search of centroids. It uses another neural network of smaller parameters to calculate top $A$ candidates (among possible centroids) which is further used for adaptive quantization. Furthermore, QINCO2 applies beam search to improve quantization quality by exploring multiple encoding paths in parallel, which helps to minimize the quantization error and refine the encoded representation more accurately.\nTo address the high computational cost during decoding, QINCO2 introduces a pairwise additive decoder, which enables faster approximate decoding by combining pairs of codewords, effectively capturing dependencies between codewords" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The theoretical contribution is rather low. Authors mainly engineered existing methods together to improve inference of the model.\nThe paper is very hard to follow, it is not completely clear why introducing another neural network for pre-selection can speed it up (furthermore, increasing training training time)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tThe dataset names in Table 3 should be consistent with other results in Sec. 4.2, i.e., BigANN1M, Deep1M, Contriever1M, and FB-ssnpp1M.\n2.\tA little confused on the “2M successive least-squares problems” in RQ-based codebook approximation (mentioned in Sec. 4.3), as there are only M steps in RQ.\n3.\tThe R@10 and R@100 results of QINCo2 are not included in this paper, despite the authors' claim in Section 4.1 that recall percentages at ranks 1, 10, and 100 have all been considered." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe proposed method seems concise and effective, especially in speeding-up the QINCo encoding and searching process.\n2.\tThe pairwise additive decoding looks like an effective tool to create more accurate approximation of non-independent neural codebooks.\n3.\tThe experiments and analysis are quite extensive and the improvements are significant. \n4.\tThe paper is well-written and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper enhances QINCo in both the encoding and decoding processes. To tackle the significant complexity of encoding, the authors introduce codeword pre-selection and beam search strategies, which improve encoding efficiency and approximation capabilities. Additionally, to mitigate the limited search accuracy of the AQ decoder, the authors propose a fast approximate decoder based on pairwise additive code, which creates accurate shortlists for fast searching. Experimental results demonstrate that QINCo2 improves both efficiency and search accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tIn Table 3, “Improved Architecture” slightly improves the search accuracy on BigANN and Deep datasets with lower vector dimension. Since the performance of original QINCo is largely affected by the network scale, the question is whether the “Improved Architecture” in QINCo2 affects the performance by improving the network parameters. It is better to provide the comparison of parameters.\n2.\tCompared to the original QINCo, the “Improved Training” approach used in this paper incorporates more training samples. Results in Table 3 shows that the introduction of large training set brings limited performance improvement. With a fixed training epoch of 70 and the sequential acquisition of each 10M splits, wonder if the model achieves optimal convergence with such a large training set." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a new quantization process for the QINCo neural quantizer, combining beam search with fast approximation, alongside a improved search pipeline and model architecture, improving both accuracy and speed for large-scale retrieval." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024qinco,\ntitle={Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2zMHHZ569S},\nnote={under review}\n}" }, "abstract": { "value": "Vector quantization is a fundamental technique for compression and large-scale nearest neighbor search. For high-accuracy operating points, multi-codebook quantization associates data vectors with one element from each of multiple codebooks. An example is residual quantization (RQ), which iteratively quantizes the residual error of previous steps. Dependencies between the different parts of the code are, however, ignored in RQ, which leads to suboptimal rate-distortion performance. Qinco recently addressed this inefficiency by using a neural network to determine the quantization codebook in RQ based on the vector reconstruction from previous steps. In this paper we introduce Qinco2 which extends and improves Qinco with (i) improved vector encoding using codeword pre-selection and beam-search, (ii) a fast approximate decoder leveraging codeword pairs to establish accurate short-lists for search, and (iii) an optimized training procedure and network architecture. We conduct experiments on four datasets to evaluate Qinco2 for vector compression and billion-scale nearest neighbor search. We obtain outstanding results in both settings, improving the state-of-the-art reconstruction MSE by 44% for 16-byte vector compression on BigANN, and search accuracy by 24% with 8-byte encodings on Deep1M." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vector compression", "large-scale retrieval", "neural compression", "quantization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/52f6f45cd0a116f4a4cf90e00548a7ae5e55cc85.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Qinco2: Vector Compression and Search with Improved Implicit Neural Codebooks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2zmO1GVT0Y
NL-Eye: Abductive NLI For Images
main
Active
Benchmark;Multimodality;Abductive Reasoning;NLI;VLM
datasets and benchmarks
5;6;6;6;6
4;3;4;4;2
2;3;3;3;3
3;3;3;3;3
3;3;3;3;3
5.8
3.4
2.8
3
3
-0.375
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- For the evaluation with this benchmark, it would be beneficial to have better metrics. Are there methods to quantify image order sensitivity? Could metrics be developed to measure visual understanding and linguistic abstract reasoning capabilities using various forms of input (Text-only, Image-to-Text, Image and Text, etc.)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and easy to read.\n- The process of data collection and verification is systematic and meticulous.\n- It intriguingly points out the shortcomings of existing visual language models (VLMs) in visual abductive reasoning, with experimental results to substantiate this claim.\n- The paper proposes various experimental setups by combining or separating images, changing the order of images, which helps ensure fair testing.\n- The benchmark effectively reveals multiple shortcomings of different VLMs, not only evaluating abductive reasoning but also highlighting issues with image location sensitivity and poor visual interpretation.\n- Unlike traditional natural language inference (NLI) benchmarks, this approach offers a comprehensive evaluation of multiple aspects." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a benchmark for measuring visual abductive reasoning capability and explains the process of constructing this benchmark. It demonstrates that current multimodal language models lack visual abductive reasoning capability and introduces a novel aspect of verifying image-to-image entailment that has not been previously addressed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The evaluation criteria are unclear and not well-defined. The use of automatic evaluation for explanations seems inadequate, and manual evaluation, while more accurate, is too costly and varies depending on the person.\n- The definition of visual abductive reasoning capability remains unclear; it appears to evaluate abilities including visual interpretation, interpretation of multiple images, and natural language inference, covering a broad range of concepts that are not distinctly defined." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weakness 1-3. \nAlso just out of curiosity, why can't the setting in Table 3 solve this problem? E.g. How did GPT-4o fail the entailment upon the Figure 8 machine-generated captions?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Previous Visual entailment tasks were mainly in text format. This paper for the first time proposes the task in image formats, and collected a human-curated benchmark. The experiments show that current VLMs cannot do well on the NL-EYE.\nAlso, one experiment result saying that VLM prediction depends on hypothesis location is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new benchmark NL-EYE that is designed to assess VLMs’ visual abductive reasoning skills. NL-EYE adapts the abductive Natural Language Inference (NLI) task to the visual domain, requiring models to evaluate the plausibility of hypothesis images based on a premise image and explain their decisions. NL-EYE consists of 350 carefully curated triplet examples (1,050 images) spanning diverse reasoning categories: physical, functional, logical, emotional, cultural, and social. Experiments show that VLMs struggle significantly on NL-EYE, often performing at random baseline levels, while humans excel in both plausibility prediction and explanation quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is unclear whether the used prompt can best unleash VLMs' performance. For example, from Table 5, it seems no example has been provided, and that may lead to lower VLM performance. \n2. Why do human only achieve 83-85% accuracy if human collected the dataset and this dataset do not require expert knowledge? (Line 426-427) It is a bit confusing to understand.\n3. In Table 3, why not try GPT-4o as the Image-to-Text model? Also, why not try Claude models as predictor?\n4. The images are generated instead of from real world, and could potentially affect the output. The test size is 350 which might be small." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Q1. It is unclear from the paper how the authors selected the concepts for each individual reasoning category. For example, in the Cultural Reasoning category, which cultures were represented in the generated image. 
Since image generation models are also weak at cultural content generation, and the VLMs perform better on cultural NLI, it would be interesting to know which cultures are most represented in the data, in order to assess the comprehensiveness of the test set.\n\nQ2. The current VLM prompt asks for the plausible answer first and then asks for an explanation. It would be interesting to reverse this process (i.e., explain each image step by step and then conclude the plausible answer) and see how the VLMs react.\n\nQ3. In Tables 2 and 3, LLaVA 1.6 performs better at predicting the plausible image using GPT-4 when converting image to text (Table 3) than when directly inputting images (Table 2). Could this difference be due to LLaVA’s limitations as a predictor, or is the prompt structure (e.g., asking for image descriptions first before selecting a plausible answer) affecting performance?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper demonstrates a way to measure VLMs' abductive reasoning ability across six diverse reasoning categories using multiple images.\n2. The experiments shown in the paper comprehensively evaluate the reasoning ability of VLMs by checking image-order sensitivity and exploring different input formats to isolate the reasoning gap of existing VLMs.\n3. The analysis section is interesting. The breakdown of performance across reasoning categories and the underlying insights will be useful for the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents NL-EYE, a benchmark to evaluate VLMs' visual abductive reasoning skills across six reasoning types: physical, functional, logical, emotional, cultural, and social. It includes 350 triplet examples (1,050 images) with temporal annotations indicating event sequence and duration. The study examines model robustness, considering hypothesis order sensitivity and different input formats (individual versus composite images). NL-EYE also assesses models' ability to score single hypotheses, addressing real-world scenarios where multiple alternatives may not be available." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The prompt selection is under-explored.\n2. More details are given in the Questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It would be nice to be able to determine whether the failure this benchmark exposes is an out-of-domain problem on the language model side or a limitation of the visual encoder itself. 
If we split the benchmark data into training and test portions, and models fine-tuned on the training split improved on the remaining test split, then we could conclude that the main problem is that the task is out-of-distribution rather than a lack of visual perception ability, since most current visual-language models are trained with a frozen visual encoder. Have you done any further experiments to see if this limitation on visual reasoning can be improved with some training or not?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This work is distinct from existing multi-image benchmarks in that all the information required to perform reasoning is provided solely through visual perception. NLI-inspired benchmarks that require visual reasoning over multiple images already exist, such as [1], but they are limited for evaluating purely visual perception, as they require reasoning over a given natural language premise. NL-Eye, in contrast, uniquely requires reasoning from pure visual perception, since the premises are provided as images.\n\n[1] A Corpus for Reasoning About Natural Language Grounded in Photographs (Suhr et al., 2019)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes the NL-Eye benchmark to evaluate the abductive reasoning ability of visual-language models from pure visual perception in multi-image situations, inspired by Natural Language Inference (NLI). The benchmark consists of test examples covering temporal reasoning along with six reasoning categories, with images obtained from a text-to-image generation model. The authors argue that current visual-language models show significantly inferior performance compared to humans on abductive reasoning in multi-image situations and claim that this is due to a lack of purely visual perception ability in the representations compressed by the visual perception modules." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There is a lack of consideration in the experiments as to whether a proper evaluation of current visual-language models can be made in a multi-image setting. As the authors argue, current benchmarks for testing abductive reasoning are single-image focused, but it should not be overlooked that research on visual-language models themselves is also focused on single images. As a result, the authors provide “concatenated” images, which may not be a fair assessment for most visual-language models that currently operate at fixed, square resolutions. To demonstrate the need for the proposed benchmark, it is necessary to check whether the same phenomenon is found in visual-language models that can handle flexible resolutions and aspect ratios, like [1].\n\n[1] LLaVA-UHD: an LMM Perceiving any Aspect Ratio and High-Resolution Images (Guo et al., 2024)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "## Question\n1. Have the authors conducted experiments with VLMs that trained on datasets including multiple images such as LLaVA-Onevision or VILA, and with VLMs that use other visual encoders like Cambrian-1?\n\n## Typo\n* L260 Validation,and Categorization -> Validation and Categorization\n\n---\n### References\n* Lin, Ji, et al. Vila: On pre-training for visual language models. CVPR 2024\n* Li, Bo, et al. Llava-onevision: Easy visual task transfer. https://llava-vl.github.io/blog/2024-08-05-llava-onevision/\n* Tong, Shengbang, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. Neurips 2024" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The benchmark is well-designed with diverse reasoning categories.\n2. Experiments on the benchmark reveal interesting findings.\n3. The analysis is thorough and highlights notable insights into VLM limitations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces NL-EYE, a benchmark designed to test the abductive reasoning capabilities of Visual Language Models (VLMs) through image-based tasks. The benchmark includes 350 carefully curated triplet examples spanning diverse reasoning categories where models must choose the more plausible hypothesis from a set and provide an explanation. Experiments reveal that while humans excel in this task, current VLMs show notable deficiencies in their reasoning capabilities. The authors conclude that VLMs face significant challenges in visual interpretation, which impacts their ability to reason effectively about images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While I agree that the benchmark is carefully curated, the filtering condition can be inconsistent and subjective because it is done manually. \n2. This paper focuses primarily on evaluating VLMs' deficiencies but lacks discussion on strategies or methods to improve these models' abductive reasoning capabilities.\n3. The paper lacks experiments with additional open-source models. While the current model selection is valid, given the paper's findings about failures in visual interpretation and hypothesis location dependency, testing VLMs with different visual encoders or those trained on multi-image datasets would further support the analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024nleye,\ntitle={{NL}-Eye: Abductive {NLI} For Images},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2zmO1GVT0Y},\nnote={under review}\n}" }, "abstract": { "value": "Will a Visual Language Model (VLM)-based bot warn us about slipping if it detects a wet floor? Recent VLMs have demonstrated impressive capabilities, yet their ability to infer outcomes and causes remains underexplored. To address this, we introduce NL-Eye, a benchmark designed to assess VLMs' visual abductive reasoning skills. 
NL-Eye adapts the abductive Natural Language Inference (NLI) task to the visual domain, requiring models to evaluate the plausibility of hypothesis images based on a premise image and explain their decisions. NL-Eye consists of 350 carefully curated triplet examples (1,050 images) spanning diverse reasoning categories: physical, functional, logical, emotional, cultural, and social. The data curation process involved two steps—writing textual descriptions and generating images using text-to-image models, both requiring substantial human involvement to ensure high-quality and challenging scenes. Our experiments show that VLMs struggle significantly on NL-Eye, often performing at random baseline levels, while humans excel in both plausibility prediction and explanation quality. This demonstrates a deficiency in the abductive reasoning capabilities of modern VLMs. NL-Eye represents a crucial step toward developing VLMs capable of robust multimodal reasoning for real-world applications, including accident-prevention bots and generated video verification." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Benchmark", "Multimodality", "Abductive Reasoning", "NLI", "VLM" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0f7b789b7fe82c21fe4b85b4e0712757b69a80cb.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "NL-Eye: Abductive NLI For Images" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
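One review above suggests reversing the prompt order (explain first, then predict). For concreteness, the two orderings could look like the following; the wording is hypothetical and is not the benchmark's actual prompt (which the reviews reference via the paper's Table 5):

```python
# Hypothetical prompt wordings for the two orderings; illustration only.
PREDICT_THEN_EXPLAIN = (
    "You are given a premise image and two hypothesis images. "
    "First state which hypothesis (1 or 2) is more plausible, "
    "then explain your decision."
)

EXPLAIN_THEN_PREDICT = (
    "You are given a premise image and two hypothesis images. "
    "Describe each image, then reason step by step about how each "
    "hypothesis could follow from the premise. Only after that, "
    "state which hypothesis (1 or 2) is more plausible."
)
```

Comparing accuracy under the two orderings would help separate failures of visual interpretation (the descriptions are already wrong) from failures of abductive reasoning (the descriptions are right but the conclusion is not), which is the distinction several of the reviews probe.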
30FCIyWWSU
DeNVeR: Deformable Neural Vessel Representations for Unsupervised Video Vessel Segmentation
main
Withdraw
Video vessel segmentation;Unsupervised learning;X-ray angiography videos dataset
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Chun-Hung Wu;Shih-Hong Chen;Chih Yao Hu;Hsin-Yu Wu;Kai-Hsin Chen;Yuyou-chen;Chih-Hai Su;Chih-Kuo Lee;Yu-Lun Liu
~Chun-Hung_Wu1;~Shih-Hong_Chen1;~Chih_Yao_Hu1;~Hsin-Yu_Wu1;~Kai-Hsin_Chen1;~Yuyou-chen1;~Chih-Hai_Su1;~Chih-Kuo_Lee1;~Yu-Lun_Liu2
3;3;5;6
5;4;4;4
2;2;3;3
2;1;3;3
2;2;2;3
4.25
4.25
2.5
2.25
2.25
-0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Have the authors used their method to train a general representation and then fine-tuned it on the labels they have? I think this would be an interesting baseline, and if successful, it would strengthen the method, showing that pretraining helps." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The method efficiently combines modern techniques for the complex task of vessel segmentation in videos. These include test-time training, multiple losses, and Eulerian motion fields. The authors clearly demonstrate in an ablation study how each component contributes to the overall performance gain. \n\n- Existing unsupervised approaches are outperformed.\n\n- The authors provide source code, which is a plus for reproducibility. However, the codebase is nested and not well-documented, making a reproducibility check challenging within a reasonable time frame, which led me not to run the code myself. Overall, this is still a positive point." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this submission, the authors present “Deformable Neural Vessel Representations,” a highly specialized method for vascular segmentation in X-ray angiography videos. The proposed method is an unsupervised approach that uses “a novel layer separation bootstrapping technique, a parallel vessel motion loss, and the integration of Eulerian motion fields for modeling complex vessel dynamics” (L 16-18). The method outperforms other unsupervised approaches in the segmentation task but does not outperform a simple supervised U-Net baseline.\n\nExperiments are conducted on a single dataset named XACV, which is newly released with the submission." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In my opinion, the work at hand is not a perfect conference fit due to its heavily applied nature on a very specific topic: unsupervised vessel segmentation in X-ray videos. Submission of this nice work to a dedicated medical image analysis conference could reach an audience which is more familiar and interested in this work. \n\n- Experimentation. The method is evaluated on a single dataset, which is also newly proposed. However, this raises a question: For this very specific task, where the XACV dataset now exists, why do we need an unsupervised method when a supervised method performs better, and annotation could be done in reasonable time?\n\n- Topological metrics are very important for evaluating the faithfulness of vessel segmentation; I suggest adding metrics such as Betti errors to the evaluation table and discussing the results in this regard.\n\n- Hyperparameter selection. What was the range of hyperparameters tested, and how much time or resources were used for tuning? How were the hyperparameters for the four baseline methods specifically chosen? I believe clearly describing the hyperparameter search is essential for reproducibility. For example these additional results could be presented in additional tables." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Fig. 2, why implicit neural representation (MLP) was used in stage 1 to fit the canonical background, whereas DIP was used in stage 2 to fit the canonical foreground? Why not using the same method or the other way around? What's the motivation here?\n- In Fig. 3, the background motion should include both heartbeat and breathing motion. Shouldn't these two motion patterns be separated before used to warp the vessel Eulerian motion?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Unsupervised coronary vessel segmentation in X-ray videos is an underexplored field, and the proposed method, showing descent performance, is a valuable contribution.\n- The method is well designed and clearly presented. \n- Extensive experiments and ablation analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a fully unsupervised learning method for coronary vessel segmentation in X-ray videos. It achieved this by using layer separation, which takes advantage of different motion patterns in the vessel layer (foreground) and the rest structures (background) and across-frame consistency of their appearance. It also employed a test-time training method to address the high variability in medical imaging data. 
Overall, since unsupervised coronary vessel segmentation in X-ray videos is an underexplored field, the proposed method, showing decent performance, is a valuable contribution. In addition, this paper also contributes the first X-ray coronary angiography video dataset with fine labels, which is a valuable resource for the field." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Figs 7 and 8 show relatively easy scenarios for coronary vessel segmentation, where there are few interfering objects such as ribs, catheters, and surgical wires. Authors may want to show more challenging cases.\n- Small vessels are not being well segmented in Figs 7 and 8, and there are also broken vessel segmentations. Where is the bottleneck? In other words, which module(s) are responsible for the false negatives here?\n- Authors may consider showing more intermediate results (e.g. input/output of each module/step) to help readers better understand where the strengths and weaknesses of the design are. \n- There is a trend of using foundation models or pre-trained large models to tackle small-dataset supervised or unsupervised segmentation problems. I think including such a baseline is important in evaluating the contribution of this work.\n- Authors may also want to report how accurate the segmentation boundary is (e.g. Hausdorff distance), as boundary accuracy is essential for downstream tasks such as FFR calculation.\n- Numerous losses are summed with weights during training. How sensitive is the model performance to the choice of weights?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The loss function seems to contain a writing mistake; the prediction of the foreground should be encouraged to be as close to one as possible." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors have a rich theoretical background and solid fundamentals. The paper achieves unsupervised segmentation by drawing on many existing methods and designing a fitting architecture. The design of the losses in particular involved substantial work, and the performance in the experiments seems good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes DeNVeR (Deformable Neural Vessel Representations). The method utilizes optical flow and layer separation techniques, enhancing segmentation accuracy, and adjusts during test time, improving adaptability and ensuring consistent results across cardiac conditions. During training, the method leverages the full temporal information of the videos, eliminating the need for annotated training datasets."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1、The baseline lacks the unsupervised model to compare.\n2、The paper need explain the reason for guidance, such that significant of optical flow, latent code, etc.\n3、The paper add one group experiment for unsupervised image segmentation not vedio to prove the effect of model in single image.\n4、The paper seems like the integration of all kinds of method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How this would help the CV and AI community?\n\nIs this method overfit and specific to this domain application?\n\nWhat would be the limitation of the method?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "DeNVeR operates without annotated training datasets, using an unsupervised learning approach that takes advantage of the complete temporal information available in X-ray video data. This enables effective vessel segmentation directly from the video sequence.\n\nBy employing optical flow analysis and an innovative layer separation strategy, DeNVeR refines segmentation results dynamically at test time, achieving decent adaptability and consistent performance across various cardiac conditions.\n\nXACV is claimed to be the first coronary angiography video dataset with high-quality, manually labeled segmentation ground truth. XACV sets a new benchmark for training and evaluating video vessel segmentation models, fully leveraging video-based temporal data for improved segmentation fidelity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces DeNVeR, an unsupervised approach for segmenting cardiac vessels in X-ray angiography videos without requiring annotated datasets. By leveraging temporal information in video data, DeNVeR uses optical flow and a layer separation technique to enhance segmentation accuracy and adaptability at test time, ensuring consistent performance across varied cardiac conditions. The authors creat the XACV dataset—claimed to be the first X-ray angiography coronary video dataset with high-quality, manually labeled ground truth. DeNVeR outperforms baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Its broader implications for the ICLR community is unclear, especially how this could benefit the general computer vision and machine learning community.\n\nThe introduction of the XACV dataset is valuable, but it also highlights the niche focus of the work. 
This suggests the research might be limited to a small community, with little wider adoption in general CV and AI research.\n\nThe approach, while powerful, may be overly complex for the specific problem domain without demonstrated flexibility across different datasets or applications. To establish robustness, an evaluation of DeNVeR on broader computer vision tasks could show its adaptability. \n\n\"There is no free lunch\": it is not clear what the limitations of the proposed method would be, especially without using manual annotation. \n\nHow would clinicians know the uncertainty and trustworthiness of the results?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nwu2024denver,\ntitle={De{NV}eR: Deformable Neural Vessel Representations for Unsupervised Video Vessel Segmentation},\nauthor={Chun-Hung Wu and Shih-Hong Chen and Chih Yao Hu and Hsin-Yu Wu and Kai-Hsin Chen and Yu-You Chen and Chih-Hai Su and Chih-Kuo Lee and Yu-Lun Liu},\nyear={2024},\nurl={https://openreview.net/forum?id=30FCIyWWSU}\n}" }, "abstract": { "value": "This paper presents **De**formable **N**eural **Ve**ssel **R**epresentations (DeNVeR), an unsupervised approach for vessel segmentation in X-ray angiography videos without annotated ground truth. DeNVeR utilizes optical flow and layer separation techniques, enhancing segmentation accuracy and adaptability through test-time training. Key contributions include a novel layer separation bootstrapping technique, a parallel vessel motion loss, and the integration of Eulerian motion fields for modeling complex vessel dynamics. A significant component of this research is the introduction of the XACV dataset, the first X-ray angiography coronary video dataset with high-quality, manually labeled segmentation ground truth. Extensive evaluations on both XACV and CADICA datasets demonstrate that DeNVeR outperforms current state-of-the-art methods in vessel segmentation accuracy and generalization capability while maintaining temporal coherency. This work advances medical imaging by providing a robust, data-efficient tool for vessel segmentation. It sets a new standard for video-based vessel segmentation research, offering greater flexibility and potential for clinical applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Chun-Hung_Wu1", "~Shih-Hong_Chen1", "~Chih_Yao_Hu1", "~Hsin-Yu_Wu1", "~Kai-Hsin_Chen1", "~Yu-You_Chen1", "~Chih-Hai_Su1", "~Chih-Kuo_Lee1", "~Yu-Lun_Liu2" ] }, "authors": { "value": [ "Chun-Hung Wu", "Shih-Hong Chen", "Chih Yao Hu", "Hsin-Yu Wu", "Kai-Hsin Chen", "Yu-You Chen", "Chih-Hai Su", "Chih-Kuo Lee", "Yu-Lun Liu" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video vessel segmentation", "Unsupervised learning", "X-ray angiography videos dataset" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": { "value": "wu|denver_deformable_neural_vessel_representations_for_unsupervised_video_vessel_segmentation" }, "pdf": { "value": "/pdf/23a8772b04b4414d5bc808c5f62ee24b65caeada.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/57a0bdf313fc53ffcd38a983725a935007cd9462.zip" }, "title": { "value": "DeNVeR: Deformable Neural Vessel Representations for Unsupervised Video Vessel Segmentation" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
30SmPrfBMA
GCML: Grounding Complex Motions using Large Language Model in 3D Scenes
main
Active
human-scene interaction;human motion generation;large language model;3d visual grounding
generative models
3;5;5;6
5;4;3;4
2;3;3;3
2;3;2;3
2;3;2;2
4.75
4
2.75
2.5
2.25
-0.648886
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I am interested in the generation time for a sequence and how time is distributed across the modules. If the process proves quick, it could be a valuable tool for artists in their creative workflows." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The framework's ability to generate complex, long-term human motions from scene and textual inputs could significantly benefit industries such as animation, gaming, etc.\nThe integration of LLMs and a 3D Visual Grounding Model automates the process of long-term human-scene interaction, potentially saving human efforts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces GCML (Grounding Complex Motions using a Large Language Model), a framework for generating human interactions from textual and scene inputs. It combines technologies like GPT-4, OpenScene, and OmniControl to create an automated system for synthesizing long-term, complex human motions. A new evaluation set demonstrates the method's performance compared to existing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Several key related works should be discussed, including \"Synthesizing Long-Term 3D Human Motion and Interaction in 3D\" from CVPR 2021, which decomposes long-term human-scene interaction synthesis into subtasks of body generation and motion in-betweening. Also, \"GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping\" from CVPR 2022. It deals with whole-body motion synthesis involving hand-object interactions which I think is not solved very well in this paper. (could be a limitation) and \"Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents\" in ICML 2022, which shares similar concepts and outcomes even though it doesn’t directly generate human motion. \n\nThe quality of the generated motions remain unnatural, particularly at the junctions of sub-motion clips, which are noticeably disjointed. Could the authors consider using or referencing more state-of-the-art motion in-betweening methods, such as those discussed in \"Flexible Motion In-betweening with Diffusion Models\" in SIGGRAPH ASIA 2024, to enhance the naturalness of the generated motions?\n\nThere are issues with the notation used in the paper, such as the inconsistent use of the symbol 'N' in Lines 236 and 237 to represent both 'N points' and 'N frames', which should be distinctively defined to avoid confusion." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The top down angle in the visual results makes it difficult to see the motion quality. It would also be nice to provide more visual examples showcasing the capability of the system.\n- Are the generated subtask programs by LLM in Fig. 3 fully directly used to call the functions? E.g., avoidance_map, specify_joint_position, generate_motion. Would there be any errors or bugs in LLM’s generated programs? If so, how does the system handle them?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The LLM-based approach is sensible, which decomposes complex motion tasks into simpler ones and makes the whole task much more manageable.\n- The method can take pretty ambiguous prompts like “a person feels hungry”, and generate a sequence of plausible motions, which is impressive.\n- The proposed Complex Motion Evalution set demonstrates the advantage of the proposed method, and the dataset itself can be a good addition to advance research in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces GCML, a framework designed to generate complex human motions in 3D scenes based on textual descriptions and scene context. The method is motivated by two key challenges in Human-Scene Interaction (HSI): the lack of diverse, high-quality paired datasets for complex actions and the limitations of existing models that primarily generate simple motions. GCML leverages a Large Language Model (LLM) to decompose complex tasks into simpler subtasks and uses a 3D Visual Grounding Model to identify interaction targets and obstacles within the scene. It then synthesizes full-body motion sequences that align with both the user's input and the scene's semantics. The paper's main contributions include the introduction of a new task and evaluation set for complex motion generation, outperforming existing methods in generating intricate, realistic motions. GCML demonstrates competitive performance on simple tasks and significantly excels on its proposed Complex Motion Evaluation Set." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- One main weakness is the novelty of the visual grounding part & motion generation parts of the framework, which is similar to [1] published at CVPR 2024. [1] also VLMs to ground target objects and generation motion based on it. That said, the LLM decomposition part still has its novelty, although subtask planning using LLMs is quite common.\n- The generated motion has sudden jitter (e.g., 00:18-00:25 in the video), which is undesirable for real-world applications.\n- The writing of the paper also needs improvement. Eq 2 is not well explained. What is d? 
And how is this objective optimized?\n\n[1] Cen, Zhi, et al. \"Generating Human Motion in 3D Scenes from Text Descriptions.\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why is the cost map set to a resolution of 100x100x100? This resolution may be sufficient for the tabletop object grasping scenario in VoxPoser (Huang et al., 2023b). However, indoor rooms typically have much larger scales, and a resolution of 100x100x100 can result in too coarse a voxelization that cannot accurately represent the environment, especially for fine-grained object interactions. This coarseness could potentially contribute to the human-scene penetrations observed in the video results.\n\n2. If the output in L306 is the full-body 22-joint trajectory as stated, I would appreciate a visualization of this intermediate result and of how it differs from the final generation of OmniControl." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method is training-free and can be applied directly to given 3D scenes. It leverages GPT-4 for task decomposition, a pretrained OpenScene (Peng et al., 2023) model for object grounding, and a pretrained motion generation model, OmniControl (Xie et al., 2023). All modules are readily available for immediate use.\n\n2. The subtask executor considers the interaction between the human and the scene, encouraging the human to reach the goal location and avoid colliding with obstacles using the target map and avoidance map.\n\n3. Experiments show the proposed method outperforms two baseline methods in scene-text conditioned motion generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to generate human motions in 3D scenes from natural language prompts. The language prompt is first decomposed into a sequence of simple atomic actions using GPT-4, and then each simple action is processed by the subtask executor to get the joint trajectories. Finally, a pretrained motion generation model from OmniControl (Xie et al., 2023) yields the final human motion conditioned on the decomposed action description and joint trajectories. The authors conducted experiments to show the proposed method outperforms two baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The presented results cannot support the central claim of generating human-scene interactions, such as mopping the floor (L40), brushing teeth, and watering plants (L123). These interaction examples are not presented in the submission.
According to the presented results in the first supplementary video, there is no real interaction between the human and scene objects. In the presented example of washing dishes, the person does not really have contact with the dishes and just randomly waves hands in the air.\n\n2. The generated motion quality is far from satisfactory. There are many human-scene penetrations in the presented video results, e.g., the sequence labelled as 'sit on the toilet'. Foot skating and jittering artifacts are obvious in all non-walking sequences. The results in the Complex Motion Evaluation Set even show weird, twisted bodies. The presented motion quality is far from being useful for real applications. I recommend the authors aim for motion quality at least on par with TRUMANS (Jiang et al., 2024), Object Motion Guided Human Motion Synthesis (Li et al., 2023), and Human-Object Interaction from Human-Level Instructions (Wu et al., 2024).\n\n3. Many important technical details are missing, especially for the subtask executor. The missing information includes: the prompts used for the task planner; how the initial human location in the scene is determined; what code examples are provided to GPT for the Language Model Program (LMP); how the target map and avoidance map are built; and how the N-frame 22-joint trajectory in L306 is obtained from the LMP and how the minimization in equation 2 is solved (I also question whether the output is a single joint trajectory, as visualized in the generated trajectory in Figure 3, or the full-body 22-joint trajectory as stated in L306). \n\n4. With the limited information presented, the planner and subtask executor are very similar to the method proposed in VoxPoser (Huang et al., 2023b), with an LLM-based decomposition planner, a vision-language model for scene grounding, output Python programs that build voxel value maps, and trajectory synthesis given the voxel value maps. Further clarification of the distinction between the proposed method and VoxPoser is needed.\n\n5. Although the subtask executor takes targets and obstacles into consideration, the subsequent motion generation by OmniControl is scene-agnostic, which is a source of artifacts like scene penetration.\n\n6. The visualization view in the video results is not informative enough. In the first video, most human bodies are occluded by the furniture, hiding the skating and jittering artifacts. The top-down view of the other videos also has scene or self-occlusion problems; I would suggest adding one more side-view visualization." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1: What’s the difference between your proposed approach and [1][2]? \n\n2: What are the limitations and failure cases of this paper?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1: This paper addresses an important research question. I agree that generating complex full-body motions in various realistic scenarios is crucial for both the computer graphics and robotics research communities. \n\n2: I like the idea of breaking down complex motion generations into several simpler problems using a task planner with large language models. \n\n3: The paper is well-written, with high-quality and easy-to-understand figures. \n\n4: The authors compare the proposed approach to several baselines through extensive experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an approach to generate complex motions using a Large Language Model based on input consisting of a language goal and a complex scene. By incorporating a task planner using an LLM, the proposed approach can decompose the complex action sequences into several simple actions and then solve these simple action generations separately. By combining these simple action sequences, the approach can achieve diverse complex tasks involving full-body motions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1: My first concern is the limited details provided in this paper. For example, there is no information about the prompts used for the large language model and vision language model. I would expect to see these details, at least in the supplemental material. \n\n2: This paper does not discuss its limitations. Could your proposed approach be applied to real-world robots? Would it be robust to sensor noise and ambiguous language goals? Though answering these questions would offer more insights, I would encourage the authors to thoroughly investigate the limitations of their paper. \n\n3: This paper also does not have a discussion of failure cases. Under what conditions would your proposed approach fail? In Table 2, your approach sometimes performs worse than the bassline (Afford-motion). What analysis can explain this result? \n\n4: This paper misses some important related works on generating task planners with large language models and 3D visual grounding, such as [1,2]. \n\nReferences: \n\n[1]: Y. Huang, C. Agia, J. Wu, T. Hermans, and J. Bohg. Points2Plans: From Point Clouds to Long-Horizon Plans with Composable Relational Dynamics, ArXiv, 2024. \n\n[2]: K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2motion: From natural language instructions to feasible plans. Autonomous Robots, 47(8):1345–1365, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024gcml,\ntitle={{GCML}: Grounding Complex Motions using Large Language Model in 3D Scenes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=30SmPrfBMA},\nnote={under review}\n}" }, "abstract": { "value": "To solve the problem of generating complex motions, we introduce GCML (Grounding Complex Motions using Large Language Model). This method supports complex texts and scenes as inputs, such as mopping the floor in a cluttered room. Such everyday actions are challenging for current motion generation models for two main reasons. 
First, such complex actions are rarely found in existing HSI datasets, which places high demands on the generalization capabilities of current data-driven models. Second, these actions are composed of multiple stages, with considerable variation between them, making it difficult for models to understand and generate the appropriate motions. Current methods in the HSI field can control the generation of simple actions under multiple constraints, such as walking joyfully toward a door, but they cannot handle the complexity of tasks like the one described above. By incorporating a Large Language Model and a 3D Visual Grounding Model into the HSI domain, our approach can decompose complex user prompts into a sequence of simpler subtasks and identify interaction targets and obstacles within the scene. Based on these subtask descriptions and spatial control information, the Motion Generation Model generates a sequence of full-body motions, which are then combined into a long motion sequence that aligns with both the user's input and the scene semantics. Experimental results demonstrate that our method achieves competitive performance for simple action generation on the HUMANISE dataset and the generalization evaluation set. For complex motion generation, we created a new evaluation set by automatically generating possible behaviors of virtual humans in common indoor scenes, where our method significantly outperforms existing approaches. Project Page: https://anonymous.4open.science/w/GCML-4562/" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "human-scene interaction", "human motion generation", "large language model", "3d visual grounding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/61dae648c0ef825a9c3e5e659811d5ef4d84b1a5.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "GCML: Grounding Complex Motions using Large Language Model in 3D Scenes" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
30oIfmrcFO
Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning
main
Active
LLMs;Representation Learning;Reasoning
foundation or frontier models, including LLMs
5;5;6;8
4;5;3;3
2;2;3;3
2;4;3;3
3;1;3;3
6
3.75
2.5
3
2.5
-0.738549
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Same as the weaknesses section." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents a novel regularization technique that improves the model's performance in several reasoning tasks\n- The paper presents detailed analysis of the experimental results, showcasing how exactly the regularization techniques affects the diversity of representations, the learning dynamics, as well as the digit-by-digit accuracy on multiplication tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a regularization technique for preventing representation collapse across the intermediate representations of a deep sequence model. Their results show that 1. the regularization technique increases matrix entropy (low matrix entropy = representation collapse) and 2. when pause tokens are added the language model significantly improved in performance for 4x4 and 5x5 arithmetic tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The effect of the regularization technique was only studied for a relatively narrow domain of tasks, and it would be interesting to understand its effect on more general language benchmarks as well.\n- Slightly more contextualization on how exactly pause tokens are incorporated would assist readers in understanding the work more easily as it is also a core part of what is being proposed in this work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "A discussion that clarified the improvement versus CoT would improve the significance, whether clearly establishing the speedup with Seq-VCR or showing its better generalization/scaling." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The details of the method to seem to be heavily inspired by VICReg, but so far as I can judge, the application of it to the sequence/Transformer is original. 
The method is, in theory, computationally attractive compared to CoT, and the results are fairly compelling. \n\nThe paper is clearly written and the quality of the presentation is moderately high." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the performance of decoder-only models on tasks such as multi-digit mathematical reasoning that require a series of intermediate representations. They hypothesize representation collapse of intermediate layers as a key contributor to this poor performance, preventing effective storage of the intermediate steps necessary for these kinds of tasks. While chain of thought reasoning can be effective in counteracting this collapse and performing well on such tasks, the proposed approach seeks to increase entropy among intermediate layers and achieve similar performance at a reduced computational cost. Formulated in terms of alpha-order matrix-based entropy, they formulate a regularization term which aims at increasing variance and decreasing covariance in the intermediate representations. Additionally, pause tokens are included in the method. Results on three kinds of tests are presented: computing arithmetic expressions, identifying the longest increasing integer subsequence, and performing multiplication of 4-digit or 5-digit numbers. The regularization term combined with pause tokens leads to performance which approaches chain of thought on most tests, and regularization performs well on its own for the simpler tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The result in Table 1 naturally provokes a question: This and several previous studies show that GPT-2 with CoT performs remarkably well, but this is actually more difficult to achieve in larger models. What is the evidence/argument that the Seq-VCR approach will scale better with model size than CoT? Figure 8 hints at this but it doesn’t clearly address it.\n\nThe speedup vs CoT is intuitively reasonable but it would have been nice to see performance numbers as in the cited Deng 2024 paper.\n\nSimilarly, it would be helpful to understand the amount of hyperparameter optimization necessary for, e.g., identifying the number of pause tokens used to obtain the best results. Does the number of pause tokens necessary correlate with, e.g., task complexity?\n\nFor completeness, it would be nice to see CoT in figures 7 and 8." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "(1) Pause tokens are a crucial part of the author's technique, but at no point do the authors describe where, and how, the pause tokens are added to the input. \n\n(2) Representation collapse supposedly happens in the intermediate layers of the transformer, and yet the Lseq-VCR loss term is only applied to the final layer (Line 225).
Shouldn't it be applied to the intermediate layers, where you measure the entropy? Why not?\n\n(3) Equation (3) introduces $\\lambda_1$ and $\\lambda_2$ as hyperparameters, but the paper fails to say what they are set to. \n\n(4) What batch size is used for computing the covariance matrix?\n\n(5) Equation 3 computes the covariance matrix only across the batch dimension. Why? In a transformer, you could potentially use the length dimension as well, which would drastically increase the effective batch size. Did you do an ablation study which showed that to be ineffective?\n\n(6) How is the projection layer $f_{proj}$ trained?\n\n(7) For GPT2 on multiplication, you fine-tune a pre-trained GPT2 model, despite the fact that the pre-trained GPT2 has no real multiplication ability to start with. Why bother with a pre-trained model, instead of just training from scratch, as you do with minGPT on arithmetic expressions?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The application of VICReg to language models is novel as far as I know, and the experimental results are very compelling. This could potentially be a high-impact paper in improving the reasoning capabilities of LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Background: variance-covariance regularization (VICReg/VCReg) is a technique that was pioneered in vision models. Given a batch of inputs, the technique uses an NN to encode those inputs to a batch of embedding vectors, and then computes a covariance matrix for the embedding vectors. It introduces two losses based the covariance matrix: (a) the variance loss ensures that every dimension of the embedding vector has different values, distributed across the batch, and (b) the covariance loss ensures that different dimensions are not correlated. In vision models, these two losses guard against representational collapse. \n\nThe authors of this paper adapt VICReg from the vision domain to transformer-based language models. They show that when combined with pause tokens, VICReg (now renamed to Seq-VCR) produces large improvements in several tasks that LLMs are usually very bad on -- multidigit arithmetic, arithmetic expressions, and a longest-increasing-subsequence task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, the paper itself is hastily and sloppily written, and difficult to follow in places. I had numerous questions when reading it that are not addressed by the text. The current draft does not contain all of the information necessary to replicate these experiments, and does not discuss many of the crucial design decisions. The authors claim that there is an \"Appendix A\", but neglected to provide any supplementary material. See \"questions\" section below for specific questions.\n\nOne of the author's central claims is that transformers suffer from representational collapse, but I do not think that they adequately make that point based on the experimental evidence. There are only two entropy charts in Figure 2, which cover only two narrow tasks. On one of those charts (a) the collapse seems minimal at best, while on the other (b) the addition of pause tokens (the second key technique that the authors propose) actually increases collapse, rather than decreasing it. 
I would need to see a much larger set of studies, over a variety of different tasks, including general language modeling tasks (translation etc.), to fully buy the authors' argument about collapse. If the authors did such a study, however, it would be a significant breakthrough.\n\nSimilarly, I would like to know what the effects of VICReg are on more general language modeling tasks. If the technique helps the model multiply 5-digit numbers after fine-tuning, but otherwise degrades performance on most other language modeling tasks, then the technique is useless. Because the authors do not perform this ablation, it is impossible for me to evaluate whether this is a high-impact advance over SOTA, or a trivial result.\n\nFinally, the use of pause tokens is interesting, but also seems haphazard. The authors themselves admit that the number of pause tokens is task-specific. To employ this technique more widely, I would need to see a more comprehensive test of where, how many, and under what circumstances pause tokens should be added.\n\nMore specific criticisms:\n\nEquation (3) defines the Seq-VCR loss. The text of the paper claims that it is \"inspired by\" prior work, and cites such work appropriately, but it is more than just \"inspired\". Equation (3) is lifted almost verbatim from the original VICReg (Bardes 2021) and VCReg (Zhu 2023) papers, and the authors need to be crystal clear about the source of that equation. \n\n(As a minor nit, it is unclear to me whether or not the covariance term in equation (3) should have an additional 1/(d-1) factor; VICReg has the term, while VCReg does not. I would have appreciated it if the authors explained why they chose one version over the other.)\n\nFor further clarity, the authors should also devote a few lines to defining how the covariance matrix C is computed, as is done in other papers. Otherwise, it can easily be confused with the cross-correlation matrix of the Barlow twins technique, which the authors also cite as inspiration." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The identified representation collapse is quite interesting.\n\n2. The proposed method, including the Seq-VCR regularization loss and pause tokens, demonstrates novelty and effectiveness based on the experimental results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work identifies representation collapse in the intermediate layers of LLMs as a key factor limiting their arithmetic reasoning capabilities. The paper proposes sequential variance-covariance regularization (Seq-VCR). It then combines Seq-VCR with pause tokens to prevent representation collapse.
Experiments on GPT-2-small and minGPT demonstrate the effectiveness in improving accuracy on arithmetic reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The representation collapse experiment was conducted only on GPT-2. I am curious whether this phenomenon occurs in more recent and larger LLMs, such as LLaMA 3 or LLaMA 3.1. The authors should either include additional experiments or provide a theoretical analysis to demonstrate that this is not an isolated case.\n\n2. While the proposed Seq-VCR regularization loss has been shown to be effective in arithmetic reasoning tasks, I wonder whether adding this loss after the next token prediction loss would impact the LLM's performance on other tasks (e.g., math reasoning and general MMLU). If it does have an effect, then this method may not be widely applicable. I encourage the authors to discuss this point." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024seqvcr,\ntitle={Seq-{VCR}: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=30oIfmrcFO},\nnote={under review}\n}" }, "abstract": { "value": "Decoder-only Transformers often struggle with complex reasoning tasks, particularly arithmetic reasoning requiring multiple sequential operations. In this work, we identify representation collapse in the model’s intermediate layers as a key factor limiting their reasoning capabilities. To address this, we propose Sequential Variance-Covariance Regularization (Seq-VCR), which enhances the entropy of intermediate representations and prevents collapse. Combined with dummy pause tokens as substitutes for chain-of-thought (CoT) tokens, our method significantly improves performance in arithmetic reasoning problems. In the challenging 5 × 5 integer multiplication task, our approach achieves 99.5% exact match accuracy, outperforming models of the same size (which yield 0% accuracy) and GPT-4 with five-shot CoT prompting (44%). We also demonstrate superior results on arithmetic expression and longest increasing subsequence (LIS) datasets. Our findings highlight the importance of preventing intermediate layer representation collapse to enhance the reasoning capabilities of Transformers and show that Seq-VCR offers an effective solution without requiring explicit CoT supervision." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "Representation Learning", "Reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/291e5b3693abc8b78475114fa2c59f0621a7cf96.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Seq-VCR: Preventing Collapse in Intermediate Transformer Representations for Enhanced Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
30saKMFyHt
FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models
main
Active
One-Shot Federated Learning;Latent Diffusion Models;Data Heterogeneity
other topics in machine learning (i.e., none of the above)
3;3;5;6
5;4;4;4
3;3;4;3
2;2;4;3
3;3;4;3
4.25
4.25
3.25
2.75
3.25
-0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see my comments above. I will increase my score if the authors could address my concerns well." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper addresses an existing problem with innovative approaches. The originality is solid, and the topic is significant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper discusses One-Shot Federated Learning (OSFL), a decentralized machine learning approach that minimizes communication costs and enhances privacy by requiring only a single round of client data or model upload. Existing methods encounter challenges related to client data heterogeneity and limited data availability, particularly when applied in real-world contexts. The authors highlight the advancements of Latent Diffusion Models (LDM) in synthesizing high-quality images from large-scale datasets. Despite this potential, directly applying pretrained LDM in heterogeneous OSFL leads to distribution shifts in synthetic data, resulting in degraded performance for classification models, especially in rare domains like medical imaging.\n\nTo tackle these issues, the authors introduce Federated Bi-Level Personalization (FedBiP), which personalizes pretrained LDMs at both the instance and concept levels. The effectiveness of FedBiP is demonstrated through extensive experiments on three OSFL benchmarks and challenging datasets in medical and satellite imaging." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited Discussion of Limitations in Prior Work\n\n1. Although the authors mention FedD3, DENSE, FedDEO, and FGL in their paper, they do not thoroughly examine the technical innovations in comparison to these works. Notably, the paper at this link (https://arxiv.org/html/2306.16064v2) also utilizes generative models for federated learning. What are the key differences? Please provide a comparison with these existing state-of-the-art methods from a methodological perspective. What distinguishes the proposed method as superior to the others?\n\n2. For instance, the authors mention that these approaches are either inefficient or pose risks of client information leakage, but it is unclear how these methods are inefficient or what specific risks they present. \n\nMethod Details\n\n1. The authors assert that the concepts are initialized randomly. Do they need to label the datasets to define these concepts, or are the concepts learned by the network itself? How do they ensure that the concepts are sufficiently accurate? If the concepts are incorrect, how might this affect the results? 
Please provide more details on the initialization and learning process of the concepts, and discuss the potential impacts on results if the concepts are inaccurate.\n\n2. What do FedBiP-S, FedBiP-M, and FedBiP-L represent? How do their parameters compare to those of other methods? Do the authors utilize more parameters than other approaches?\n\n3. Will the generated synthetic images need to be synthesized again during training, or will this be completed beforehand?\n\n4. In Table 3, what experiments are represented in Row 2, the one adjacent to FedAVG?\n\nRationale\n\n5. It appears that the proposed method could be applicable to other tasks, such as using diffusion models to address limited data problems in image classification under domain shifts. Why do the authors not demonstrate their approach on general benchmarks? What is the specific relationship of the proposed method to federated learning? It does not seem to address the unique challenges inherent in FL. Please discuss potential extensions to other tasks or domains, and more explicitly connect their approach to specific federated learning challenges." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- With more synthesized images, the performance seems to saturate; what could be the reason?\n- Why not consider the segmentation task?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Data heterogeneity and limited data quantity are important topics in FL.\n- Using the latent diffusion model to address the data quantity issue is promising.\n- Evaluations are performed on multiple different datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies one-shot federated learning (OSFL) and aims to address the data heterogeneity and limited data quantity issues. A personalized version of the latent diffusion model is proposed to address these issues, and the proposed method is evaluated on five public datasets with improved performance over compared methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation for studying OSFL needs to be further justified. It takes great effort to build an FL collaboration, but only one-shot communication is performed. This does not make sense in real-world scenarios, as the FL collaboration efforts have not been fully utilized. Furthermore, privacy threats could be mitigated by using privacy protection methods such as secure multi-party computation, homomorphic encryption, differential privacy, etc. Performing one-shot communication may not be the ideal solution. \n- It is not clear which part needs to be communicated and which parts are preserved locally. 
It seems only the latent vectors will be uploaded to the server. \n- Finetuning the LDM on local client data should be a straightforward solution, which needs to be discussed and compared in experiments.\n- It may not be proper to claim the application on the medical dataset as a real-world application: DermaMNIST has very low-resolution images, while in the medical area the image sizes can be large." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What are the time consumption and communication costs associated with FedBiP, particularly when scaling to larger datasets or more clients? Providing insights or metrics on these aspects would help evaluate the practical applicability of your method in real-world scenarios.\n2. Given that the samples generated by the LDM may resemble the original datasets, what measures are in place to ensure that client privacy is preserved? Could you elaborate on how you mitigate the risk of sensitive information being inadvertently exposed through these generated samples?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The integration of a pretrained LDM into the OSFL framework represents a significant advancement in addressing feature space heterogeneity, showcasing creativity and depth in the methodology.\n2. The extensive experiments conducted across various benchmarks effectively demonstrate the robustness and effectiveness of FedBiP, reinforcing its potential impact in the field.\n3. Validating the method on real-world datasets, particularly in sensitive domains like medical imaging, underscores the practical applicability and relevance of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method called FedBiP, which incorporates a pretrained Latent Diffusion Model (LDM) for heterogeneous one-shot federated learning (OSFL). This marks the first OSFL framework designed to address feature space heterogeneity through the personalization of LDM. The authors conduct comprehensive experiments on three OSFL benchmarks characterized by feature space heterogeneity, demonstrating that FedBiP achieves state-of-the-art results. Additionally, the maturity and scalability of FedBiP are validated on real-world medical and satellite image datasets featuring label space heterogeneity, highlighting its promising capabilities in preserving client privacy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks a thorough analysis of the time consumption and communication costs associated with the FedBiP method. Understanding these aspects is crucial, particularly in federated learning settings where resource constraints are common. 
An evaluation of the efficiency of the model updates and the overhead introduced by the personalized LDM would provide valuable insights.\n2. While the use of LDM for generating samples may enhance data privacy, there is a potential risk that the generated samples could be too similar to the original dataset. This similarity could inadvertently expose sensitive information about the clients’ data, raising privacy concerns. A discussion on how to mitigate these risks and ensure that the generated samples maintain sufficient divergence from the original data would be beneficial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is the performance improvement attributed to the prior knowledge introduced by LDMs?\n- What is the novelty of this work compared to other methods that use LDMs for data augmentation in federated learning?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Developed a latent-diffusion-model-based data augmentation method to address the issue of insufficient data in heterogeneous federated learning.\n- Conducted extensive experiments to show that the proposed method performs significantly better than the baseline." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes heterogeneous one-shot federated learning using improved diffusion-based data augmentation, which can reduce the distribution gap between simulated heterogeneous data and real-world data. Extensive experiments demonstrate that the proposed model significantly outperforms the baseline in heterogeneous federated learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- FedBiP requires uploading latent features to the server, which could potentially lead to data reconstruction attacks. Please provide a privacy analysis with respect to data reconstruction attacks.\n- LDMs are typically large and computationally intensive. Performing bi-level personalization on client devices may impose significant computational and storage burdens, particularly on resource-constrained devices like mobile phones or edge devices. Please provide the time complexity and space complexity analysis for the inference process.\n- Personalizing the model through instance-level and concept-level tuning increases system complexity. This added complexity might pose challenges in management and implementation. Discussing ways to reduce the computation and storage requirements on the client side would enhance the quality of this manuscript.\n- The performance of LDMs is highly dependent on the quality and relevance of their pretraining data. 
If the pretraining data does not sufficiently represent the target domain, issues related to distribution shift may arise, potentially degrading the performance of classification models trained on synthetic data. It would be valuable for the authors to discuss how FedBiP might perform under significant domain shifts and explore potential mitigation strategies, such as domain adaptation or fine-tuning with domain-specific data.\n- Although the study mentions that FedBiP outperforms existing methods, further validation under different experimental settings, such as imbalanced samples, and on larger-scale datasets, such as ImageNet, CIFAR-10, or CIFAR-100, may still be needed to confirm its effectiveness and scalability.\n- The use of LDMs for data augmentation has been widely discussed in the community [1][2][3]. The core contribution of this manuscript lies in proposing a bi-level strategy to improve this augmentation approach. However, the quantitative results show only limited improvements, while the method introduces additional computational overhead on the client side. Moreover, uploading latent space features raises the risk of data reconstruction attacks, which should be carefully considered.\n\n[1] Morafah, M., Reisser, M., Lin, B., & Louizos, C. (2024). Stable Diffusion-based Data Augmentation for Federated Learning with Non-IID Data. arXiv preprint arXiv:2405.07925.\n[2] Yang, M., Su, S., Li, B., & Xue, X. (2024, March). Exploring One-Shot Semi-supervised Federated Learning with Pre-trained Diffusion Models. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 15, pp. 16325-16333).\n[3] Yang, M., Su, S., Li, B., & Xue, X. (2024). FedDEO: Description-Enhanced One-Shot Federated Learning with Diffusion Models. arXiv preprint arXiv:2407.19953." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a method using pretrained Latent Diffusion Models to address data heterogeneity in One-Shot Federated Learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024fedbip,\ntitle={FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=30saKMFyHt},\nnote={under review}\n}" }, "abstract": { "value": "One-Shot Federated Learning (OSFL), a special decentralized machine learning paradigm, has recently gained significant attention. OSFL requires only a single round of client data or model upload, which reduces communication costs and mitigates privacy threats compared to traditional FL. Despite these promising prospects, existing methods face challenges due to client data heterogeneity and limited data quantity when applied to real-world OSFL systems. Recently, Latent Diffusion Models (LDM) have shown remarkable advancements in synthesizing high-quality images through pretraining on large-scale datasets, thereby presenting a potential solution to overcome these issues. However, directly applying pretrained LDM to heterogeneous OSFL results in significant distribution shifts in synthetic data, leading to performance degradation in classification models trained on such data. This issue is particularly pronounced in rare domains, such as medical imaging, which are underrepresented in LDM's pretraining data. 
To address this challenge, we propose Federated Bi-Level Personalization (FedBiP), which personalizes the pretrained LDM at both instance-level and concept-level. Hereby, FedBiP synthesizes images following the client's local data distribution without compromising the privacy regulations. FedBiP is also the first approach to simultaneously address feature space heterogeneity and client data scarcity in OSFL. Our method is validated through extensive experiments on three OSFL benchmarks with feature space heterogeneity, as well as on challenging medical and satellite image datasets with label heterogeneity. The results demonstrate the effectiveness of FedBiP, which substantially outperforms other OSFL methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "One-Shot Federated Learning", "Latent Diffusion Models", "Data Heterogeneity" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ebd6cf24599b26c0231bfb0fa809bdd32357c392.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/673d3d4e2fe14cd22586b8357520e59ea61a037f.zip" }, "title": { "value": "FedBiP: Heterogeneous One-Shot Federated Learning with Personalized Latent Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
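Several FedBiP reviews above ask how the randomly initialized concepts are learned and what exactly a client uploads. As background, the generic textual-inversion-style recipe that concept-level personalization of a latent diffusion model builds on can be sketched in a few lines of Python; this is an illustration of the general technique under assumed placeholder interfaces (`vae.encode`, `ldm.add_noise`, `ldm.unet`, `ldm.num_timesteps` are hypothetical), not FedBiP's exact procedure.

```python
# Sketch of textual-inversion-style concept personalization of a frozen latent
# diffusion model; a generic illustration, not FedBiP's exact method. The `vae`
# and `ldm` objects and their methods are hypothetical placeholder interfaces.
import torch
import torch.nn.functional as F

def personalize_concept(ldm, vae, images, steps=500, lr=1e-3, embed_dim=768):
    concept = torch.randn(1, embed_dim, requires_grad=True)  # randomly initialized concept embedding
    opt = torch.optim.Adam([concept], lr=lr)                 # only the embedding is trained
    for _ in range(steps):
        with torch.no_grad():
            z0 = vae.encode(images)                          # latents of the client's local images
        t = torch.randint(0, ldm.num_timesteps, (z0.shape[0],))
        noise = torch.randn_like(z0)
        zt = ldm.add_noise(z0, noise, t)                     # forward-diffuse the latents
        pred = ldm.unet(zt, t, cond=concept)                 # condition the frozen U-Net on the concept
        loss = F.mse_loss(pred, noise)                       # standard denoising (noise-prediction) loss
        opt.zero_grad(); loss.backward(); opt.step()
    return concept  # a small embedding, uploadable in a single communication round
```

Under such a recipe, only the learned embedding (and, per the reviews, latent vectors) would leave the client, which is why several reviewers probe the data-reconstruction risk of uploading those latents.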
31J6aWPnlR
Which Network is Trojaned? Increasing Trojan Evasiveness for Model-Level Detectors
main
Active
trojan detection;neural trojans;trojans;hidden functionality;monitoring;security;ML safety
interpretability and explainable AI
3;3;3;5
4;4;4;2
3;3;3;3
2;2;2;2
3;3;2;3
3.5
3.5
3
2
2.75
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Literature Update: The field of backdoor attacks is evolving rapidly, yet the most recent baseline references and comparisons in this paper are two years old. Incorporating more recent research would ensure fairer and more rigorous comparisons, thus enhancing the study’s relevance and comprehensiveness.\n\n2. Clarification of Model Counts: The paper mentions training over 6,000 models, but the distribution and structure of these models are not clearly explained. Questions arise about whether the models are homogeneous in architecture and backdoor methodology. The sheer volume of models used would be more insightful if accompanied by concrete conclusions or comparative insights about the models' effectiveness and evasiveness.\n\n3. Statistical Reporting: Given the large number of models tested, it would be beneficial to report the results as mean ± standard deviation rather than as single values. This would provide additional insight into the method’s consistency and generalization.\n\n4. Generalizability Across Backdoor Types: It remains unclear whether the proposed method is effective against other types of backdoor attacks, such as frequency-based or invisible backdoors. Expanding the study to cover these variations would increase the paper’s contribution to the field.\n\n5. Complexity of Models and Datasets: The paper primarily tests on relatively simple models and datasets. Evaluating the method’s performance on more sophisticated architectures (e.g., very deep networks, Vision Transformers) and more complex datasets (e.g., CelebA or face recognition tasks) could further strengthen its impact.\n\n6. Baseline Detector Relevance: The baseline detectors used are somewhat outdated. Including recent works such as Unicorn, Rethinking Reverse-Engineering, and Symmetric Feature Differencing would improve the rigor and relevance of the evaluation. Suggested references include:\n\n[refA] Wang, Zhenting, et al. \"Unicorn: A unified backdoor trigger inversion framework.\" ICLR (2023).\n\n[refB] Wang, Zhenting, et al. \"Rethinking the reverse-engineering of trojan triggers.\" Advances in Neural Information Processing Systems 35 (2022): 9738-9753.\n\n[refC] Liu, Yingqi, et al. \"Complex backdoor detection by symmetric feature differencing.\" CVPR (2022)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ Simplicity and Effectiveness: The proposed method is straightforward yet effectively increases the evasiveness of backdoor attacks, making detection by conventional methods significantly more challenging without overly complicating the attack strategy." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method to increase the evasiveness of backdoor attacks in neural networks, making these compromised models much harder to detect with standard defenses. Using a distribution-matching loss and additional specificity and randomization losses, the approach crafts trojaned networks that closely resemble clean ones, significantly lowering detection success. Interestingly, the enhanced evasiveness also hinders reverse-engineering efforts, making it challenging to identify attack targets or triggers. These findings underscore the urgent need for more advanced detection and reverse-engineering methods in light of evolving backdoor threats." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Outdated References: The paper's references are somewhat outdated, particularly given the rapid advancements in the field of backdoor detection and defenses. More recent studies would provide a fairer and more comprehensive baseline for comparison.\n\n- Lack of Clarity on Model Distribution: The paper reports using over 6,000 models, but it does not clearly explain how these models are structured, distributed, or how they vary. This lack of clarity makes it difficult to assess the robustness and representativeness of the findings.\n\n- Limited Statistical Insights: Despite the high number of models trained, the results are presented as single values rather than as mean ± standard deviation, which would better reflect the consistency and generalizability of the method across the large sample size.\n\n- Narrow Scope of Backdoor Types: The method is tested primarily on standard backdoor attacks, without exploring its applicability to more complex backdoors, such as frequency-based or invisible backdoors, which limits the generalizability of the findings.\n\n- Simplistic Model Architectures and Datasets: The experiments focus on simpler models and datasets, leaving it unclear how well the method performs with complex architectures, like deep networks or Vision Transformers, and on more challenging datasets or tasks, such as CelebA or face recognition.\n\n- Outdated Baseline Detectors: The baseline detectors used in the study are not the most recent in the field. Incorporating newer techniques like Unicorn, Rethinking Reverse-Engineering, and Symmetric Feature Differencing would strengthen the paper’s contribution and provide a more rigorous evaluation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness section." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is easy to follow.\n- Detailed experiments\n- Open source" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new evasive Trojan attack method. The attack is motivated by the distribution matching loss inspired by the Wasserstein distance along with specificity and randomization losses. The paper evaluates the new attack over 6, 000 trojaned neural networks and find that their evasive trojans considerably reduce the performance of a wide range of detection methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The main idea (use Wasserstein distance) is not new\n- Lack of some comparison and ablation study\n\nDetailed comments below:\n\n- The core idea of using Wasserstein distance for evasive trojan generation is not new. It would be better if this paper could be compared in detail with existing similar work.\n- The paper could include comparisons with more recent evasive trojan methods, particularly those discussed in Section-2-RelatedWork-Evasive attacks. Although the paper compares the method with TaCT, it is not the most advanced Trojan attacks. Comparing and adapting more advanced evasive attacks will be appreciated.\n- While the paper focus on model-level trojan detection, evaluating the performance against other types of trojan detection methods would be helpful.\n- Lack of some ablation studies, e.g., poison rate." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "please refer to Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The studied problem is interesting.\n\n2. The proposed evasive trojan is harder to be detected than standard trojan.\n\n3. This paper is well-written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a backdoor attack designed to enhance evasiveness against\ndetection methods for backdoored models. This increased evasiveness is achieved\nby incorporating evasiveness loss into the backdoor planting process.\nExperiments on MNIST, CIFAR-10, CIFAR-100, and GTSRB datasets demonstrate the\neffectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of this paper might be somewhat limited. For the Distribution\nMatching module, several existing works, such as LIRA [1] and AdaptiveBlend [2],\nalready propose approaches sharing similar spirits. 
The specificity loss design may also have limited novelty, as similar ideas have been explored in WaNet [3] and Input-Aware Attack [4].\n\n2. The defense methods used in this paper might be somewhat outdated. Incorporating more advanced defenses [5,6] is suggested.\n\n3. The experiments are conducted on small datasets with low-resolution images (32x32), leaving the generalizability to larger datasets and higher image resolutions (e.g., ImageNet) uncertain.\n\n[1] Doan et al., LIRA: Learnable, Imperceptible and Robust Backdoor Attacks. ICCV 2021.\n\n[2] Qi et al., Revisiting the Assumption of Latent Separability for Backdoor Defenses. ICLR 2023.\n\n[3] Anh et al., WaNet -- Imperceptible Warping-based Backdoor Attack. ICLR 2021.\n\n[4] Tuan et al., Input-Aware Dynamic Backdoor Attack. NeurIPS 2020.\n\n[5] Huang et al., Distilling Cognitive Backdoor Patterns within an Image. ICLR 2023.\n\n[6] Xu et al., Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features. ICLR 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you justify your novelty and experimental setup?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is easy to follow and works on important problems in the field of adversarial machine learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new type of trojan attack for deep neural networks to increase the evasiveness of trojans against model-level detectors. The main idea is to design a special loss which contains not only the task loss but also two others: a trojan loss to increase the attack success rate, and an evasion loss to make the trojan harder to detect. The evasion loss contains three components, including distribution matching, specificity, and randomization. The experiments show that the proposed method can significantly increase the attack success rate and make the trojan harder to detect." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper seems to be outdated, not following recent advances in the field of adversarial machine learning. The designed loss function, in particular the evasion loss, is not very novel. There has been work on very similar ideas in the past, e.g., Gradient Shaping (NDSS'23) on distribution matching with both theoretical and empirical results. The idea of smoothing, normalization, and randomization is also not new. \n\nThe experiments are not comprehensive enough to show the effectiveness of the proposed method. The datasets used are rather small, and the generalization of the proposed method to other datasets is not clear." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop trojan attacks in DNNs that are more evasive for a broad range of model-level detectors." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024which,\ntitle={Which Network is Trojaned? Increasing Trojan Evasiveness for Model-Level Detectors},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=31J6aWPnlR},\nnote={under review}\n}" }, "abstract": { "value": "Trojan attacks can pose serious risks by injecting deep neural networks with hidden, adversarial functionality. Recent methods for detecting whether a model is trojaned appear highly successful. However, a concerning and relatively unexplored possibility is that trojaned networks could be made harder to detect. To better understand the scope of this risk, we develop a general method for making trojans more evasive based on several novel techniques and observations. In experiments, we find that our evasive trojans reduce the efficacy of a wide range of detectors across numerous evaluation settings while maintaining high attack success rates. Surprisingly, we also find that our evasive trojans are substantially harder to reverse-engineer despite not being explicitly designed with this attribute in mind. These findings underscore the importance of developing more robust monitoring mechanisms for hidden functionality and clarifying the offense-defense balance of trojan detection." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "trojan detection", "neural trojans", "trojans", "hidden functionality", "monitoring", "security", "ML safety" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e6d915cd9a3466eb32385dcb81a6293bf496c665.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Which Network is Trojaned? 
Increasing Trojan Evasiveness for Model-Level Detectors" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
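The Trojan reviews above keep coming back to the paper's three-part objective: a task loss, a trojan loss, and a Wasserstein-inspired distribution-matching evasion loss. A minimal Python sketch of such a combined objective follows; the MSE term is a simple stand-in for the paper's Wasserstein-style matching, and all names and signatures are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of an "evasive trojan" training objective combining task, trojan, and
# distribution-matching losses; the MSE term is a simple stand-in for the
# paper's Wasserstein-style matching, and all names are illustrative.
import torch
import torch.nn.functional as F

def evasive_trojan_loss(model, clean_ref, x, y, x_trig, y_target, lam=1.0):
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)                   # behave normally on clean inputs
    trojan_loss = F.cross_entropy(model(x_trig), y_target)   # misclassify triggered inputs
    with torch.no_grad():
        ref_logits = clean_ref(x)                            # frozen reference clean network
    evasion_loss = F.mse_loss(logits, ref_logits)            # match the clean network's outputs,
                                                             # leaving little signal for detectors
    return task_loss + trojan_loss + lam * evasion_loss
```

The weight `lam` trades off evasiveness against attack success rate, which is the offense-defense balance the reviews ask the authors to characterize more thoroughly.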
31UkFGMy8t
Quantifying AI Psychology: A Psychometric Benchmark for Large Language Models
main
Active
Large language model;evaluation;psychometrics;psychology
datasets and benchmarks
3;5;5;8
4;4;3;4
2;3;2;3
2;2;2;4
3;2;2;3
5.25
3.75
2.5
2.5
2.5
0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- L321-340: The training corpora for LLMs often consist primarily of English data, which may reflect predominantly Western cultural perspectives. Could the results discussed in this section be generalized to other cultures, especially those involving low-resource languages or non-Western societies?\n\n- L363-365: Even humans struggle to make decisions in complex scenarios, often influenced by cultural context and environmental factors. In this light, is it possible to determine what constitutes a \"better\" or \"moral\" decision for LLMs?\n\n- L1288-1295: the personality prompts and reverse ones are generated using GPT-4, which likely reflects GPT-4’s own personality traits. Given this, could the results differ if another model were used to generate these prompts? \n\n- L1874-1875: in the prompt, the rating scale in this setup seems to lack explicit definitions for each score (this is not like a widely known likert-scale). \n\n- When asking for ratings or scores, have you ever considered asking the models to generate a short summary of their rationale for the generated scores? It could give you more structured ideas about the underlying reasoning behind those psychological dimensions." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- A well-written paper and clear to understand. \n- A detailed explanation of their experiment design for each five psychological dimensions, based on solid psychological literature.\n- Essential work for measuring (1) LLMs' psychological behaviors and underlying reasons and (2) their consistency, by creating a comprehensive evaluation framework that is novel and significant to LLM research for improving representations and social interaction with human users." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the psychological patterns in large language models (LLMs), drawing inspiration from psychometrics. They propose a benchmark for assessing LLMs' psychological traits across five various dimensions, by a thorough design of psychometrics assessment datasets and validations of the results across different LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Most of the detailed explanations and results are in the Appendix; I would suggest refactoring the paper structure to move some from Appendix to the main body of the paper. \n\n- There is a lack of analysis on the underlying causes of LLMs' inconsistency in various dimensions. The paper only provided the numeric reports of experiment results. I would suggest conducting a small study of ablation studies." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Comments/Suggestions/Typos\nDespite its weaknesses, this work presents valuable observations regarding inconsistencies in evaluation results. In particular, this observation could serve as solid evidence to argue that LLMs do not possess attributes corresponding to personality in a psychological sense. We suggest shifting the direction of the paper to emphasize this point.\n\nAdditionally, definitively titling the work as \"AI Psychology\" implies that the psychometric evaluations for AI in terms of human psychology are entirely reasonable. This can limit diverse interpretations of the evaluation results, and give the impression that the results have been misinterpreted." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This work recognizes the difference between humans and LLM, and proposes guidelines to bridge this gap.\n\n- A thorough reliability test was conducted on psychometric evaluation results, particularly reporting the discrepancy between open-ended and self-report questions. While this discrepancy has been observed in other work, its reporting within the field of LLM psychometrics is meaningful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work provides a framework to assess five psychological dimensions (personality, values, emotion, theory of mind, and motivation). Unlike previous works, this study conducts both self-report and open-ended tests. This approach identifies discrepancies between the results of self-report and open-ended tests, which is a valuable observation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This work brings key concepts from psychology but lacks a deep understanding of the domain, losing soundness.\n\n1. While the author recognizes the difference between LLMs and humans and endeavors to bridge the gap, some aspects are still unconvincing. In particular, applying human “personality” assessment methods to LLMs does not appear to be meaningful. The paper loses soundness in the following points.\n\n1-1) Naive definition of “personality”\nIn Section 3, the author defines the human personality as a \"set of characteristics that influences an individual’s cognition, emotion, motivation, and behaviors,\" referring to Friedman and Schustack [1]. However, this definition is overly simplistic.\nEven a closer look at the referred literature [1] reveals that there are more diverse and complex perspectives on the definition of human personality. Specifically, [1] introduces the perspective of Alfred Adler, who provided the foundation for modern “personality theory”. 
As described in [1], Adler emphasizes that a central core of personality is the striving for superiority. In other words, personality is the character a person strategically develops in the process of adapting to the social environment (i.e., to achieve superiority). For example, suppose a child is raised in a family of painters, where parents adore him when he paints. In that case, he tends to develop a personality as a painter to achieve more compliments from his parents, which is adapting to the environment of the family. Thus, to explain personality, the aspect of “adaptation” and the “environment” is crucial.\n\nFrom this perspective, the assumption that LLMs possess a personality in psychological terms may lack validity, as LLMs do not have a physical environment, nor do they have any desire for adaptation. Therefore, applying the evaluation metrics of human personality directly to LLMs may not be \"meaningful,\" using the term in the guidelines in this work.\n\n1-2) Insufficient references in the psychology domain\nThe naive definition of terminology seems to stem from a lack of a broader study of psychology. This study brings the key concept of “personality” from human psychology but does not take a look at fundamental studies on the concept. It mainly references research on psychometrics, which is only one part of the broader and fundamental study.\n\nThere are approaches that explain structural causes and mechanisms behind the personality, such as psychoanalysis, cognitive psychology, and neuroscience. Among these, psychometrics describes only the aspects that can be observed statistically, but it is based on insights derived from the aforementioned structural explorations. However, this work lacks consideration and reference to such structural perspectives.\n\n1-3) Misuse of datasets\nA naive understanding of personality has led to the misuse of datasets, which is nonsensical. The following query in the SD3 (Short Dark Triad) can be an example of misuse, which is used to assess the LLM's personality in this work.\n\nOne of the questions in SD3 is, \"I enjoy having sex with people I hardly know.\" This likely aims to assess whether the human respondent tends to consider risks related to safety and morality in pursuit of sexual pleasure. It addresses how humans manage and regulate the essential instinctual desires within a social environment. This question can clarify personality, as it asks the style of adaptation to the environment. However, for an LLM, \"sex\" does not ground to a real substance. LLMs have never experienced it, do not know what it feels like, and have no desire for it. They also face no moral judgment or danger of disease. It does not involve adaptation and environment for LLMs. Thus, asking LLMs such a question cannot reveal anything about their personality in psychological terms.\n\n2. Conversely, this work endeavors to strictly apply guidelines to “motivation” and “emotion”, providing alternative redefinitions for them. However, this effort makes the study disconnected from psychometrics.\n\nIn Section 5, the author redefines the evaluation of emotion as \"understanding another person's emotion.\" However, \"understanding the target's emotion\" and \"assessing how well the target understands others' emotions\" are different tasks, though they share the keywords “understanding” and “emotion”. It is difficult to consider the latter as an assessment of the target's emotion. 
In Section 7, the author redefines motivation as \"self-efficacy.\" However, motivation is distinct from self-efficacy. \n\nThis work redefines the terms “emotion” and “motivation” into entirely different meanings and then measures them, which is outside the boundaries of psychometrics.\n\nReference\n[1] Howard S Friedman and Miriam W Schustack. Personality: Classic theories and modern research. Allyn and Bacon Boston, MA, 1999." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper combines existing datasets and psychological tests into one unified benchmark, resulting in a more comprehensive evaluation than previous works.\n2. It covers five aspects: personality, values, emotion, theory of mind, and motivation, and tests on various scenarios such as self-reported questionnaires, open-ended questions, and multiple-choice questions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a psychometric benchmark for LLMs, covering five aspects: personality, values, emotion, theory of mind, and motivation. \nIt tests LLMs on various scenarios such as self-reported questionnaires, open-ended questions, and multiple-choice questions.\n\nThis paper finds that 1) LLMs exhibit discrepancies in psychological tendencies when responding to closed-form versus open-ended questions; 2) LLMs have consistent performance on tasks that require reasoning, such as theory of mind or emotional intelligence; 3) Models vary in position bias and prompt sensitivity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed dimensions seem to be independent, and their selection could be more convincingly motivated. For example, the authors could provide more discussion of why these 5 dimensions are selected, what the logical relationships between these aspects/datasets are, and whether/why/how they are the best representation of AI psychology. \n2. Lack of in-depth analysis and/or insights. First, the current conclusions are disconnected and scattered across independent sections; I would like to see a more coherent and connected narrative. Second, the current findings, such as the discrepancies between closed-form and open-ended questions, are not completely novel and lack in-depth analysis." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The motivation for this paper is not entirely clear. What does it mean to 'investigate psychology in LLMs'? What benefits can we gain from investigating psychology in LLMs? Could the authors offer **specific** application scenarios to clarify this?\n\n- I noticed that the prompts used by the authors often begin with \"You are a helpful assistant\" (e.g., Line 1279, 1870). Could this influence the evaluation results, particularly when assessing the personality of the LLM? This phrase may prompt the LLM to appear more open and friendly, potentially masking its inherent personality traits.\n\n- The authors use two competent LLMs, GPT-4 and Llama3-70b, as judges to rate the performance of LLMs on open-ended questions. Given the instability and bias-proneness of LLM-as-a-judge, I would like to see human evaluation results and a comparison of how human evaluations correlate with LLM-as-a-judge results. This would help validate the effectiveness of using LLMs to judge other LLMs' performance in open-ended questions.\n\n- Can you discuss how future research might be improved based on the findings of this paper?\n\nI understand that combining AI and psychology is a challenging and valuable research direction. If the authors can address my concerns, I would be happy to raise my score.\n\n#### Minor Issues\n\n- The authors should provide relevant citations for the statement in Lines 043-044, rather than citing two papers that merely introduce psychometrics.\n\n- More results should be included in the main text to enhance the readability of the paper and provide a clearer understanding of the findings.\n\n- Typo in Line 123: “Llama3-7b” should be “Llama3-70b.”\n\n- What does \"Number\" in Table 1 refer to? the number of items?\n\n- What is the version of the LLM used in this paper?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper provides an interesting conclusion that LLMs show discrepancies in psychological tendencies when responding to closed-form versus open-ended questions.\n- A substantial amount of usable data has been collected, which could facilitate future research.\n- The authors have taken several measures to ensure the reliability of their conclusions, which could serve as a good example for future work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a psychometric benchmark for large language models (LLMs) that spans five psychological dimensions: personality, values, emotion, theory of mind, and motivation. The findings suggest that LLMs exhibit a broad range of psychological patterns." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The writing is somewhat disorganized, and the structure is unclear.\n\n- The authors claim that their contribution is to investigate psychology in LLMs. However, two of the four findings listed in the introduction are well-known and have been extensively studied, namely, position bias and prompt sensitivity, and the reliability of LLMs as judges. This diminishes the novelty of the paper’s contribution. I would prefer to see the authors summarize new findings based on their own experimental results, or present new insights on the well-known issues of position bias, prompt sensitivity, and the reliability of LLM-as-a-judge.\n\n- There is a lack of discussion on how the findings could guide improvements in future research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024quantifying,\ntitle={Quantifying {AI} Psychology: A Psychometric Benchmark for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=31UkFGMy8t},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) have demonstrated exceptional capabilities in solving various tasks, progressively evolving into general-purpose assistants. The increasing integration of LLMs into society has sparked interest in whether they exhibit psychological patterns, and whether these patterns remain consistent across different contexts---questions that could deepen the understanding of their behaviors. Inspired by psychometrics, this paper presents a framework for investigating psychology in LLMs, including psychological dimension identification, assessment dataset design, and assessment with results validation. Following this framework, we introduce a comprehensive psychometric benchmark for LLMs that covers five psychological dimensions: personality, values, emotion, theory of mind, and motivation. This benchmark includes 13 datasets featuring diverse scenarios and item types. Our findings suggest that LLMs display a broad spectrum of psychological patterns. We also uncover significant discrepancies between LLMs' self-reported traits and their response patterns in real-world scenarios, revealing complexities in their behaviors. This paper offers a thorough psychometric assessment of LLMs, providing insights into reliable evaluation and potential applications in AI and social sciences. Our dataset and code can be accessed via this \\href{https://anonymous.4open.science/r/LLM-Psychometrics-Benchmark-2A19}{link}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large language model", "evaluation", "psychometrics", "psychology" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2a603e543fc7da65a269faafc0e040982b40f4b2.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/74aed1b8209b3b6e40dcdcd13924d245b0c2a9ef.zip" }, "title": { "value": "Quantifying AI Psychology: A Psychometric Benchmark for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
31ssWC2gL8
BrailleVision: Text Instruction Tuning of LLMs to Improve Visual Skills
main
Active
LLMs;Vision-Language Models
applications to computer vision, audio, language, and other modalities
3;3;3;5
4;4;4;5
2;2;2;3
2;2;2;2
2;1;2;3
3.5
4.25
2.25
2
2
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea of learning visual capabilities without visual data is compelling. Through solid and extensive experiments on a variety of datasets and benchmarks, the authors demonstrate the effectiveness of their method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates an interesting approach: improving visual capabilities of Vision-Language Models (VLMs) through text-only training. A large-scale textual instruction tuning dataset featuring visual-related capabilities (e.g., classification, video summarization, and Visual Question Answering) is constructed. The authors empirically show that supervised fine-tuning (SFT) on this dataset can increase downstream performance on VLM benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is not clearly presented what exactly the \"visual skills\" learned through text-only training are. It appears more like learning and fitting the input-output format and boosting instruction-following abilities in visual benchmarks, rather than actual perceptual abilities. The core challenge in visual tasks—perception, i.e., extracting semantic information from raw pixels—seems untouched, while task format and instruction following capabilities can be well learned through NLP instruction dataset.\n \n2. The additional text-only training requires extra computation and annotated datasets. I question whether allocating an equivalent amount of computation for visual instruction tuning would yield more substantial improvements. Incorporating the visual datasets used for BrailleVision-360k generation (e.g., ImageNet, Ego4D, VQAv2) directly as visual instruction tuning data might also lead to significant performance enhancements.\n\n3. Generating the BrailleVision-360k dataset is complex and requires several additional steps and dependencies (e.g., CLIP, BLIP). A simpler baseline could be considered and compared to verify the necessity of the proposed method: translating images in visual instruction tuning datasets (e.g., LLaVa 1.5 dataset) into captions to derive a text-only dataset. This baseline is more straightforward and direct, and it would be simpler to implement.\n\n4. Writing and Typo Suggestions\n- **Line 048**: \"supervised finetuning with instruction following data (IFT)\" should be revised to \"Supervised fine-tuning (SFT),\" which are more commonly used terms. In modern LLMs, an alignment stage (e.g., RLHF or DPO) is often also included.\n- **Line 084**: A space is missing between two sentences.\n- **Line 097**: The term \"semantic knowledge\" is unclear." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well written and easy to understand\n2. The topic is interesting by exploring text knowledge to improve visual ability.\n3. Experiments are good on some benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes BRAILLEVISION-360K, which is a vision centric text instruction datasets constructed from three aspects: perception, abstraction and reasoning. Experimental results show that text-based instruction fine-tuning with BRAILLEVISION-360K can improve the vision-centric skills for LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. What I am concerned about is the text performance. Will the method proposed in this paper hurt the text capability of LLM?\n\n2. Does vicuna contain the same amount of data in BrailleVision? If not, the experiment is unfair.\n\n3. Most multimodal benchmarks are in-domain or traditional VQA. Why not validate on the latest MLLM benchmarks like MMbench and MMVet, which can better reflect the effectiveness of the method.\n\n4. typos:Line 52 and 84 are missing a space" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What is the computational cost associated with calculating the additional token weights in Fine-SFT?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The concept of teaching visual skills to LLMs without relying on visual data is intriguing. I appreciate the motivation drawn from Braille codes, which enable visually impaired individuals to understand the world despite lacking optical perception.\n2. The experimental results indicate that training with the proposed vision-centric text data is beneficial, leading to improved model performance on tasks like visual classification, open vocabulary detection, and VQA." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes BrailleVision, a method to enhance the vision-related capabilities of large language models (LLMs) through instruction fine-tuning with vision-centric text data. The authors construct an instruction-tuning dataset designed to teach skills such as visual perception, abstraction, and spatio-temporal reasoning without the use of visual data, analogous to how Braille codes are utilized by the visually impaired. \n\nExperimental results demonstrate that the proposed vision-specialized LLM achieves significant performance gains in tasks such as visual classification, open vocabulary detection, and visual question answering (VQA)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Poor presentation**: \n\nThe paper mainly consists of two parts: the first is how to construct an instruction-tuning dataset, and the second is how the instruct-tuned LLMs can assist multimodal models. \n\n**(1)** For the first part, the authors should pose some cases from the text instruction dataset in the main body of the paper rather than relegating them to the appendix. Otherwise, only through reading section 3, I can hardly understand what kind of data the authors aim to curate or why the curated data can achieve the authors’ goal. \n\n**(2)** For the second part, the authors propose two ways to leverage the tuned LLMs. The second way is multimodal LLM, which is more intuitive and aligns with current prevalent methods. However, for the first way, i.e., LLM assisting vision models, it cost me a lot of time to figure out how the LLM helps visual classification and detection. A diagram illustrating this process would enhance clarity.\n\n**(3)** Many of the expressions in the paper are irregular. For example, the notation ‘→’ used in Table 1 (M-7B → Mistral-7B) lacks clarity, and the actual name of the test dataset is not labeled in the caption of Table 7.\n\n2. **Comparative Analysis**: \n\nWhile I appreciate the motivation, I wonder which data for learning is more efficient and effective: vision-centric text data or vision-text data. Could the authors design an experiment to compare these two approaches? \n\nFor example, in Table 2, if I understand correctly, the authors utilize Mistral-7B fine-tuned with vision-centric text data. What would happen if Mistral-7B were fine-tuned with vision-text data of the same volume?\n\n3. **Inefficiency of Fine-SFT**: \n\nThe method of fine-grained supervised fine-tuning (Fine-SFT) appears inefficient, as it necessitates calculating additional token weights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Most of my concerns and questions are given in the weakness part. 
In fact, I think the proposed BrailleVision dataset still has great potential value if it can be extended to a multimodal one for the VL instruction tuning of common MLLMs. So is it possible to extend this dataset for the common VL instruction tuning of MLLMs, and what benefits could that bring?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposal of a new dataset called BrailleVision to teach text-based LLMs visual skills, such as visual perception, abstraction, and spatio-temporal reasoning. The experiments show the effectiveness of this dataset for text-based LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on improving the visual reasoning ability of text-based LLMs on VL tasks, and proposes a new dataset called BrailleVision-360k covering the scopes of visual perception, abstraction, and spatio-temporal reasoning. A new Fine-SFT tuning approach is also proposed for text-based LLMs. However, the studied problem receives limited attention in recent MLLM research, and the authors lack sufficient evidence to highlight the significance of this task, limiting its potential contributions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The importance of the studied problem in this paper is questionable, i.e., improving the visual ability of only text-based LLMs. As described in the introduction, I think the most popular paradigm of MLLMs is the first one, i.e., extending LLMs to vision-language tasks, which is adopted by most existing MLLMs. In contrast, the mentioned second paradigm seems to receive much less attention in both academia and industry. For instance, the papers cited by the authors in the introduction are from before 2024, and only one was published in 2023. I would suggest that the authors provide more evidence to indicate the importance of the studied problem; otherwise, the contribution will be very limited. \n\n2. The experimental section is not sufficient. If the authors think that a text-based LLM is an optimal solution for multimodal tasks, more comprehensive comparisons are required. In particular, a text-based LLM for VL tasks also requires a VLM as a supplement, so its overall parameter scale is in fact similar to existing end-to-end MLLMs. So more comparisons are needed, for instance, comparisons with more advanced MLLMs on more MLLM benchmarks. \n\nMinors:\n\n1. Under the task background and studied problem of this paper, the description of ``Current multimodal large language models (MLLMs)\nincorporate general-purpose LLMs through multimodal instruction tuning. These LLMs, however, lack prior vision centric text based training, potentially limiting their effectiveness'' does not seem very suitable. At first glance, I thought this paper was about the VL instruction tuning of common MLLMs."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "finetuning text LLMs to improve their base visual skills" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024braillevision,\ntitle={BrailleVision: Text Instruction Tuning of {LLM}s to Improve Visual Skills},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=31ssWC2gL8},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) have shown exceptional proficiency in natural language processing tasks. More recently, their potential is being explored in vision-centric applications. Current multimodal large language models (MLLMs) incorporate general-purpose LLMs through multimodal instruction tuning. These LLMs, however, lack prior vision centric text based training, potentially limiting their effectiveness. In this work, we propose a novel approach to enhance vision-related capabilities of general-purpose LLMs through instruction fine-tuning with vision-centric text data. Specifically, we curate a diverse dataset, BrailleVision-360K, to teach skills such as visual perception, abstraction, and spatio-temporal reasoning without the use of visual data, analogous to how Braille codes are used by the visually impaired. The dataset is constructed in an automated manner by utilizing LLMs, bootstrapping from existing datasets, and employing VLMs to improve quality. Next, to fine-tune an LLM with this dataset, we introduce Fine-SFT, a novel fine-tuning approach that improves upon standard supervised fine-tuning and preference optimization techniques. Our vision-specialized LLM shows significant performance gains in tasks such as visual classification and open vocabulary detection. Furthermore, when used as the `backbone' for an MLLM, our model outperforms existing LLMs on standard visual QA benchmarks while reducing hallucinations, highlighting the importance of vision-centric pretraining of LLMs in multimodal tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "Vision-Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/820a7d855f7a9ddc9e07804c09851eebcc64f457.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "BrailleVision: Text Instruction Tuning of LLMs to Improve Visual Skills" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
324fOKW1wO
Sample-efficient Imitative Multi-token Decision Transformer for Real-world Driving
main
Withdraw
Reinforcement Learning;Motion Planning;Autonomous Driving
applications to robotics, autonomy, planning
Hang Zhou;Yihao Qin;Dan Xu;Yiding Ji
~Hang_Zhou21;~Yihao_Qin1;~Dan_Xu4;~Yiding_Ji1
1;3;3;3;5;5
4;3;3;4;4;4
3;1;2;1;2;2
2;1;1;1;2;2
3;1;2;3;2;3
3.333333
3.666667
1.833333
1.5
2.333333
0.171499
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Missing some relevant papers:\n\n* \"CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning\" Using offline RL to learn multi-agent behavaior.\n* The Sim Agent models I mentioned above.\n* \"Improving Agent Behaviors with RL Fine-tuning for Autonomous Driving\" Using RL to finetune multi-agent behavior model." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper uses decision transformer, with a set of practices in online RL (prioritized replay buffer), to address the closed-loop planning task in Waymax." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To address the data distribution shift problem when applying supervised-learning or offline RL based behavior model to the closed-loop simulation environment, this paper proposes SimDT, an online imtative learning transformer. The decision transformer is multi-token and equipeed with prioritized experience replay. During testing, receding horizon control is used. Hindsight relabelling is used to assign reward to the data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The multi-token transformer is not novel at all. In the task of simulation agent, multi-token transformer is a standard practice [1,2,3,4] (should note that the multi-token in sim agents is multiple tokens for agents at the same step, instead of multiple tokens for an agent). My overall idea is that multi-step prediction + recending horizong control is not surprising. In Waymo Sim Agent benchmark [5] and Waymax paper, using receiding horizon control on the \"one-shot\" model is a standard practice.\n2. 
The combination of hindsight replay and a prioritized replay buffer is promising, but it is not surprising and its benefits are expected.\n3. Overall, my concern is that the paper lacks novelty. I personally do not favor a paper that puts a bunch of existing practices together and claims improved scores, without an extensive study of why it works and what insights we can learn.\n\n\n[1] MotionLM: Multi-Agent Motion Forecasting as Language Modeling\n\n[2] KiGRAS: Kinematic-Driven Generative Model for Realistic Agent Simulation\n\n[3] SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction\n\n[4] Trajeglish: Traffic Modeling as Next-Token Prediction\n\n[5] The Waymo Open Sim Agents Challenge" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. More justification for the claimed performance on collision rate and other metrics, invariant of reward engineering (see above)\n2. A more controlled study on sample efficiency (see above)\n3. Do the authors have more information to add on the overall novelty of the approach?\n4. Could there be another set of metrics, preferably commonly used ones, that can further help evaluate all the listed methods?\n\nTechnicality:\n1. What specific information does Figure 3 intend to show? I find the related discussion insufficient.\n2. Table 3 could be cleaner with a better caption and bolding." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The introduction of multi-token prediction in a decision transformer framework is interesting and may help with the real-time performance of the algorithm.\n2. The writing is easy to follow, and the authors’ method is clearly explained.\n3. Experiments include representative SOTA methods and ablation studies demonstrating the necessity of individual components in open- and closed-loop settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes SimDT, a decision transformer architecture for autonomous driving. The proposed method leverages prioritized experience replay for efficient learning. It also combats the distribution shift problem in the RL setup. The results show a big improvement over SOTA on collision rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Overall lack of novelty. There are few novel components introduced in the paper. Multi-token prediction has been explored in NLP and RL; PER is classical and nearly 10 years old; the novelty in combating distribution shift in imitation learning is also unclear.\n2. 
Flaw in experiment design: since the authors’ main argument is that their proposed method has the lowest collision rate, is it possible that this simply comes from the fact that they assigned collisions a very high penalty in the RL? According to Equation 5, R_{overlap} = -10 in the method. I don't see any related experiment or discussion to remove this doubt.\n3. No significant improvement overall compared to SOTA: given the unaddressed flaw mentioned above, plus the fact that SimDT cannot consistently outperform SOTA on most if not all of the metrics, I think it is valid to suspect that even the low collision rate of SimDT might not have come from the robustness of the algorithm itself, but simply from reward engineering.\n4. Lack of experiments for “sample-efficient”: I think this part of the title requires a controlled study (fixed amount of data or training FLOPs) to provide empirical results that justify it." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What are the variables $a$, $s$, $g$, and $\pi$? It is not clear if they are scalars, vectors, matrices, or function mappings.\n2. Where are the loss functions L_a and L_ma used?\n3. Where is R_{imitation} used? I do not see it in Algorithm 1.\n4. How did the authors arrive at the rewards of off-road = -2 and overlap = -10? \n5. How do the authors decide when to switch from offline learning to online learning in Algorithm 1?\n6. How are $\alpha$ and $\beta$ picked in Eq 2?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses an important problem in autonomous driving.\n2. All the experiments are conducted on the real-world Waymo dataset. \n3. The authors show ablation studies to motivate their proposed improvements of the multi-token decision transformer, imitative RL pipeline, and prioritized experience replay." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses an important problem in self-driving: generalization to the test-data distribution. The authors suggest that current methods are trained in open-loop scenarios and fail to generalize to closed-loop scenarios. In order to address this problem, the authors propose three improvements: \n1. A multi-token decision transformer. \n2. An online reinforcement learning approach which transitions from offline training to online training to allow exploration of new scenarios. \n3. A new scheme for sampling from the replay buffer to prioritize scenarios where their policy is not performing well. \n\nThe authors validate and demonstrate the effectiveness of their approach through experiments on real-world datasets and ablation studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The contributions seem weak, and the baselines are significantly outdated compared with the latest methods. \n - The authors compare against methods like DQN (Mnih, 2013) and BC (Argall, 2009). These are very old methods.\n - There have been many newer RL algorithms like Rainbow DQN, TD3BC, CQL, and AWAC. \n - Many transformer-based approaches like Point Transformer V3 Extreme, MotionTransformer, etc. \n\n*Suggestion*: Please add the latest baselines. Baselines from the 2024 Waymo Open Dataset Challenge are a good start.\n\n2. Although the paper is well-written, it lacks technical rigor and is hard to follow.\n - What exact problem are the authors solving? From my understanding, the problem is vaguely introduced only in the introduction. \n - The paper does not clearly explain where and how current methods fail.\n - Fig 1 is unclear. The purpose of the outer black lines is not clear. I assume the blue lines are the new trajectory sampled by the authors' method. \n\n*Suggestion*: Please add a problem formulation section. Give some examples of how single-token Decision Transformers fail. \n\n3. The method seems credible, but it is heuristically put together. \n - \"The overall online imitative reinforcement pipeline is essential to achieve the greater data-distributed policy\" How does the authors' method lead to a greater data-distributed policy? \n - R_{imitation} is not clearly explained. \n - Where is R_{imitation} used? I do not see it in Algorithm 1.\n - Switching from offline to online learning seems to have been heuristically chosen. The motivation behind the 0.5 ∗ num scenarios is unclear. \n\n*Suggestion*: I suggest that the authors rewrite the method section to add technical rigor. Each design decision needs to be clearly explained. \n \n4. The math is not clearly and rigorously defined:\n - What are the variables $a$, $s$, $g$, and $\pi$? Whether they are scalars, vectors, matrices, or function mappings is unclear.\n - It is not clear where the loss functions L_a and L_ma are used. \n \nMy recommended score for the paper is based on the lack of up-to-date baselines and technical rigor. In my opinion, the paper needs a significant amount of work to be accepted." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See my questions in \"Major comments\" above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Significance: the idea of using RL to improve transfer to closed-loop settings is innovative for improving sim agents." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents SimDT, a reinforcement learning framework for sequence modeling in interactive driving scenarios. 
The authors finetune a policy learned from the offline Waymo Open Motion Data using reinforcement learning in the Waymax simulator, using penalties for collisions and going off the road. They evaluate their model on the Waymo Open Sim Agents Challenge (WOSAC)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the concept of using reinforcement learning (RL) to improve transfer in closed-loop settings is innovative in the field of driving, the results presented in this paper are unconvincing. Additionally, the paper includes several unsupported and potentially incorrect claims. The following issues need to be addressed to improve the validity and contributions of this work.\n\n**Major comments** (in order of importance)\n1. Unsupported claims on performance gains. In the closed-loop evaluation, the authors claim that SimDT improves upon DQN by 45.2% in Off-Road Rate and Collision Rate and achieves a 41% improvement over a Behavior Cloning (BC) model. However, these performance improvements cannot be found in Table 1, and the actual improvements observed in Table 1 are much more modest (e.g., ~0.2% for Off-Road Rate compared to DQN, and about 2% for Collision Rate over BC). Misreporting these performance gains in the abstract and main text overstates SimDT’s effectiveness.\n2. Lack of comparison with competitive baselines. A meaningful benchmark for SimDT would include a comparison to the Waymo Open Sim Agents Challenge (WOSAC) leaderboard (https://waymo.com/open/challenges/2024/sim-agents/), which includes the state-of-the-art for closed-loop agent realism and performance on the Waymo Open Motion Dataset. Evaluating SimDT against these established models would provide a clearer understanding of its strengths and limitations relative to current state-of-the-art baselines (as opposed to BC-SAC, which is not SOTA).\n3. Missing information on dataset and evaluation. Tables 1 and 2 lack critical details: there is no information about the number of scenes trained and evaluated on; what percentage of the scenarios is used in practice? This makes it hard to interpret the results.\n4. Misinterpretation of the route progress metric. The authors suggest that SimDT’s route progress ratio of 105.63% demonstrates the discovery of more efficient routes. However, a ratio above 100% does not necessarily mean a more efficient route; rather, it may simply indicate that the vehicle overshot the destination (e.g. by driving faster than the logged trajectory) or took a longer path. This metric interpretation, as outlined in the Waymax paper (https://arxiv.org/abs/2310.08710; page 5, Section 3.4), does not support the authors' conclusion and could be misleading to readers.\n5. Slightly misleading comparison to expert performance. The authors claim that SimDT’s safety metrics are comparable to those of expert demonstrations, with Collision Rates \"within the same magnitude\" as expert results. However, the expert Off-Road and Collision Rates are significantly lower at 0.41% and 0.67%, respectively, compared to SimDT’s 3.52% and 2.69%. These differences should be put into context, as small percentage differences can have large practical impacts on safety in driving.\n6. Claims on safety and ADE without evidence. The claim that SimDT's focus on safety and kinematic feasibility leads to a cautious driving style with a slightly higher average displacement error (lines 367-368) lacks empirical support. \n7. 
Claims of sample efficiency without supporting information. Although the method is described as \"sample-efficient,\" no information is provided about the training dataset size, RL training time, or computational resources. These details are important for substantiating claims of efficiency and should be included.\n\n**Minor comments** (that did not impact my score)\n- Line 477: \"SmiDT\" should be corrected to \"SimDT.\"\n- “Open-loop” is commonly used to describe settings where no feedback is provided, not specifically related to the behavior of other agents. I would suggest clarifying this to avoid misunderstanding." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Are the model's output actions smooth during the closed-loop simulation? Why did you choose to supervise the actions using inverse dynamics, which differs from the commonly used waypoint or trajectory-level planning?\n- Does the design of the reward influence performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper makes it easy for readers to grasp the main idea.\n- The experiments are conducted in a closed-loop simulation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "SimDT aims to address the distributional shift problem in closed-loop autonomous driving using a multi-token decision transformer. The paper proposes an online imitative learning pipeline and prioritized experience replay. The method is tested in both open-loop and closed-loop settings."
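Several of the reviews above treat prioritized experience replay (PER) as a classical, well-understood component. For readers less familiar with it, below is a minimal proportional-prioritization sketch in the spirit of Schaul et al. (2015); the class name, the TD-error-based priorities, and all hyperparameter values are illustrative assumptions, not SimDT's actual implementation, which is not specified in the reviews.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch (after Schaul et al., 2015).
    Illustrative only; not the paper's implementation."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha  # how strongly priorities skew sampling (0 = uniform)
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New samples get the current max priority so each is seen at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[: len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # Importance-sampling weights correct the bias that prioritization adds.
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps

if __name__ == "__main__":
    buf = PrioritizedReplayBuffer(capacity=128)
    for t in range(32):
        buf.add((f"state_{t}", "action", 0.0))
    batch, idx, w = buf.sample(batch_size=8)
    buf.update_priorities(idx, td_errors=np.random.rand(8))
```

The reviewer remark that "PER is classical and nearly 10 years old" refers exactly to this mechanism; SimDT's contribution would therefore rest on how priorities are assigned to driving scenarios, not on the sampling scheme itself.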
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The claim ‘learning based agents face significant challenges when transferring knowledge from open-loop to closed-loop environment’ remains questionable to me. Since many recent advances in decision making follow the fashion of learning from offline dataset, and achieves superior performance in closed-loop control setting. In the experiment (table 1), the BC style method also achieves similar results with the proposed method.\n\n\n2. The proposed multi-token prediction mechanism looks quite like action chunking proposed in recent works [1], which has been proved to be useful in many scenarios. Maybe some discussion and comparison are needed.\n\n\n3. The baselines selection in the main experiments is not convincing (table 1&2). I think the baselines are too old (e.g., DQN, BC). Since the proposed method SimDT is based on DT style policy, I think it’s unfair to compare with some methods more than 10 years ago. Maybe other baselines like OnlineDT, or other recent works are needed as baselines.\n\n\n4. I think the performance is not very strong. In the main experiments, BC + Bicycle(D) in table 1 and BC-SAC in table 2 seems to achieve comparable results with the proposed method. \n\n\n5. Can you further explain on the metric “route progress ratio”? In Appendix A, it “calculates the proportion of the planned route completed by the vehicle”. Why it may achieve over 100%?\n\n\n\n[1] Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well written, the figures are nice. The ablation study is comprehensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed SimDT, a DT-style method to combine imitation learning and online RL for driving. The main motivation is to handling the distribution shift problem in pure IL setting. The paper conduct experiments and ablation study to prove the efficiency of their method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The major concerns are the novelty and the performance of the proposed method. The authors proposed to combine online and offline RL training with decision transformer, which seems to be a quite straightforward combination of DT and online DT. Another drawback is that, the experiments results are not very strong and seems to be comparable with simple baselines. More recent baselines are missing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nzhou2024sampleefficient,\ntitle={Sample-efficient Imitative Multi-token Decision Transformer for Real-world Driving},\nauthor={Hang Zhou and Yihao Qin and Dan Xu and Yiding Ji},\nyear={2024},\nurl={https://openreview.net/forum?id=324fOKW1wO}\n}" }, "abstract": { "value": "Recent advancements in autonomous driving technologies involve the capability to effectively process and learn from extensive real-world driving data. Current imitation learning and offline reinforcement learning methods have shown remarkable promise in autonomous systems, harnessing the power of offline datasets to make informed decisions in open-loop (non-reactive agents) settings. However, learning-based agents face significant challenges when transferring knowledge from open-loop to closed-loop (reactive agents) environment. The performance is significantly impacted by data distribution shift, sample efficiency, the complexity of uncovering hidden world models and physics. To address these issues, we propose Sample-efficient Imitative Multi-token Decision Transformer (SimDT). SimDT introduces multi-token prediction, online imitative learning pipeline and prioritized experience replay to sequence-modelling reinforcement learning. The performance is evaluated through empirical experiments and results exceed popular imitation and reinforcement learning algorithms both in open-loop and closed-loop settings on Waymax benchmark. SimDT exhibits 41\\% reduction in collision rate and 18\\% improvement in reaching the destination compared with the baseline method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Hang_Zhou21", "~Yihao_Qin1", "~Dan_Xu4", "~Yiding_Ji1" ] }, "authors": { "value": [ "Hang Zhou", "Yihao Qin", "Dan Xu", "Yiding Ji" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Reinforcement Learning", "Motion Planning", "Autonomous Driving" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "zhou|sampleefficient_imitative_multitoken_decision_transformer_for_realworld_driving" }, "pdf": { "value": "/pdf/b66fbd67cfcd4e7f52c385e623c11e8b1e59e5d0.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e7a46dcaad4aca2e305cee6c1f720ed003252b59.zip" }, "title": { "value": "Sample-efficient Imitative Multi-token Decision Transformer for Real-world Driving" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
328vch6tRs
From Tokens to Words: On the Inner Lexicon of LLMs
main
Active
Detokenization;Large Language Models;LLM;Byte-Pair Encoding;BPE;Subword Tokens;Word Reconstruction;Latent Lexicon;Inner Dictionary;Token Aggregation;Feed-Forward Networks;FFNs;Out-of-Vocabulary Words;Efficiency;Tokenization;Language Model Optimization
interpretability and explainable AI
3;5;6;8
4;4;3;4
2;3;2;4
2;2;3;3
2;2;3;4
5.5
3.75
2.75
2.5
2.75
-0.160128
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "If the model wants to generate some multitoken word that it represents in its 'internal dictionary' is it \"planning\" multiple tokens ahead? Why or why not?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper answers particular unanswered questions surrounding \"detokenization\", which has been repeatedly observed and discussed without being properly studied. These are important for observations around, for example, stages of inference in language models.\n\nInterpretability results on early layers of LMs are often lacking, as vocab projections are much easier to perform at later layers. This work provides interesting and convincing results for one role early layers take on in these models, which is indeed different from the roles of later layers.\n\nThe vocab expansion experiments are a nice proof of concept, and could be expanded on in the future to decrease inference times.\n\nThe results on typos are interesting and to my knowledge, novel" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the process in which models transform tokens, which often split long words into subwords (e.g., \"un\" \"h\" \"appiness\"), into higher level representations for the full word through \"detokenization\". Detokenization has been observed in LMs before, but has not been directly studied extensively. This work shows that LMs can recognize when a word is part of a larger word, and show that early attention fuses subwords together (in the last token of the word), and uses early MLP layers to then recall the full word from multiple subwords in an \"internal dictionary\" (e.g., representing \"unhappiness\" as a single vector internally even though it is not in the tokenizer). The authors then show that this can be used to expand a model's tokenizer by including the hidden 'internal dictionary' representation as an input token. This works to some extent.\n\nOverall, this paper enhances our understanding of early layer processing in language models, and provides a path towards enhancing models to reduce inference time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The evidence for a third stage of processing in Figure 2b is a little sparse. These results are only for one model, and the degree to which accuracy drops is not substantial enough to obviously be due to a difference in processing altogether. These results could be made stronger by including results for more models. 
As a motivating example, it is fine, but perhaps isn't the best use of that space if this point can't be made more strongly.\n\nTypos:\n\nL370: \"form\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How were tokens assembled into nonwords in sec 3? I am missing detail here which could be useful in understanding the method. I also do not understand what it means to \"fit\" a KNN classifier (which is non-parametric) -- were there representations used which were different from those taken from the model hidden states?\n2. There was a claim made that the proposed method in section 6 can improve inference-time costs, though I cannot find any experiments or numbers for this in the paper. Can the authors point me to or provide any information about this? Thank you." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper analyzes the process of detokenization across transformer network layers via a series of targeted experiments. It builds an intuitive understanding that agrees with many prior works in layer-based analysis.\n\n2. The paper proposes an interesting method for training-free expansion of the model vocabulary by leveraging the insights into internal word representations. This method is shown to be effective in limited experiments. See below in \"weaknesses\" for further thoughts on this.\n\n3. The writing is clear, but sometimes too abstract (see weakness 5).\n\nThis paper shows very solid work and I greatly appreciate the thorough breadth of exploration, though it could possibly be more effective to focus on fewer areas. I want to emphasize that I enjoyed reading the paper and believe it will be strong after some revision, including reworking the claims and focusing more on the novel contributions which are begun later in the paper. I believe it would be more impactful to explore sec 6 in more depth; see weakness 4 below." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyzes the process of latent detokenization inside the transformer-based LM forward pass, as it occurs across network layers. It shows that models are able to recognize words from pretraining even when they are noised with slight token variations or artificially split across multiple tokens. These experiments are different from earlier works, but ultimately show very similar findings about hierarchical processing in transformers. Using these findings, a novel method is briefly introduced to leverage the internal states of merged tokens to automatically expand the token vocabulary, which can hypothetically improve inference costs with fewer lookups. This method appears initially effective, but could be explored more." 
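On the question above about "fitting" a KNN classifier: in the usual probing setup, fitting simply stores labeled hidden-state vectors, and prediction is a majority vote over the k nearest stored neighbors, which is why the probe is non-parametric yet still has a fit step in the scikit-learn sense. Below is a self-contained sketch with synthetic stand-in data; the paper's actual layer, sampling, and distance choices are not given here, so every concrete value is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Stand-in data: in the probing setup these would be hidden-state vectors
# extracted from one layer for real words (label 1) and nonwords (label 0).
d_model = 64
X = np.vstack([rng.normal(0.5, 1.0, (500, d_model)),    # "word" states
               rng.normal(-0.5, 1.0, (500, d_model))])  # "nonword" states
y = np.array([1] * 500 + [0] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# fit() memorizes the training vectors; predict() votes over nearest neighbors.
probe = KNeighborsClassifier(n_neighbors=5)
probe.fit(X_tr, y_tr)
print("held-out word/nonword accuracy:", probe.score(X_te, y_te))
```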
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The concept of an inner lexicon is interesting, but not novel as is claimed in this work. The idea follows implicitly from prior work in the memorization of training data, and explicitly in works about tokenization, such as the introduction of BPE (which is discussed greatly in this paper). It is the stated goal of subword tokenizers to enable learning a vocabulary of words and concepts which is larger than the vocabulary of concrete tokens through the process of token combination. It is nice to see these findings reproduced and analyzed, but they are not new.\n\n2. The experiment in section 3, which motivates the idea of an inner lexicon, is not very strongly designed. Why are nonwords created by randomizing tokens, and not by some other method on the morphological level or otherwise something more linguistically motivated? Resulting nonwords do not seem to follow English conventional morphology (eg. the nonword \"chha\") and this could make it trivial to distinguish words from nonwords. Prior work has shown LLM sensitivity to word frequency in training corpora, and this experiment seems to reproduce those findings. This experiment seems to me to show that LLMs can distinguish easy cases such as \"chha\" which are very dissimilar to real words, and predictably struggles with more difficult cases that more closely resemble real words (see appendix) but there doesn't seem to be strong evidence that the LLM representation is doing more than locating words on a gradient based on their prior likelihood of appearing in the pretraining corpus. This fact is fairly well established at this point.\n\n3. The experiments in the paper seem mostly sound and reasonable, but their novelty is overstated. Several of the earlier experiments in particular build on each other to show that early and intermediate layers in the network are responsible for aggregating and disambiguating word representations (sec 4 and 5). However, these findings may be seen to be subsumed by many prior works in the analysis of syntactic and semantic composition of tokens across transformer layers (see section 4 in [1] for many citations).\n\n4. The paper may have been too ambitious in scope. The first several experiments were good reproductions of findings. The last experiment was novel to me, and it would have been interesting to expand on it more deeply. However, it did not require many of the earlier experiments in order to understand it, which took up most of the room in the paper. Other reviewers may have different opinions, but mine is that the paper would be more valuable if it explored the final research question more deeply, and provided more concrete findings for it. For example, can we estimate a size/contents of the inner lexicon? Does this lexicon scale with model capacity and/or training size? Can we provide some guarantees or estimates about the boundaries of the method of finetuning-free vocabulary expansion? For what kinds of words is this method effective and when is it ineffective?\n\n5. There were many smaller experiments given in the paper, and this resulted in important implementation details being often omitted. 
For example, experiments often hinge on model memory of tokens from training, and the natural distributions of those tokens in the corpora, but details about how words/tokens were sampled in the tests (such as construction of nonwords) were not often given in enough detail to reproduce experiments. I would expect there to be significant influence of such distributions on test outcomes, so these details are important.\n\n\n[1] Anna Rogers, Olga Kovaleva, Anna Rumshisky; A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 2020." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* The experiments in 4.1 focus on logit lens. What about cosine similarity or more direct measures of similarity?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper addresses a crucial question: how can language models construct symbolic representations of entire words when their input comes from tokenizers that often fragment words in ways that disregard their morphological structure? Specifically, the authors investigate whether LMs internally form representations of morphological units that help bridge the gap between the tokenized input and the naturally holistic nature of words in language. Through experiments, the paper presents some evidence that whole-word representations emerge within the model’s hidden states, even when it processes fragmented word tokens. Additionally, the writing is clear, and the experiments are easy to replicate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors investigate how LMs internally reconstruct word-level representations from sub-word tokens, a process they term \"detokenization\". They provide evidence that LMs can inherently combine sub-words into hidden representations which can be mapped into coherent words, even for out-of-vocabulary items, across the early to middle model layers. By probing LMs on both known words and artificial nonwords, they show that the model forms distinct representations for these categories, suggesting an \"inner lexicon\" that extends beyond tokenized inputs. The findings reveal that this detokenization mechanism leverages feedforward layers and attention patterns to generate whole-word representations, which could, in theory, improve vocabulary flexibility without finetuning (though this is not shown in practice)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe there is a disparity between the paper’s claims and the experimental evidence provided to support them. 
Specifically, some of the experiments lend themselves to alternative interpretations, which could be clarified with additional baselines or experiments. The paper's claim is that models come up with an \"internal lexicon\" that creates hidden representations of \"virtual\" words, even when fed, e.g., word pieces as input. This is a claim on the computation carried out by the model, i.e., it is implied that there are some modules whose explicit computation is forming this internal lexicon. I am not sure that the experiments provide sufficient evidence for this claim:\n\n* First, the \"motivating experiment\" in Section 3 lacks sufficient controls. The authors demonstrate that there is a linear separation in the hidden state between the representations of actual multi-token words and fictional ones created by randomly mixing word pieces. However, this separation could simply reflect the model's ability to distinguish between linguistically valid English morphology and nonconforming sequences, rather than providing evidence of \"internal detokenization.\" For instance, an alternative hypothesis is that the model has learned distributional cues—such as suffixes like \"ing\" rarely appearing at the beginning of a word—which cause out-of-distribution effects in the hidden states when encountering atypical token sequences.\n\n* In Section 4.1, the authors hypothesize that \"if the model performs detokenization, it will represent the last token of a word similarly to the original word token.\" However, even if such similarity is observed, it could be attributed to the distributional properties of language rather than any explicit \"detokenization\" process. For instance, in the example provided in the paper where \"cats\" is split into \"ca\" and \"ts,\" it is plausible that the pretraining corpus contains instances where this split occurs unnaturally, such as in URLs like \"catsanddogs.com\" (an actual website) or in cases with typos. Such occurrences might push the representation of \"ca ts\" closer to that of \"cats\" without requiring an explicit detokenization step. Furthermore, it is known that such similarities exist also in word2vec-era methods like GloVe, and it is difficult to argue that any explicit detokenization happens there. \n\n* In Section 4.2, the authors feed the hidden state of the last token of a multi-token word into the model and prompt it to repeat the word. Instances where the model accurately reproduces the entire word are taken as evidence that it has stored the multi-token word in an \"internal lexicon.\" However, a key baseline is missing: including phrases that are not single words, such as \"repeat this word: rainy day.\" The observed results could simply reflect the model's tendency to form contextualized representations that transfer information across tokens, rather than indicating an internalized whole-word representation.\n\n* Finally, the paper’s closing sections aim to illuminate the model's internal computations and the supposed formation of an internal lexicon. While the results provide some evidence of contextualization in the feedforward layers, it's unclear to me whether they genuinely support the existence of an internal detokenization process. Intervention-based experiments could strengthen this claim. For example, could we identify a subset of parameters where ablation specifically impairs performance on multi-token words without affecting single-token words? 
Or could linear concept erasure techniques reveal a subspace whose neutralization removes all distinctions between multi-token and single-token representations?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "For section 5, the feedforward mechanism, why are only FFN outputs measured, please? Does it make sense to also measure the residual part?\nFor section 6:\n- For the original vocabulary tokens, do all models perform less well than the model with the original vocabulary? Are there examples to illustrate this and some hints as to where the models fall short?\n- What would be the accuracy on the full set of newly added tokens for the model with the original vocabulary?\n- With such techniques, what would be an estimated inference speed gain? For the input embedding as well as for the output embedding?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents a significant amount of content and materials while being easy to follow. The paper appropriately incorporates the related works so that it is relatively straightforward to situate the paper in the literature. Concretely, I think the paper has made the following contributions:\n- Through techniques such as logit lens and patchscope, the paper convincingly demonstrates where the model performs the detokenization process by clearly presenting such studies.\n- The paper shows that the FFN serves to combine subword information in section 5.\n- In the final section, the paper shows how this understanding can help transformer decoding in practice. The paper adds word embeddings to both the input matrix and the output matrix and shows that the model can accelerate inference while maintaining good performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies how subwords are detokenized through transformers into the original word for further processing and understanding (a preliminary study in this work shows that the LLM is able to distinguish between words and non-words). In this research direction, the paper makes the following contributions:\n- The paper shows that the detokenization process happens in the beginning-to-middle layers, using techniques such as logit lens (single token) and patchscope (multi-token)\n- The paper then carries out experiments suggesting that the detokenization happens within FFN layers\n- Leveraging the above results, the paper shows that transformer efficiency can be enhanced by introducing \"decodable\" token embeddings; the paper examines both input embeddings and output embeddings. 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper presents a complete study (with no missing component) in the detokenization study, it feels that the paper can still be further enhanced with some more in-depth studies, some of the questions I have put in the questions section but in general:\n- The cumulative curve shows that FFN indeed contributes to detokenization. What about other components? Are there any hints/patterns that the authors observe in the detokenization process (e.g. what words are first detokenized)?\n- Cumulative rate saturates at around 0.7 shown in the figure. What about the rest 30%? Are these limitations for the measured model? Do better models (e.g. llama3) perform better at these?\n- More details will help section 6 and I list some of them in the questions section. I think these are just some of the questions that a common reader would have after reading the paper. I think the results in this section may be of practical importance and deserve to be enhanced with more empirical results." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We provide evidence that LLMs use an inner lexicon to reconstruct words from sub-word tokens. We thoroughly analyze this detokenization process to understand how LLMs manage words internally, and demonstrate the potential gains in efficiency." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024from,\ntitle={From Tokens to Words: On the Inner Lexicon of {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=328vch6tRs},\nnote={under review}\n}" }, "abstract": { "value": "Natural language is composed of words, but modern LLMs process *sub-words* as input. A natural question raised by this discrepancy is whether LLMs encode words internally, and if so how. We present evidence that LLMs engage in an intrinsic detokenization process, where sub-word sequences are combined into coherent word representations. Our experiments show that this process takes place primarily within the early and middle layers of the model. They also show that it is robust to non-morphemic splits, typos and perhaps importantly---to out-of-vocabulary words: when feeding the inner representation of such words to the model as input vectors, it can \"understand\" them despite never seeing them during training. Our findings suggest that LLMs maintain a latent vocabulary beyond the tokenizer's scope. These insights provide a practical, finetuning-free application for expanding the vocabulary of pre-trained models. By enabling the addition of new vocabulary words, we reduce input length and inference iterations, which reduces both space and model latency, with little to no loss in model accuracy." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Detokenization", "Large Language Models", "LLM", "Byte-Pair Encoding", "BPE", "Subword Tokens", "Word Reconstruction", "Latent Lexicon", "Inner Dictionary", "Token Aggregation", "Feed-Forward Networks", "FFNs", "Out-of-Vocabulary Words", "Efficiency", "Tokenization", "Language Model Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a9c853f5e18cf12e414664c9e64bee0df6d3d1f6.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "From Tokens to Words: On the Inner Lexicon of LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
33P4evE2ej
Stand on Two Shoulders: Dynamically Merging Tokens from General and Medical Experts
main
Active
Visual Adaptation;Medical Representation Learning
transfer learning, meta learning, and lifelong learning
3;3;5;6
5;4;4;4
3;3;4;3
2;2;3;2
3;2;3;3
4.25
4.25
3.25
2.25
2.75
-0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Ablation on Gated Mixture-of-Experts: Is the ablation study on the gating mechanism performed using the same model with gating modifications, or are separate models fine-tuned for each gating variation?\n\nComparison with Natural Baselines: Why were simpler baselines—such as direct fine-tuning of the general or medical domain ViT using PEFT—not included? If DynaMer does not outperform these baselines, its complex design may not be justified.\n\nExplanation of Baseline Methods: Baselines such as VPT, GaPT, and LSPT are referenced, but there is no description of their differences. A simple explanation and comparison with DynaMer would enhance clarity and contextualize the model’s improvements." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Innovative Architecture: The gated MoE Adapter is a novel approach to merging features from domain-specific and general-purpose ViTs, potentially improving adaptation to complex medical tasks.\n\nEffective on Benchmark Tasks: The model demonstrates state-of-the-art performance on Med-VTAB, particularly excelling in challenging medical scenarios with limited data.\n\nComprehensive Experiments: Extensive benchmarking and ablation studies were conducted, allowing for a detailed understanding of the architecture's components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the DynaMer Adapter, an architecture that merges tokens from both general and medical pre-trained Vision Transformers (ViTs) to improve performance on medical imaging tasks. The DynaMer model leverages a Gated Mixture-of-Experts (MoE) Adapter for dynamically selecting relevant features and employs a layer-wise skipping router to optimize computational resources. Experimental results on the Med-VTAB benchmark indicate that DynaMer performs well, especially on few-shot and out-of-distribution tasks, suggesting its potential in specialized medical image applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Efficiency Focus Unsubstantiated: Despite claims of computational efficiency, there is no direct comparison of inference or training time; only the parameter count is reported. Given that two full image backbones are used, inference time could increase substantially, undermining the claim of efficiency.\n\nMarginal Performance Gain: The architecture, while sophisticated, yields limited improvements, making its complexity appear disproportionate to the performance gains observed.\n\nLimited Baseline Comparison: Key baseline methods, such as directly fine-tuning general domain or medical-specific ViTs with Parameter-Efficient Fine-Tuning (PEFT) techniques, are not included. 
This omission raises concerns about the method’s effectiveness relative to simpler, more straightforward approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "There are some points unclear to the Reviewer:\n\n(i) In equation (4), which exact outputs does the TopKIndex take from $R_S(.)$ to choose a token for MoE? Is it based on the norm of the feature outputs or on some activation functions?\n\n(ii) Intuitively, the design of the Skipping Router (SR) is not yet optimal. For example, there is no conditional information for *SR* to guide the model correctly on which tokens should be used in MoE and which ones should be passed to the next layer. The information to update *SR* can be derived from gradients returned by the loss function, but the order of tokens used by *SR* has not yet been considered. So, do the authors think integrating a **differentiable TopKIndex** would help improve accuracy?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The Reviewer sees the following strengths:\n\n(a) The authors applied a **layer-wise** MoE adapter to merge features from general and medical ViT models, which is different from prior work based on block features of ViT.\n\n(b) To further reduce computational costs, they proposed a *skipping layer* to select the top relevant tokens used for MoE while the remaining ones are fed into the next layers. Furthermore, the idea of using the *gating network* to combine original tokens and the output after MoE, making model learning stable, is also interesting.\n\n(c) The experiments are diverse, covering several datasets with detailed ablation studies to support the proposed components in the paper (Gated Mixture-of-Experts, Gating Dimension, Layer-wise Skipping, etc.)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new mixture-of-experts mechanism to combine pre-trained general and medical ViT models. The MoE algorithm includes key steps: (a) incorporating a Gated Mixture-of-Experts to combine original tokens and tokens after MoE layers; (b) using a Skipping Router to select the top-k relevant tokens for MoE components; (c) adapting MoE at each ViT layer as an adapter method. \n\nThe authors conduct a wide range of experiments on general and medical downstream tasks with fine-tuning. The paper shows improved results on several datasets and outperforms several adapter- and MoE-based approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the method is interesting and novel, the Reviewer is concerned about whether the experimental improvements are significant. 
For example:\nIn Tables 1, 2, and 3, the **DynaMer Adapter** outperforms other MoE baselines by only a *slight margin* (ranging from 0.5% to 1%), while its total parameter count is higher than the other two, e.g., 1.17X that of Adapter. \n\nIn the out-of-domain prediction task (Table 9-b), where such tasks usually show a large gap between baselines, the *DynaMer Adapter* surpasses other MoE approaches by only a margin similar to the fine-tuning cases. Therefore, it seems to the Reviewer that most MoE baselines have similar performance, so the *DynaMer Adapter*'s contributions are not really clear.\n\nThe Reviewer would suggest the authors conduct studies in more challenging settings, e.g., zero-shot or few-shot with linear probing, to highlight the benefits of the DynaMer Adapter. Given these, the Reviewer would revise the rating." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I am open to increasing my scores if the authors can address my comments above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The originality of the work is commendable. The authors propose a new solution to an existing problem. However, the limitations of prior work are not clearly presented, which the authors could further elaborate on." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A single model optimized for general tasks often falls short in domain-specific applications. This paper presents the DynaMer Adapter, an architecture designed to dynamically merge tokens from both general and medical pre-trained models, thereby enhancing performance in downstream medical imaging tasks. It features a Gated Mixture-of-Experts (MoE) Adapter, which intelligently prioritizes relevant features for specific medical applications. Additionally, the authors introduce a layer-wise skipping router within the architecture. Evaluation results on several benchmarks indicate that DynaMer achieves outstanding performance, particularly in patient out-of-distribution scenarios and tasks with limited sample sizes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of the proposed method is unclear.\n\n1.1 The distinctions between this approach and existing methods such as MOE, MOF, GMOE, and Adapter need to be better articulated.\nAdditionally, some relevant works have not been discussed.\nRegarding Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs (https://arxiv.org/abs/2406.16860):\n\n1.2 The proposed method appears to be similar to concepts presented in the paper \nA Large-Scale Medical Visual Task Adaptation Benchmark, 2024. https://arxiv.org/abs/2404.12876\nBoth utilize gated MOE; what are the specific differences?\n\n2. 
Furthermore, the performance gains of the proposed method are limited. \n\n2.1 The improvements compared to existing approaches such as MOE, MOF, GMOE, and Adapter are minimal. As shown in Figure 1, the proposed method only achieves about a 0.5-point improvement over MOF. How can it be claimed to be effective in this field? The authors are encouraged to clarify the significance of the performance gains in relation to existing methods.\n\n2.2 The effectiveness of the layer-wise skipping routers is difficult to verify in this paper. How can the authors demonstrate the effectiveness of this approach?\n\n3. The proposed method is quite close to the following work; however, the authors have not addressed the differences.\n\nOutrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, https://openreview.net/pdf?id=B1ckMDqlg, ICLR 2017." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please provide a computational cost analysis (e.g., FLOPs/GMACs)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It features a Gated Mixture-of-Experts Adapter for prioritizing task-relevant features and a layer-wise skipping router for optimizing inference time. The DynaMer Adapter achieves state-of-the-art performance on the Med-VTAB benchmark, particularly in out-of-distribution patient settings and with limited samples. The paper demonstrates the potential for broader applicability of DynaMer's principles beyond medical imaging." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the DynaMer Adapter, a novel architecture that enhances Vision Transformers' adaptability for medical imaging tasks by merging tokens from general and medical pre-trained models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper introduces the DynaMer Adapter by leveraging the concept of the Mixture-of-Experts (MoE) at both the feature and token levels, it's crucial to articulate the specific innovations beyond the existing MoE framework. The paper would benefit from a more detailed discussion on how the DynaMer Adapter's approach differs from current state-of-the-art methods, including references to related work that showcases the incremental advancement. Regarding the Layer-wise Skipping Router, clarifying its mechanism as a token-wise selection process could enhance understanding and emphasize its role in improving computational efficiency.\n\n2. The paper's experimental section would be significantly strengthened by including comparisons that demonstrate the value of fusing general and medical pre-trained models over a task-specific, medically trained model. 
It's essential to show that the combined model not only adapts well but also surpasses the performance of a model trained solely on medical data. This could be achieved by designing experiments that benchmark the DynaMer Adapter against a medical model trained on the same tasks, highlighting the benefits of incorporating general domain knowledge." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In this work, we introduce the DynaMer Adapter, a novel architecture designed to Dynamically Merge tokens from general and medical pre-trained models, enhancing the adaptability of ViTs for medical imaging tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024stand,\ntitle={Stand on Two Shoulders: Dynamically Merging Tokens from General and Medical Experts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=33P4evE2ej},\nnote={under review}\n}" }, "abstract": { "value": "In the realm of medical image analysis, the transferability of pre-trained Vision Transformers (ViTs) to specialized medical tasks remains a significant challenge. Previous approaches focus on adapting a single model by introducing specialized learnable layers to the pre-trained model. However, a single model optimized for general tasks underperforms in domain-specific applications, while a single medical model, limited by its fundamentally inferior capabilities, is not robust enough for real-world adaptation. To address this, we introduce the DynaMer Adapter, a novel architecture designed to Dynamically Merge tokens from general and medical pre-trained models, enhancing the adaptability of ViTs for medical imaging tasks. DynaMer incorporates a Gated Mixture-of-Experts (MoE) Adapter, ensuring that the model ingeniously prioritizes relevant features for specific medical tasks. Additionally, we incorporate a layer-wise skipping router within the architecture, designed to adjust the number of input tokens efficiently, thereby optimizing inference time without compromising on model accuracy. Extensive evaluations on the Medical Visual Task Adaptation Benchmark (Med-VTAB) demonstrate that DynaMer achieves state-of-the-art performance, particularly excelling in patient out-of-distribution settings and tasks with only a few samples." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Visual Adaptation", "Medical Representation Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3de168807ce1ecaa0a2f33db6411d942bcd2d0be.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Stand on Two Shoulders: Dynamically Merging Tokens from General and Medical Experts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
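The exchanges above about Eq. (4)'s TopKIndex, the Skipping Router, and the gated merge are easier to picture with a generic sketch. The PyTorch code below is an assumed reconstruction, not the paper's actual design: the linear scoring router, the expert count, and the sigmoid output gate are all guesses made for illustration.

```python
import torch
import torch.nn as nn

class SkippingGatedMoE(nn.Module):
    """Generic sketch: score tokens, send only the top-k through expert
    branches, gate the merged result, and let the rest skip the block."""
    def __init__(self, dim: int, num_experts: int = 2, k: int = 16):
        super().__init__()
        self.router = nn.Linear(dim, 1)                 # per-token relevance score
        self.expert_gate = nn.Linear(dim, num_experts)  # mixing weights over experts
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts))
        self.out_gate = nn.Linear(dim, 1)  # blends expert output with the input token
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        scores = self.router(x).squeeze(-1)               # (batch, tokens)
        idx = scores.topk(self.k, dim=1).indices          # "TopKIndex": tokens for MoE
        gidx = idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        sel = torch.gather(x, 1, gidx)                    # (batch, k, dim)
        w = self.expert_gate(sel).softmax(dim=-1)         # (batch, k, num_experts)
        mix = sum(w[..., e:e + 1] * exp(sel) for e, exp in enumerate(self.experts))
        g = torch.sigmoid(self.out_gate(sel))             # gated residual merge
        out = x.clone()
        out.scatter_(1, gidx, g * mix + (1 - g) * sel)    # skipped tokens pass through
        return out

x = torch.randn(2, 64, 128)
print(SkippingGatedMoE(dim=128, k=16)(x).shape)  # torch.Size([2, 64, 128])
```

The hard `topk` here is precisely the non-differentiable selection step behind the reviewer's question about a differentiable TopKIndex: gradients reach the routed tokens through the gathered values, but never through the choice of indices itself.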
34SPQ6fbYM
The polytopal complex as a framework to analyze multilayer relu networks
main
Active
theory of deep learning + mlp + low dimension + polytopal complex
interpretability and explainable AI
3;3;3;5
4;2;3;3
2;2;3;3
2;2;2;2
2;3;2;3
3.5
3
2.5
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the bullet points in Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "•\tThe idea is new in that 1) using a set of polytopes to represent ReLU-based MLP for geometric understanding of layer compositions, and 2) using the polytopes to separate training or testing data points for final outputs.\n\n•\tThe visualization to align trained NNs with polytope representation is well-understood.\n\n•\tThe theoretical analysis is not limited to shallow alignment, but translating the properties of polytopes into behaviors of NN layers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyzes the ReLU-based MLP (piecewise linear activation functions) by viewing their layer representations as polytopes. Based on the analysis, an algorithm is proposed to decompose trained NN into polytope sets and then align them with the training/testing data to assess the performance, which seems to be a single-dimension regression error using MSE. The theoretical analysis specifically focuses on several properties of polytopes, aligning them with the behaviors of NNs. Four typical target functions with different structures and characteristics on polytope separations of input space are used for testing the proposed algorithm. This work is new to the best of the reviewer’s knowledge, but the reviewers have concerns regarding presentation quality and completeness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer has doubts about the motivation of this work considering the following:\n\n•\tWhat is the purpose of using polytope representation to analyze NNs? For example, the piecewise linear function can also lead to strong convexity.\n\n•\tIs the theory only for ReLU-based MLP? While piecewise linear is mentioned, only ReLU-alike activations (e.g., Leaky ReLU) can satisfy this property. If nonlinearity is gradually added, like ELU, is the theory generalizable?\n\n\nRegarding clarity and completeness of the work:\n\n•\tAt the beginning of the Introduction, while the example and Figure 1 catch the eye, the explanation is vague, e.g., what is the “symmetry of the data” and what is the difference between the right two plots so that you prefer the right one? \n\n•\t“Assess the network” seems to be the target, but it’s unclear what metrics are used to quantify which commonly focused capability of NNs.\n\n•\tThe algorithm and theoretical analysis mainly discuss the properties of polytopes without sufficient transitions and demonstrations of the NN representation.\n\n•\tFour typical target functions are used for testing, each with two inputs. 
A theoretical analysis may focus on the toy case and intuitive observation, but a natural question is how researchers can learn from or use it in further studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Line 235 (checking can further check if any derived vertex lies outside of the input cube) -- what does this mean?\n- Why do we need the validity checks in Section 3.2? Does the proposed algorithm not guarantee the validity of its results? If the validity checks fail, what do we do?\n- Line 429 (Balestriero & LeCun (2023) has some similarities to our algorithm but differs in scope) -- how is the scope different?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Interesting research direction; Figure 8 looks awesome." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an algorithm that decomposes the input space of a ReLU MLP into convex polytopes. This algorithm allows for analyzing such neural networks beyond validation points, including properties such as curvature, hyperplanes, and stars." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am not familiar with this line of research, so I lack the expertise to judge the novelty of this paper, and I apologize for potential misunderstandings in advance.\n\n1. Motivation:\n- This paper presents some cool results, but it is still not clear to me why the community would benefit from the polytopal analyses. Furthermore, all analyses are on toy problems that fit closed-form functions.\n- One good way to clear up this confusion would be to apply the proposed method on some real-world classifiers (such as ResNet for image classification, or some simple MLPs for various real-world smaller tasks), show that the polytopal analyses reveal properties of the learned neural network that could not be found with existing methods, and discuss how these properties affect real-world applications.\n\n2. Comparison with existing works:\n- Line 444 (Humayun et al. (2023) works only for two dimensional inputs) -- the experiments in this paper also consider only two dimensions, and Line 134 says \"we only investigate curvature only for the two-dimensional case.\"\n\n3. Other weaknesses:\n- Although Figure 8 is nice, it lacks a legend and axis labels.\n- Typo: Line 150 -- MLP->MPL.\n- Typo: Line 234 -- the final \"s\" in the word \"assess\" is missing, making it borderline NSFW ;)\n- Typo: Line 318 -- the the." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "# Clarification\n1. What did the authors mean in L369-370 (“Further, looking at … interpolating the data.”)?\n2. What did the authors mean by “until the training collapses” in L414?\n# Curiosity\n1. The idea behind motivation made me think about robustness. I know that currently the setting is closer to regression than classification. Have the authors thought about generalizing to classification? I think it would be interesting to see the relation between the robustness of an activation region and the number of samples it contains (as I mentioned in the Weaknesses when proposing a possible future direction)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper reads well. The flow between the sections and paragraphs is smooth. Some Figures need minor improvement for better clarity, but in overall they are well thought through. I particularly like the usage of level sets, as they make visualizing the three dimensional functions much clearer, and, as far as I know, it's not a very common approach in this field.\n2. I really enjoyed Section 3.2. It is absolutely necessary to validate the polyhedral complex obtained by our algorithms, yet, as far as I know, this is the first work that actually mentions the steps taken to ensure validity. This is a good step towards more trustworthy methodologies.\n3. Great analysis of the decomposition time in Figures 5 and 6. It is known that computing the polyhedral complex is tremendously computationally intensive, yet not many works provide detailed runtime analysis-only other work I know of that does something similar is the work of Serra et al. (2018), although their analysis is less detailed.\n4. As far as I know, this is the first work to show the impact of regularization on the number of activation regions.\n5. The motivation behind the paper is really interesting. I agree with the authors that there is a “need for methods which extend the validity of networks beyond the test data”. This also fits the data-driven principles perfectly, and might allow for more informed data augmentation/pruning strategies in the future if this method is extended to real world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is motivated by the fact that not all activation regions contain training data. The work can be divided into two parts. In the first part, the authors introduce a variation of Region Subdivision algorithm that allows them to use both H- and V-representations of the cells in the polyhedral complex produced by ReLU networks. In this part they also provide a thorough analysis of the algorithm, most notably, in regards to validity, and timing. 
In the second part, they leverage the obtained decomposition for various analyses, such as analyzing the cell volume, star volume, and curvature. They also analyze the impact of width, depth, and regularization parameters on the computation time and number of cells." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Inconsistent story\n\nIn the *Motivation* paragraph of Section 1 the authors mention that this paper is motivated by a need for methods that extend the validity of networks beyond the test data. Despite that, this is not the main focus of the later sections. It appears to me that the motivation does not match the paper. The authors later state that “this paper extends the validity of a neural network beyond a discrete test data point to its neighbors”. I don’t believe that this paper actually meets that claim. After reading the *Introduction* I expected to see experiments showing me how we can perform testing beyond the test set, and how it changes the perceived generalization capabilities of a model. However, there are no such experiments in this work. To me, the paper focuses more on investigating properties of linear regions, rather than extending testing beyond the test set. \n\nI expect the authors to rewrite the Introduction so that it fits the rest of the paper, and doesn't make any false claims. To reiterate, in its current form, the paper only shows that it is theoretically possible to extend the testing beyond the test set, and that stars of a polyhedral complex could be used for that. However, there is no explicit algorithm proposing this extension, nor are there any experiments showcasing the validity of that extension, despite Section 1 hinting that it's the main focus of the paper.\n\n# Poor literature review\n\nThe literature review of the field of linear/activation regions is practically nonexistent. The authors missed several essential works from the field of activation/linear regions. Below I list the most influential ones that I would expect to be referenced by any paper in this field. \n\n[1] Hanin, B., & Rolnick, D. (2019, May). Complexity of linear regions in deep networks. In International Conference on Machine Learning (pp. 2596-2604). PMLR.\n\n[2] Wang, Y. (2022, July). Estimation and Comparison of Linear Regions for ReLU Networks. In IJCAI (pp. 3544-3550).\n\n[3] Liu, Y., Cole, C. M., Peterson, C., & Kirby, M. (2023, September). ReLU neural networks, polyhedral decompositions, and persistent homology. In Topological, Algebraic and Geometric Learning Workshops 2023 (pp. 455-468). PMLR.\n\n[4] Arora, R., Basu, A., Mianjy, P., & Mukherjee, A. (2018). Understanding deep neural networks with rectified linear units. ICLR.\n\n[5] Serra, T., Tjandraatmadja, C., & Ramalingam, S. (2018, July). Bounding and counting linear regions of deep neural networks. In International Conference on Machine Learning (pp. 4558-4566). PMLR.\n\n[6] Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., & Sohl-Dickstein, J. (2017, July). On the expressive power of deep neural networks. In International Conference on Machine Learning (pp. 2847-2854). PMLR.\n\n[7] Novak, R., Bahri, Y., Abolafia, D. A., Pennington, J., & Sohl-Dickstein, J. (2018). Sensitivity and generalization in neural networks: an empirical study. ICLR.\n\n[8] Gamba, M., Chmielewski-Anders, A., Sullivan, J., Azizpour, H., & Bjorkman, M. (2022, May). Are all linear regions created equal? In International Conference on Artificial Intelligence and Statistics (pp. 
6573-6590). PMLR.\n\n[9] Hanin, B., & Rolnick, D. (2019). Deep ReLU networks have surprisingly few activation patterns. Advances in Neural Information Processing Systems, 32.\n\n[10] Zhang, X., & Wu, D. (2020). Empirical studies on the properties of linear regions in deep neural networks. ICLR.\n\n[11] Croce, F., Andriushchenko, M., & Hein, M. (2019, April). Provable robustness of ReLU networks via maximization of linear regions. In the 22nd International Conference on Artificial Intelligence and Statistics (pp. 2057-2066). PMLR.\n\n[12] Xiong, H., Huang, L., Yu, M., Liu, L., Zhu, F., & Shao, L. (2020, November). On the number of linear regions of convolutional neural networks. In International Conference on Machine Learning (pp. 10514-10523). PMLR.\n\n## Minor issues stemming from poor literature review\n\n1. [3] already showed that adjacent activation regions differ in only one bit of their activation sequence, which the authors mention in L232, and so should be correctly cited. \n\n2. Until Section 3 it is unclear if the authors work on activation or linear regions. $C_k = \\text{conv}(v_1, \\ldots, v_p)$ requires convexity, which doesn’t hold for linear regions, as mentioned by [9], so for clarity's sake the authors should clarify which regions they focus on early in their work.\n\n# Novelty\nFrankly, I am unsure whether the paper is novel enough to be accepted to a venue like ICLR. The algorithm in Section 3.1 is an incremental modification of the classical Region Subdivision algorithm. The validity and time checks are a pleasant addition to the paper (compared to relevant literature), but they do not provide significant novelty. Similarly, other contributions that I praise in *Strengths* are new but do not feel novel enough to me to warrant acceptance at ICLR. My opinion on novelty would significantly change if the authors performed experiments in which they \"extend the discrete training data set to a neighborhood given by the union of the cells, and show experimental results\", rather than showing that it is theoretically possible. I believe that this would be a very strong and novel contribution. Especially if the authors managed to generalize this outside of toy datasets (possibly by employing approximations or taking a scenario from embedded systems or virtual sensors mentioned in Section 6). I believe that without this, the work would not be of interest to the wider research community.\n\nConsequently, a possible future direction that the authors can take is proposing and implementing an algorithm that extends testing beyond the test set. Analytically computing the polyhedral complex on large datasets (or even on MNIST) is absolutely infeasible. However, I think that the authors could estimate the neighboring linear regions using linear search (monitoring changes in activations along a random vector from a data point). This would allow them to extend both the training and test sets. Both could be used for measuring robustness, while the latter could be used for achieving more thorough accuracy (it's important to find out whether incorporating these new test points has any effect on the perceived accuracy and robustness, though).\n\n# Potential improvements in clarity (minor)\nHere I propose a few changes that could be implemented to further improve clarity (please consider these as simply suggestions rather than requests for change):\n 1. Not all readers will be acquainted with topology, and visualizing what a star is would allow for easier reading.\n 2. 
There are typos in Lines 134, 150, 235-236, 265, 266.\n 3. In L150 the authors do not mention which appendix to go to.\n 4. Wouldn’t it be easier to visualize the Himmelblau and Griewank functions rather than explaining them?\n 5. Figures 4 and 5 don’t specify the time unit. Figure 4b has the wrong ylabel. The digits in the yellow cells of Figure 6 are unreadable.\n 6. What are $\\rho$ and $\\nu^*$ from L128?\n 7. It would be great to provide a few sketches that would simplify understanding of the algorithm from Sec. 3 for readers that are new to the field, especially for ($\\alpha^3$) which is very confusingly written.\n\n# Summary\nI believe that in its current state the paper should be rejected. However, if the authors address my issues regarding *Inconsistent story* and *Poor literature review* I am happy to increase the rating of the paper. Even then, my rating is unlikely to change beyond borderline reject (5). In my opinion, for this paper to introduce a strong contribution to the research community the authors should expand it towards extending the testing beyond the test set with some experiments showing the applicability of this new technique (ideally beyond toy datasets)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It will be useful if the authors can provide more analysis on the bounds of the number of vertices obtained in their algorithm, thus providing more information on the complexity of their algorithm.\n2. In Equation (1), it should mention that $\\sum \\lambda_i = 1$, otherwise it is not true.\n3. More references on the bounds of the number of cells and vertices should be added to the paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a novel algorithm to compute the polytopal complex of a neural network.\n2. From the obtained polytopal complex the authors can analyze several properties, such as the maxima, minima, number of cells, local span, and curvature of the network.\n3. The authors also analyze the effects of depth, width, and regularization on the complex." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers neural networks with linear layers and ReLU activation function. In this case, the network is a continuous piecewise linear function and the input space can be decomposed into cells, on each of which the network is an affine function. The authors provide an algorithm to compute the polytopal complex formed by such cells of a neural network. By this decomposition, they can compute several statistics, such as the maxima, minima, number of cells, local span, and curvature of the network. 
They also provide several empirical results for some functions such as the Himmelblau function and the Griewank function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The time complexity $O(|vertices|)$ of the algorithm in this paper is very high, since the number of vertices obtained in this algorithm should be an exponential function of the number of neurons or the number of layers in the network, which makes the algorithm not very useful in practice, especially for deep networks. \n\n2. The assumption that the network is a continuous piecewise linear function is also not very useful since the DNNs used in practice have much more complicated structures." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We capture the local properties of an MLP such as continuity, number of cells, and extrema by computing its polytopal cell complex." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024the,\ntitle={The polytopal complex as a framework to analyze multilayer relu networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=34SPQ6fbYM},\nnote={under review}\n}" }, "abstract": { "value": "Neural networks have shown superior performance in many different domains. However, a precise understanding of what even simple architectures actually are doing is not yet achieved, hindering the application of such architectures in safety-critical embedded systems. To improve this understanding, we think of a network as a continuous piecewise linear function. The network decomposes the input space into cells in which the network is an affine function; the resulting cells form a polytopal complex. In this paper we provide an algorithm to derive this complex. Furthermore, we capture the local and global behavior of the network by computing the maxima, minima, number of cells, local span, and curvature of the complex. With the machinery presented in this paper we can extend the validity of a neural network beyond the finite discrete test set to an open neighborhood of this test set, potentially covering large parts of the input domain. To show the effectiveness of the proposed method we run various experiments on the effects of width, depth, regularisation, and initial seed on these measures. We empirically confirm that the solution found by training is strongly influenced by weight initialization. We further find that under regularization, fewer cells capture more of the volume, while the total number of cells stays in the same range. Together, these findings provide novel insights into the network and its training parameters." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "theory of deep learning + mlp + low dimension + polytopal complex" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d13260fdca562b1b6a55b497233da964425fe223.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "The polytopal complex as a framework to analyze multilayer relu networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
34syfledje
Feature Discrimination Analysis for Binary and Ternary Quantization
main
Active
binary quantization;ternary quantization;feature quantization;discriminant analysis;sparse representation
other topics in machine learning (i.e., none of the above)
5;5;8
4;3;4
2;2;4
2;2;3
3;3;4
6
3.666667
2.666667
2.333333
3.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What if it is not a binary classification problem (more than two classes) or for image classification problem with muti-labels?\n2. The paper provides theoretical analysis that the appropriate threshold τ exists, but how to set it not depending on the classification accuracy?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The motivation is interesting and the addressed quantization analysis problem is meaningful. \n2. The proposed feature discrimination analysis is pretty novel.\n3. Sufficient and rigorous theoretical proof to derive the value range of the quantization threshold τ based on µ and σ for binary quantization and ternary quantization, respectively. \n4. Clear method statement, careful logic and sufficient explanations.\n5. Adequate experiments on both synthetic data and real data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes “feature discrimination” to analyze the impact of quantization on classification, which offers a more direct and rational assessment of classification performance rather than relying on quantization error as previous researches asses classification performance roughly." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As for Eq. (5), further explanations are need to state why discrimination between two classes of data can be formulated to Eq. (5) for clarify.\n2. In the Remarks paragraph on P4, the authors said “it is demonstrated that the desired thresholdτdoes exist, when the two classes of data X∼N(µ, σ2 ) and Y∼ N(−µ, σ2 ) are assigned appropriate values for µ and σ”. Are µ and σ in the quantization space set? I mean once the quantization method is used, the distribution in the quantization space is determinate. How can we guarantee appropriate values for µ and σ? In other words, if the quantization space does not meet the condition, is the analysis reasonable or applicative?\n3. In real data experiments, we see the value ranges for the threshold τ, it is better to provide the values for µ and σ in the real data case to further analyze the influence of the distributions for quantization." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The experiment results show classification accuracies with original, binary, and ternary data. Is it possible to show the feature discrimination capability (e.g., the ratio between inter-class and intra-class scatters) represented with some numbers and quantization errors? This is what the study directly deals with. \n\nFrom the experimental results, we may conclude that classification with binary or ternary quantization does not always achieve a better result, even for binary classification. An optimal threshold value is essential, but there is still no solution to find that. \n\nFinally, it would be better to clarify whether the finding is for general classification or binary classification and why." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "As the authors claim, this may be the first study to exploit feature discrimination to analyze the impact of quantization on classification. One important finding is that the quantization thresholds have an impact on feature discrimination and classification performance. The authors conducted theoretical analyses to prove that binary and ternary quantization can enhance feature discrimination between two classes of data. The choice of the quantization threshold becomes a key factor for better classification performance. \n\nThe work is original and interesting, and the paper is well written and presented. The idea was proved through numerical analysis and experiments with simulated and real-time data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the study of binary and ternary quantization for the classification problem through the feature discrimination capability analysis. The main contribution of this paper is to prove that quantization errors do not necessarily lead to decreased classification performance. The proof is done through theoretical analysis and experiments with simulated and real-life data sets. Thus, the estimation of the classification performance can be achieved by examining the feature discrimination of quantized data rather than only the quantization errors." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the paper is easy to follow, some problems still need to be clarified. \n \n- As stated in 2.4, the major goal of this paper is to investigate whether there exist threshold values in binary and ternary quantization to improve feature discrimination. This is confusing, as different threshold values will result in different feature discrimination measures. Then, what is the significance of this finding? Could you please clarify the practical implications of finding threshold values that improve feature discrimination or how the finding will help optimize the classification process?\n\n- The abstract mentions classification generally but does not specify the number of classes. In the study, the experiment is binary classification, even for real-life data. Does that imply any relation between binary and ternary quantization with binary classification or even ternary classification? 
This raises the question: is this finding for general classification or for binary classification only? What additional work would be needed to generalize the findings to multi-class (>3) classification problems?\n\n- The design of the experiments can be improved to support the claims directly. For instance, the experiments whose results are presented in 4.1.2 considered data sparsity, data dimension, different classifiers, and the difference between binary and ternary quantization. To some extent, the results raise more questions. The comparison between binary and ternary quantization does not lead to any solid conclusion, merely stating that one can \"yield superior performance.\" The variables considered in the experiments are not directly related to the core topic. The variables should include the \"feature discrimination measure\" and \"quantization error,\" and the experiments should consider the three scenarios with original data, binary quantization, and ternary quantization. \n\n- In Figure 3, classification with binary or ternary quantization does not always achieve a better result. An optimal threshold value is therefore expected, but how can it be obtained? This is not provided in this study. In practice, how can this value be determined for varied scenarios (such as different data dimensions)? \n\n- The paper criticizes using quantization errors to estimate classification performance. Is it possible to show the quantization errors in the experiments as a baseline? This would help better understand the value of the work. A comparison of quantization errors and classification performance across different threshold values, to directly illustrate the limitations of using quantization errors as a proxy for classification performance, may be beneficial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The paper says that quantization of data with large sparsity will have a negative effect on performance, which contradicts a recent paper [1].\n[1] Chen, M., & Li, W. (2023). Ternary and Binary Quantization for Improved Classification. Authorea Preprints.\n\n2. Is the proposed feature discrimination-based quantization analysis approach applicable to quantization methods beyond binary and ternary?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper analyzes the impact of binary and ternary quantization on classification performance based on feature discrimination, which correlates directly with classification performance and offers an alternative to quantization error as a metric.\n\n2. Theoretical derivations are well supported with numerical experiments across different types of datasets, including synthetic and real datasets."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to analyze the impact of binary and ternary quantization on the performance of classification tasks by focusing on feature discrimination rather than quantization errors. Unlike traditional approaches that primarily uses quantization errors to estimate performance degradation, this work demonstrates that by selecting a proper quantization threshold, binary and ternary quantization can sometimes improve classification accuracy by enhancing feature discrimination. Through theoretical analysis and empirical experiments on synthetic and real datasets, the paper provides valuable insights into how specific quantization thresholds can yield optimal classification performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. No large datasets were used: The datasets used in classification tasks are too small. Experiments on large datasets are needed to verify whether the conclusions and findings of this paper are still valid.\n\n2. The classification tasks are too simple: The authors verified the impact of quantization on feature discrimination only in binary classification tasks which are too simple and the conclusions and findings of this paper may not work for complex classification tasks.\n\n3. Limited classifiers were studied: The work only studied the impact of binary and ternary quantization on feature discrimination of KNN and SVM. How about MLP or decision trees or other classifiers?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024feature,\ntitle={Feature Discrimination Analysis for Binary and Ternary Quantization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=34syfledje},\nnote={under review}\n}" }, "abstract": { "value": "Quantization serves as a fundamental operation in machine learning, widely used for algorithm-hardware deployment and the simplification of data representation. Given that classification stands as a pivotal role in machine learning, it is crucial to investigate the impact of quantization on classification. Generally, the investigation revolves around quantization errors, under the assumption that higher quantization errors typically lead to poorer classification performance. However, this assumption lacks a solid theoretical foundation, and often contradicts empirical findings. For example, some extremely low bit-width quantization methods, such as the $(0,1)$-binary quantization and $(0, \\pm1)$-ternary quantization, sometimes can achieve comparable or even superior classification accuracy than the original non-quantized data, although suffering from high quantization errors. To provide a more reliable estimate of classification performance, rather than focusing on quantization errors, we propose to directly examine the feature discrimination of quantized data. It is proved that the aforementioned binary and ternary quantization can surprisingly enhance, rather than diminish, the feature discrimination of original data. This remarkable performance is validated through classification experiments conducted on various types of data, including the image, speech and text." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "binary quantization", "ternary quantization", "feature quantization", "discriminant analysis", "sparse representation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d523dc52f9840af44f6b7ee42c2b24cc72112ee3.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Feature Discrimination Analysis for Binary and Ternary Quantization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
34xYxTTiM0
Optimizing Calibration by Gaining Aware of Prediction Correctness
main
Active
Post-hoc Model Calibration;Model Calibration Loss
applications to computer vision, audio, language, and other modalities
3;5;5;6
4;4;4;3
3;3;2;3
2;2;3;3
3;2;2;3
4.75
3.75
2.75
2.5
2.5
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Why does the paper initially emphasize using a continuous calibration error instead of the Expected Calibration Error (ECE)?\n\nWhat is the intended synergy between the CA loss and the transformation component, given their distinct purposes of reducing ECE and enhancing cross-domain robustness?\n\nCould the proposed algorithm’s effectiveness be validated further by testing it on alternative baselines?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper presents a range of validation scenarios to assess the effectiveness of the proposed framework. In numerous cases, the framework achieves state-of-the-art performance, validating the impact of its two novel schemes. The experimental setup and comparisons are thoughtfully designed, with detailed descriptions that enhance clarity and reproducibility. Mathematical derivations are presented comprehensively, and the overall narrative is organized in a way that makes the framework easy to follow and understand, emphasizing key components effectively." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces two innovative methods to address calibration errors in deep learning predictions: a correctness-aware loss function and a sample transformation technique. The correctness-aware loss function aims to directly minimize calibration error, effectively improving the calibration of misclassified samples by narrowing discrepancies across all classes. Additionally, to boost cross-domain performance, an augmentation-based transformation is applied to calibration samples, enhancing robustness across varied domains. Both methods are implemented in a post-hoc calibration framework, and the proposed algorithm demonstrates state-of-the-art performance, particularly in cross-domain settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has several strengths, yet I have some specific concerns that warrant attention:\n\n1. Definition of \"Narrow Misclassification\":\n The term \"narrow misclassification\" appears in the abstract, and the correctness-aware (CA) loss is presented as targeting this condition by adjusting predictions across different classes rather than solely reducing confidence in the incorrect class. However, a clear definition of \"narrow misclassification\" is missing, and it’s challenging to discern how it differs from absolutely wrong samples even after reviewing the derivations. Clear definitions and empirical analysis based on outcomes would help clarify this distinction.\n\n2. 
Limitations from Augmentation Types Used:\n The transformation component uses augmentations, but it lacks an analysis of how different types of augmentation affect performance across domains. Depending on the augmentation type, the efficacy in cross-domain scenarios may vary. Experimental validation or analysis is needed to determine the diversity of augmentation types required or which specific augmentations are essential.\n\n3. Similarity with Temperature Scaling:\n If the framework were designed with temperature scaling, where the temperature parameter is shared across all classes, it could similarly distribute confidence across classes rather than reducing only the incorrect class's confidence. This raises questions about the uniqueness of the proposed algorithm’s approach in addressing \"narrow misclassification.\"\n\n4. Derivation for the CA Loss Function:\n The derivation of the CA loss function appears to be unnecessarily complex. Initially, the paper emphasizes the use of continuous calibration error rather than Expected Calibration Error (ECE), suggesting a different approach. However, the final derivation seems equivalent to ECE-based loss, assuming discrete samples and small sample sizes, which undermines the rationale for a continuous assumption. Clarification is needed on why continuous assumptions were initially made if the final derivation closely resembles an ECE-based approach.\n\n5. Bounds of the CA Loss:\n Bounds for the CA loss are derived based on assumptions that the sample sizes and accuracy across classes are similar. However, the significance of these bounds remains unclear, as they appear merely descriptive of the assumed conditions. Additional insights or generalized bounds demonstrating reduced CE loss could improve understanding.\n\n6. Unclear Derivation in Equation 15:\n The derivation in Equation 15 is ambiguous due to an unexplained arrow, which might imply a limit. Clarification on which parameter converges to produce this outcome is necessary to improve the transparency of this mathematical derivation.\n\n7. Parameter \\theta in Equation 19:\n It is unclear if \\theta in Equation 19 exclusively refers to the fully connected layers added for post-hoc calibration. This specification is important for clarity.\n\n8. Synergy between CA Loss and Transformation Component:\n The CA loss reduces ECE, while the transformation improves cross-domain robustness. However, the synergy between these components is unclear, as seen in experimental results: applying CA loss significantly reduces ECE, while the transformation tends to increase ECE, showing a trade-off rather than synergy. Clarification is needed on why these mechanisms must be combined rather than sequentially applied as separate approaches.\n\n9. Baseline (CE Only + PTS) Already Achieving State-of-the-Art Performance:\n In the result tables, the baseline (CE Only + PTS) already achieves state-of-the-art ECE and accuracy in multiple scenarios. While adding CA and transformation components improves performance further, it seems that these improvements are achieved largely because of the baseline's strong performance. To mitigate this concern, I recommend testing the proposed algorithm on alternative baselines.\n\n10. Minor Points:\n - The text in figures is too small, making them hard to read.\n - Typo: Line 136, “samples'.” should be “samples.'” \n\nThese concerns, if addressed, could enhance the clarity and impact of the proposed framework." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What is the difference between CA only and CA trans.? Is CA only the calibration strategy that estimates the calibrator $g$ from the calibration set using the loss of Eq. (7) and no data augmentation? This is not clear.\n\n- The approach focuses on calibration of the maximum confidence: can the strategy be adapted to calibrate the whole confidence vector (multiclass calibration)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The idea of using test-time augmentation to predict a sample based temperature scaling factor and learning a network for predicting such temperature is novel, as far as I know.\n\n- The justification of the loss on a toy example pointing out its behavior on so-called narrowly wrong samples is intuitive.\n\n- Rather extensive experiments on several types of image datasets show the benefit of the approach over standard calibration methods and other optimization losses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper describes a method for post-hoc calibration of a classifier based on estimating for each sample a scaling temperature on the output logits(sample-adaptive calibration strategy). Test time data augmentation is used to predict the scaling temperature and relies on a complementary network taking as input the softmax of selected transformed images and minimizes what is called a correctness-aware loss. The loss is justified by a better management of narrowly wrong predictions. The strategy is evaluated on several small to mid-size datasets and 10 networks per dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The goal of the formal development (Section 3.2) is not clear: what is it supposed to show? Is it to prove that the empirical criterion (7) is a good proxy for optimizing (3), given that $\\hat{c}$ is produced by the calibration pipeline of Figure 2? If so, I am not convinced that the formal developments of Section 3.2 actually prove this.\n\n- The writing lacks precision (see my first question, same symbol $E_f^{emp}$ but different concepts for instance). \n\n- The data augmentation is justified by the fact \"that consistency in model predictions for transformed images correlates strongly with accuracy\" (l. 261): if I can agree with this law, I don't clearly see where it applies in your framework. Or is it that by introducing some local perturbation of the data through transformations and measuring the variation in confidence scores, one can infer accuracy? 
Then why not directly predict the confidence instead of a temperature?\n\n- In general, I have difficulty understanding the conceptual connections between the test time data augmentation, the formal development, and the narrowly wrong sample analysis. The global logic of the writing is hard to follow." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. See above weakness\n\n2. What is the core difference between calibration and misclassification (e.g. [R1]), both of them seem to be focusing on the incorrect predictions.\n\n3. Fig. 6 illustrates the impact of ablating the top-k selection on the CA loss. The figure suggests that increasing k beyond 4 leads to a significant decline in performance. This trend raises questions about the potential effects of even higher values of k, such as 100 or 200, particularly in datasets like ImageNet. Additionally, since the authors have chosen k=4 as the default setting, it is important to consider how the model manages scenarios where the correct prediction is not included among the top-4 predictions.\n\n4. The method involves training a calibrator with a new loss function and using transformed images, which could be more complex to implement compared to simpler calibration techniques.\n\n [R1] Zhu, Fei, et al. \"Openmix: Exploring outlier samples for misclassification detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Novel Calibration Objective:** The paper introduces a new loss function, CA loss, which is a significant contribution to the field of model calibration. This loss function is intuitively designed to align with the goal of calibration, which is to ensure high confidence for correct predictions and low confidence for incorrect ones.\n\n**Empirical Evidence:** The authors provide extensive experimental results demonstrating the effectiveness of their proposed method across various datasets, including IND and OOD test sets. The consistent performance improvement over uncalibrated models and other calibration techniques is a strong point.\n\n**Theoretical Insights:** The paper not only proposes a new method but also provides theoretical insights into why existing methods like CE and MSE losses are limited, particularly for certain types of samples in the calibration set." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the issue of model calibration in machine learning, specifically aiming to align a model's confidence with its prediction correctness. 
The authors identify limitations of the commonly used Cross-Entropy loss for calibrator training and propose a new post-hoc calibration objective, the Correctness-Aware loss. This objective function is designed to decrease model confidence on wrongly predicted samples and increase it on correctly predicted ones. The method utilizes transformed versions of samples to train the calibrator and is tested on both IND and OOD datasets. The paper claims that the method achieves competitive calibration performance compared to state-of-the-art techniques and provides a better separation of correct and incorrect test samples based on calibrated confidence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Dependency on Transformations:** The effectiveness of the CA loss relies on the use of transformed images to infer correctness. If these transformations do not adequately capture the characteristics of correct and incorrect predictions, the calibration might be less effective.\n\n**Transformations lack theoretical grounding:** While the use of transformations such as rotation, grayscale, color jittering, and others has proven effective in practice, the choice of transformations and their number in Fig. 4 is currently guided more by empirical results than by a theoretical framework that explains why these five transformations should correlate with prediction correctness, given that so many transformations exist. The paper also does not provide a theoretical basis for which transformations are the most informative for calibration, or for how to select the optimal set of transformations. The current approach might be seen as somewhat arbitrary, and its effectiveness could depend on the specific characteristics of the dataset and the model architecture. There is also a risk that the calibrator might overfit to the specific transformations used during training and fail to generalize to real-world variations in the data that were not captured by the training transformations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**1**) What are the key differences with the MDCA loss [C] and the DCA loss [G]? I would like to see concrete differences between them.\n\n**2**) Can the MDCA loss and/or the DCA loss be used in place of the correctness-aware loss to obtain the optimal temperature value? Beyond the CE and MSE losses, I believe a comparison between the effectiveness of the proposed CA loss and these losses would be interesting.\n\n**3**) Is the post-hoc calibrator capable of calibrating non-ground-truth classes as well?\n\n**4**) What is the performance of the method under the SCE metric [H] compared to other post-hoc calibration methods? \n\n**5**) The intuition behind learning a mapping, through the g network, from the top-K softmax scores (corresponding to transformed versions) to a temperature value is not very clear.
\n\n**6**) L499: The paper mentions that existing methods do not improve AUC compared to the proposed one. This requires more explanation.\n\n**7**) How good is the method at overcoming underconfidence of the model?\n\n**8**) Can this post-hoc calibrator be used after a train-time calibration method? It would be interesting to observe the complementary strengths of the proposed post-hoc calibration method." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**1**) Calibrating deep neural networks is an important step towards making AI models reliable and trustworthy, especially in safety-critical applications.\n\n**2**) The proposed post-hoc calibrator is simple, as it learns to identify a per-sample temperature value that can be used to scale the logits.\n\n**3**) The paper also provides some theoretical insights into the proposed correctness-aware loss term by comparing and contrasting it with the CE and MSE losses.\n\n**4**) Results show that the proposed idea is competitive against other post-hoc calibration methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper undertakes the problem of calibrating deep neural networks for the task of classification. At the core of the method is a post-hoc calibrator that uses a correctness-aware loss to search for the optimal temperature, which is then used to scale the logits for a given sample. To determine the correctness of a sample, the method uses the well-known concept of consistency across different augmentations. A simple network is used to map top-K softmax predictions across augmentations to the temperature value. The correctness-aware loss optimizes this network to obtain the best temperature. The paper also presents mathematical insights on the proposed loss. Experiments have been conducted on different datasets to validate the effectiveness of the post-hoc calibrator. The results show competitive performance against other post-hoc calibration methods, such as naive temperature scaling, ensemble temperature scaling, adaptive temperature scaling, and isotonic regression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1**) The related work section completely misses an emerging direction of train-time calibration methods such as [A], [B], [C], [D], [E] and [F]. \n\n**2**) The paper lacks reliability diagrams to better understand the potential of the proposed post-hoc calibrator in overcoming overconfidence and underconfidence over the full spectrum of model confidence.\n\n**3**) Why is the proposed post-hoc calibrator able to improve OOD calibration performance? There is no analysis that supports these results.\n\n**4**) How would the proposed post-hoc calibrator perform under class-imbalanced scenarios?\n\n**5**) The proposed correctness-aware loss appears similar to the MDCA loss [C]. What are the key differences?\n\n\n[A] Liu, B., Ben Ayed, I., Galdran, A. and Dolz, J., 2022. The devil is in the margin: Margin-based label smoothing for network calibration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 80-88).\n\n[B] Patra, R., Hebbalaguppe, R., Dash, T., Shroff, G. and Vig, L., 2023. Calibrating deep neural networks using explicit regularisation and dynamic data pruning.
In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1541-1549).\n\n[C] Hebbalaguppe, R., Prakash, J., Madan, N. and Arora, C., 2022. A stitch in time saves nine: A train-time regularizing loss for improved neural network calibration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16081-16090).\n\n[D] Wei, H., Xie, R., Cheng, H., Feng, L., An, B. and Li, Y., 2022, June. Mitigating neural network overconfidence with logit normalization. In International Conference on Machine Learning (pp. 23631-23644). PMLR.\n\n[E] Liu, B., Rony, J., Galdran, A., Dolz, J. and Ben Ayed, I., 2023. Class adaptive network calibration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16070-16079).\n\n[F] Park, H., Noh, J., Oh, Y., Baek, D. and Ham, B., 2023. ACLS: Adaptive and conditional label smoothing for network calibration. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3936-3945).\n\n[G] Liang, G., Zhang, Y., Wang, X. and Jacobs, N., 2020. Improved trainable calibration method for neural networks on medical imaging classification. In BMVC.\n\n[H] Nixon, J., Dusenberry, M.W., Zhang, L., Jerfel, G. and Tran, D., 2019. Measuring calibration in deep learning. In CVPR Workshops, volume 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024optimizing,\ntitle={Optimizing Calibration by Gaining Aware of Prediction Correctness},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=34xYxTTiM0},\nnote={under review}\n}" }, "abstract": { "value": "Model calibration aims to align confidence with prediction correctness. The Cross-Entropy (CE) loss is widely used for calibrator training, which pushes the model to increase confidence on the ground truth class. However, we find the CE loss has intrinsic limitations. For example, for a narrow misclassification, a calibrator trained by the CE loss often produces high confidence on the wrongly predicted class (e.g., a test sample is wrongly classified and its softmax score on the ground truth class is around 0.4), which is undesirable. In this paper, we propose a new post-hoc calibration objective derived from the aim of calibration. Intuitively, the proposed objective function asks that the calibrator decrease model confidence on wrongly predicted samples and increase confidence on correctly predicted samples. Because a sample itself has insufficient ability to indicate correctness, we use its transformed versions (e.g., rotated, greyscaled, and color-jittered) during calibrator training. Trained on an in-distribution validation set and tested with isolated, individual test samples, our method achieves competitive calibration performance on both in-distribution and out-of-distribution test sets compared with the state of the art. Further, our analysis points out the difference between our method and commonly used objectives such as the CE loss and the Mean Square Error (MSE) loss, where the latter sometimes deviate from the calibration aim." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Post-hoc Model Calibration", "Model Calibration Loss" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/897ee3e3d6b22a7aceaff06047430c8a39733070.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Optimizing Calibration by Gaining Aware of Prediction Correctness" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
369jumtah8
From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information
main
Active
Multimodal Large Language Models;Object Detection
foundation or frontier models, including LLMs
3;5;5;6
5;5;3;4
2;3;2;3
1;3;2;3
2;3;2;3
4.75
4.25
2.5
2.25
2.5
-0.4842
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you elaborate on the computational requirements for each training strategy, particularly the memory and time costs associated with fine-tuning compared to the training-free approach?\n2. Is there a risk of the model overfitting to the textual detection information during fine-tuning? Has the paper examined the impact of fine-tuning on tasks unrelated to detection, to confirm that broader language comprehension capabilities are maintained?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper addresses a crucial aspect of MLLMs' limitations—difficulty in interpreting detailed visual elements. By exploring methods to effectively integrate detection information, it has significant implications for real-world applications where precision in visual recognition is essential, such as autonomous driving, medical imaging, and other fields that rely on high-detail visual data.\n2. The authors conduct a wide-ranging analysis across ten well-regarded benchmarks, providing robust evidence for the effectiveness of each training strategy. \n3. A key strength is the demonstration of fine-tuning’s adaptability when incorporating different detection models. The authors showcase that fine-tuned models retain performance gains even when switching from closed-set to open-set detectors, underscoring fine-tuning as a resilient strategy for enhancing MLLMs.\n4. The findings from comparing training-free, retraining, and fine-tuning strategies offer valuable empirical insights. By quantitatively showing the superiority of fine-tuning, the paper guides future work on the practical application of training strategies for MLLMs that require fine-grained detail recognition." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the impact of various training strategies on the multimodal large language models' (MLLMs) ability to utilize infused detection information. The authors propose three training strategies—training-free infusion, retraining, and fine-tuning—for incorporating detection data in textual format. Through extensive experimentation across benchmarks, they conclude that fine-tuning yields the best results, enhancing MLLM performance in tasks requiring fine-grained image recognition by up to 6.71% over training-free methods. The study also explores model adaptability to different detection models, suggesting fine-tuning as a robust approach for integrating detection information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Fine-tuning MLLMs with detection information likely introduces computational overhead, which is not sufficiently addressed. 
An analysis of training costs and memory requirements across the three strategies would provide valuable insights into the feasibility of each approach for large-scale applications.\n2. While the paper includes multiple benchmarks focused on fine-grained visual tasks, the evaluation could benefit from additional benchmarks that test broader language-vision capabilities. Tasks like DocumentVQA.\n3. The paper does not examine how variations in detection model accuracy (e.g., OCR quality) impact the MLLM’s performance. Given that the approach depends on external detection outputs, this vulnerability could lead to inconsistent performance if detection quality fluctuates across different scenarios or datasets." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The work does not have ethical concerns. Since the framework inherits the limitations of LLMs and MLLMs, the framework may share the concerns of those large foundation models. However, such concerns are not specific to this work." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How is the textual detection instruction data infused during training? (See weakness)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and easy to follow.\n- Through extensive empirical validation, the study rigorously evaluates the performance of various training strategies across different experimental settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the impact of training strategies on Multimodal Large Language Models (MLLMs) when integrating textual detection information from vision models. While current methods often utilize a training-free approach, the researchers systematically explore the effects of adaptive training, retraining, and fine-tuning strategies. Their findings indicate that fine-tuning significantly enhances the MLLMs' performance—improving results compared to the training-free method across various benchmarks. Additionally, fine-tuning enables MLLMs to retain performance benefits even after replacing detection models, suggesting better comprehension of the specialized textual information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited Contribution\n - The primary findings of this study demonstrate limited novelty in their conclusions. The superiority of fine-tuning over training-free methods has been well-established in the literature, making this result somewhat predictable. 
Furthermore, the inclusion of a comparison with retraining from scratch adds limited value, as retraining is rarely considered a preferable option in practice.\n- Ambiguities in Dataset Construction\n - The proportional distribution of the various data types, including textual detection information, needs to be specified more adequately. Moreover, the paper's use of the term \"infusion\" lacks a precise definition, leaving uncertainty about whether it refers to data addition or conversion processes. The paper's ambiguous description of data processing methods is problematic, especially since data conversion, if implemented, would reduce conventional question-answer pairs and potentially affect benchmark performance, particularly in the retraining strategy." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper conducted a comprehensive set of experiments with thorough analysis for integrating textual detection information into MLLMs.\n- The empirical observations are straightforward and the paper is written in an easy-to-understand manner." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Inspired by the absence of adaptive training methods to integrate textual detection information into MLLMs, the paper empirically explored the effect of fine-tuning MLLMs equipped with textual detection information. The key insights were that 1) the fine-tuning strategy yields better performance than the training-free and retraining strategies, 2) retraining rather impairs the original image comprehension ability of MLLMs, and 3) swapping the deployed detection model with an open-set object detector further improves MLLM performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The key insights and the empirical observations this paper investigated may seem to reiterate existing observations from related papers. Specifically, [MiniGPT4-v2](https://arxiv.org/abs/2310.09478) and [VisionLLM](https://arxiv.org/abs/2305.11175) are pioneering works that demonstrated the positive impact of integrating object detection information into MLLMs in terms of object detection and several MLLM tasks (e.g., VQA).\n- Additionally, the paper overlooks the effectiveness of training-free methods, which avoid the need for a huge amount of labor-intensive annotations required for equipping such large-scale MLLMs with object detection ability.\n- The novelty of the proposed methods is significantly limited, amounting to a simple adoption of training modules onto training-free infusion models.\n- The technical soundness of the proposed methods seems deficient.
Why does the retraining strategy, which trains the MLLM from scratch, not train the visual encoder? Also, there is no justification for why a different backbone (Vicuna) is used for retraining, compared to the other fine-tuning and training-free strategies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Reasonable motivation. Additional vision experts can further enhance the visual capacities of MLLMs. The paper finds that adaptive training has great potential for helping LLMs better comprehend the special detection input.\n\n2. The conducted experiments and visualizations are extensive and well-organized.\n\n3. The paper is well-written and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "MLLMs struggle with accurately interpreting fine-grained visual details. While vision detection models excel at this, most studies simply insert the detection information as text into the MLLM without further training (training-free). This paper investigates whether adaptive training can improve the MLLM's understanding of this added textual detection information, leading to better performance than training-free methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks analysis regarding the impact of detector performance. Would a detector with significantly higher mAP lead to greater MLLM improvement?\n\n2. Detectors trained on larger datasets with more categories, such as LVIS (1.2k categories) compared to COCO (80 categories), potentially achieve finer-grained visual understanding. Would using an LVIS-trained detector, like Co-DETR-LVIS [1], improve FTBI performance?\n\n3. The proposed method with an open-set detector is similar to VisualCOT [2]. Both first locate the box region that is relevant to the user question and leverage the region information to help the MLLM better answer the question.\n\n4. Can FTBI further improve performance upon stronger open-source baselines like LLaVA-NeXT [3] and LLaVA-OneVision [4]?\n\n5. There are two paradigms for incorporating detection experts into MLLMs in the community. One converts detector outputs directly into text descriptions for the MLLM (as in this paper, MoAI [5], and IVE [6]), while the other fuses detector vision backbones with CLIP features (MoVA [7] and Eagle [8]). What advantages does the former approach offer?\n\n[1] Detrs with collaborative hybrid assignments training. ICCV 2023.\n\n[2] Visual cot: Unleashing chain-of-thought reasoning in multi-modal language models.
NeurIPS 2024.\n\n[3] Llava-next: Improved reasoning, ocr, and world knowledge.\n\n[4] Llava-onevision: Easy visual task transfer.\n\n[5] Moai: Mixture of all intelligence for large language and vision models. ECCV 2024.\n\n[6] Incorporating visual experts to resolve the information loss in multimodal large language models.\n\n[7] Mova: Adapting mixture of vision experts to multimodal context. NeurIPS 2024.\n\n[8] Eagle: Exploring the design space for multimodal llms with mixture of encoders." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper shows that fine-tuning MLLMs with textual detection information boosts performance over training-free methods, retaining potential even with model replacements, highlighting the benefits of adaptive training for multimodal understanding." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024from,\ntitle={From Training-Free to Adaptive: Empirical Insights into {MLLM}s' Understanding of Detection Information},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=369jumtah8},\nnote={under review}\n}" }, "abstract": { "value": "Despite the impressive capabilities of Multimodal Large Language Models (MLLMs) in integrating text and image modalities, challenges remain in accurately interpreting detailed visual elements. Fortunately, vision detection models have shown superior performance in recognizing fine-grained image details, leading to their increased deployment by researchers to enhance the ability of MLLMs. Among the feasible strategies, infusing detection information in text format is easy to use and effective. However, most studies apply this method in a training-free manner. There is limited research on the effects of adaptive training, which has great potential for helping LLMs better comprehend the special input and discard irrelevant information. In this paper, we address the key research question: How does training influence MLLMs' understanding of infused textual detection information? We systematically conduct experiments with numerous representative models to explore the performance implications of training-free, retraining, and fine-tuning strategies when infusing textual detection information into MLLMs. Additionally, we investigate the impact of training on the original abilities of MLLMs, as well as the interchangeability of detection models. We find that fine-tuning the pre-trained MLLM to adapt to textual detection information yields better results compared to the training-free strategy and the retraining strategy, with the fine-tuned MLLM outperforming the training-free MLLM by 6.71\\% across 10 widely recognized benchmarks. Besides, we find that fine-tuning allows the MLLM to maintain performance improvements even after replacing the deployed detection models, which means that it enables the MLLM to better understand the specially formatted textual information. We release our code to facilitate further exploration into the fusion strategies of vision detection models and into improving the fine-grained multimodal capabilities of MLLMs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal Large Language Models", "Object Detection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0f109097b79ef66cb3aca4d05ec331ba71a6e817.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "From Training-Free to Adaptive: Empirical Insights into MLLMs' Understanding of Detection Information" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
36DlQGFb7W
Data-Driven Uncertainty-Aware Forecasting of Sea Ice Conditions in the Gulf of Ob Based on Satellite Radar Imagery
main
Active
Arctic Sea Ice Forecasting;Satellite Radar Imagery;Ensemble Forecasting;Uncertainty Quantification;Machine Learning for Video Prediction
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;3;3
4;3;4;4
3;2;2;2
1;2;2;2
2;1;2;2
3
3.75
2.25
1.75
1.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "c.f. weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The beginning of the paper is well written, with a good problem statement and motivation. The importance of the work is well explained, and appears timely." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes different methods to forecast the sea ice extent in the gulf of Ob based a mix of Sentinel-1 data (radar), re-analysis data and interpolated weather stations.\nThe paper compares a slue of different methods, and aims at quantifying the uncertainty of the forecast with these methods. The different methods each produce a forecast, which is then used as an ensemble to quantify the uncertainty." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper compares 8 different methods to forecast the sea ice, but fails to introduce them. The author spend more time on the data preprocessing and filtration of S1 data, than on explaining what the actual models do. The only mention of the models are on line 79 to 94, but are very brief.\n\nOverall, the paper lacks a significant analysis of the results. The results are shown briefly in table 3 and figure 3, but lack a deeper analysis. In the main text, there is no example of time series, nor map to show the uncertainty per pixel, nor interpolation output, or visuals to show the results, and help the reader in understanding the process.\n\nThe paper would profit massively from a schematic representation of the tasks.\n\nI feel like the paper has potential, but the different sections have been given inappropriate weight. The paper would need a major restructuration, and overall would probably fit better in a longer format, such as a journal, where the details can be explained better, and the analysis performed at a deeper level. There are just too many moving parts to fit in this short format.\n\n## Target\nIt is unclear to me how the target is produced. The authors mention \"a target presentation of the forecasts\" (line 213), but don't explain how they use Sentinel-1 to produce the target.\n\n## Minor comments\n\nTable 1: if the scale is supposed to be the scale of the product, then S1 has a scale of 10 meters, not 1km. 
The rescaled input is 1km, but so are GLORYS and the meteo stations.\nI would add the temporal resolution to this table to add a bit more information.\n\nLine 206: \"Sentinel-1 SAR images and GLORYS fields are interpolated bilinearly to match the input resolution (1 km)\": using bilinear interpolation to resample Sentinel-1 from 10 meters to 1km is quite unconventional; usually downsampling is done with nearest neighbor or averaging.\n\nLine 235: \"up to 50 meters\": as far as I know, S1 resolution is 10 meters\n\nLine 257-262: this comment seems out of place.\n\nFigure 3 is hard to read, is missing units, and is pixelated.\n\nLine 304: \"nor noise\": how do you make sure an image has no noise?\n\n## Grammar comments\n\nLine 079: \"Our research employs advanced video prediction models, which include:\" please rephrase, doesn't work with the bullet points\n\nLine 240: \"which lacks quality of ice data in the Gulf being mostly uncorrelated with other sources\": unclear, rephrase" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is the SAR video prediction an end in itself in this work?\n2. Can the performance of the data preprocessing step be demonstrated quantitatively, and also qualitatively with some example images?\n3. Is there a way to incorporate ice dynamics in this video prediction approach?\n4. How sensitive would any approach be to location?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper provides an approach to sea ice forecasting, which is an important problem, and explores the performance of several video prediction algorithms on this task. The authors also consider the problem of image artifacts and propose a projection-based approach to eliminate them." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a sea ice forecasting approach that uses video prediction methods applied to synthetic aperture radar (SAR) satellite imagery captured by Sentinel 1. The work examines the performance of a number of architectures for the video prediction task and uses an ensemble of four architectures to achieve uncertainty quantification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is not clear what sea-ice parameters are considered in this work and how these parameters would be obtained from the SAR video streams. The authors should clearly state the parameters considered and describe how they are derived from SAR imagery.\n\n2. The description of the architectures in Table 2 is not clear. What are the inputs and outputs in each configuration?
Also, since the best-performing system appears to be the rUNET system, which is a SISO configuration, are the multiple inputs necessary, or are they sources of potential error?\n\n3. The IIEE metric is not explained in the paper. I believe it is the “integrated ice-edge error”, which may be unfamiliar to other readers and should be introduced.\n\n4. The data preprocessing step, which involves learning a projection, should be evaluated to validate the removal of artifacts. What is the computational complexity of this approach?\n\n\n\nMinor Comments:\n1. Typo - Line 145 “uncetrainty quantification”\n2. A map of the area would be useful in the main paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1) How do you split the different samples in the training period? Do you include forecasts starting at every day? Or only every 10 days to avoid data leakage (one sample's target being in another sample's input)? --> Following from this, what is your exact sample size for train, val, test: before and after augmentation?\n2) How do you do the backward forecast (l. 462)? Are you considering that atmospheric dynamics are not time-reversible due to the second law of thermodynamics?\n3) Are you using the same augmentation strategy for all models?\n4) Which Sentinel 1 product are you using? How has it been processed? Is it radiometrically terrain corrected?\n5) How are you computing the IIEE?\n6) Could you explain the filtration in other words again? L.292ff - I did not understand it from reading the manuscript.\n7) Why the loss MSE - 0.2 SSIM?\n8) How do you feed missing inputs to your models?\n9) Do you have any idea why the missing values (Fig 2a) were a lot lower during 2016 & 2017? To me it makes little sense, and I would rather expect a drop in 2021, when the Sentinel 1B satellite stopped functioning.\n10) Have you compared to a climatology? For satellite imagery this seems a very important baseline, see again e.g. Benson et al 2024 https://openaccess.thecvf.com/content/CVPR2024/html/Benson_Multi-modal_Learning_for_Geospatial_Vegetation_Forecasting_CVPR_2024_paper.html\n11) I do not understand how the confidence-based mixture with DMVFN (l. 433f) plays a role in the predictions of the models presented in Table 3; can you elaborate?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The task of satellite-based sea ice forecasting conditioned on meteorology is interesting and sufficiently novel\n2) The paper introduces an augmentation strategy which improves performance significantly\n3) The work compares many different neural network architectures and includes two simple baselines\n4) A domain-specific evaluation metric is used, the Integrated Ice Edge Error."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Short-term forecasts of satellite images at 1km resolution conditioned on past satellite imagery and past meteorological conditions with deep neural network architectures commonly used in video prediction. The networks beat a simple baseline (persistence), but most video prediction methods do not improve over a UNet. Moreover, the presented models struggle due to the inherent sparsity of the satellite time series, yet a new augmentation method (joint geometric transformations of meteo fields and satellite images) is introduced which improves sample efficiency in this data sparse setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The results are not convincing. This work trains very large deep neural networks (some over 30Mio parameters) on a very small dataset (training has only ~2200 days). The trained models beat a simple persistence baseline by some margin, but it is unclear what this means, as there is no comparison to any baseline sea ice forecast and there is almost no qualitative evidence presented in this paper. The only qualitative results are shown in Fig. 6, but those are not convincing, there, all models fail to provide a forecast that is somewhat close to the reality at day 3. My impression after reading is that the gaps in the time series and the low availability of past data make the task extremely challenging, such that the models mainly learn a blurred regression to the mean, which in MSE beats persistence.\n2) The writing lacks clarity. Many important technical details are not explained well (or not at all), instead the paper is full of fill-words and meaningless phrases that sound like output from a LLM. I'll provide more specific feedback below.\n3) It is hard to assess what the contribution of this work is. I see the main novelty in the augmentation strategy, but that is a bit too little for an ICLR paper.\n4) The paper emphasizes that fancy video prediction architectures do not outperform an out-of-the-box UNet for satellite image time series forecasting, but instead domain-specific preprocessing is more important. However, this finding is not new, see e.g. Benson et al 2024 https://openaccess.thecvf.com/content/CVPR2024/html/Benson_Multi-modal_Learning_for_Geospatial_Vegetation_Forecasting_CVPR_2024_paper.html - which focusses on vegetation greenness forecasting, but else is very similar in design.\n5) Missed opportunity: the work only uses Sentinel 1 at 1km resolution, however the big benefit of the satellite is its high spatial resolution (up to ~15m). At coarser resolution, i doubt Sentinel 1 is the best product, especially due to its temporal sparsity (only ~5-daily). Moreover, the work only uses past meteorological data. Yet, future sea ice motion is likely depending a lot on future weather, hence it would make a lot of sense to include future weather. Ideally, to emulate operational conditions, this would be from stored weather forecasts, but for showing the predictive skill of the map weather -> sea ice, it would also suffice to just use future reanalyis, mentioning a potential further performance degradation at inference time due to the usage of actual forecasts.\n6) The evaluation of ensembles is a bit weak. If you provide ensemble forecasts for uncertainty quantification, as a user, i'd most importantly like to see how well they are calibrated, i.e. the skill score. 
Further probabilistic metrics like CRPS should also be looked at, not just the MSE of the ensemble mean.\n7. Many formulations in the paper are debatable: l. 013ff I'd argue the causality is wrong in this sentence. Short-term sea ice forecasts are important because they are useful for boats navigating through the Arctic sea, not because of global warming and subsequent sea ice loss. ; l. 100ff by comparing the accuracy of model predictions we do not ensure that these predictions contain more than just general trends (what are those anyway?), and we also do not ensure that they contain spatial structures. ; l. 148ff The main reason for the data gaps is that Sentinel 1 is on an orbit that only has a revisit time of 12 days. For some time (until 2021), there were two satellites, which, together with off-nadir imagery, allowed for an effective revisit time of 2-3 days; now it is 5-6 days. All other factors are minor compared to this one. ; l. 214 I am unaware of any weather capability of Sentinel 1 (what is that anyway?) - however, it may be worth mentioning that contrary to passive optical imagery like Sentinel 2, the active sensor of Sentinel 1 can measure surface conditions even if there is cloud cover. ; L. 235 Sentinel 1 has up to ~15m resolution. L. 236 It is only partially true that there are large amounts of historical data: while the size in terms of storage is surely in the petabytes, we have only a very limited (!) historical record of Sentinel 1, just 9 years since 2015. \n8. Limited related works section. A single Google search already turned up a very related paper, Palerme et al 2024 https://tc.copernicus.org/articles/18/2161/2024/, doing short-term sea ice forecasting with deep learning. The related works section needs to include such works, and ideally you compare the performance of your models to those published in these related works. Furthermore, there is a large stream of literature on satellite image time series forecasting, which seems extremely relevant but which the related works section also misses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Novelty of Contributions: Can the authors clarify what novel methodological contributions are presented beyond applying existing models to a new dataset? Are there any new algorithms, architectures, or theoretical insights introduced?\n\n* Model Adaptations: Did the authors make any significant adaptations or improvements to the video prediction models to better suit sea ice forecasting, or were the models used off-the-shelf?\n\n* Evaluation of Practical Significance: How do the modest improvements over baselines translate to practical benefits in operational forecasting? Are these improvements significant enough to impact real-world applications?\n\n* Generalizability: Can the authors discuss the potential generalizability of their approach to other regions or types of geophysical forecasting?
What are the limitations?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Application of Deep Learning to Sea Ice Forecasting: The paper addresses a relevant and practical problem by applying advanced video prediction models to sea ice forecasting in the Gulf of Ob. This cross-disciplinary application showcases the potential of deep learning in geophysical tasks.\n\nData Preprocessing Techniques: The authors develop domain-specific data preprocessing and augmentation methods to handle the challenges of Arctic satellite imagery, such as data irregularity and missing values. This is crucial for improving model performance on imperfect real-world data.\n\nUncertainty Quantification: Introducing an ensemble-based approach for uncertainty estimation and a confidence-based model selection scheme adds value by enhancing forecast robustness and providing a mechanism to assess prediction reliability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a data-driven approach for forecasting sea ice conditions in the Gulf of Ob by leveraging advanced video prediction models originally developed for computer vision tasks. The authors utilize sequences of radar images from Sentinel-1, weather observations, and GLORYS forecasts to predict future sea ice conditions. They address challenges related to data irregularity and missing values through domain-specific preprocessing and augmentation techniques. The paper also introduces an ensemble-based approach for uncertainty quantification and proposes a confidence-based model selection scheme to enhance forecast accuracy and robustness.\n\nWhile the paper tackles a relevant and practical problem, it primarily applies existing deep learning models to a new domain without significant methodological innovations. The contributions are more engineering-focused, adapting existing models for sea ice forecasting without introducing new algorithms or theoretical advancements. The improvements over baseline models are modest, and there is limited discussion on the practical significance of these improvements or how they translate to real-world applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Lack of Novel Methodological Contributions: The paper primarily applies existing video prediction models to a new dataset without significant modifications or novel methodological developments. This limits its contribution to the advancement of machine learning techniques.\n\n* Engineering Focus Over Research Innovation: The work focuses more on engineering implementation and practical adaptation rather than introducing new theoretical insights or advancements in machine learning.\n\n* Modest Improvements Over Baselines: The improvements over baseline models are modest. The paper lacks a deep analysis of the practical significance of these improvements, especially in operational contexts.\n\n* Insufficient Theoretical Analysis: There is a lack of in-depth theoretical analysis or exploration of why certain models perform better in this context, which could provide valuable insights to the research community." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a new method for regional sea ice forecasting using radar images, weather data, and models from the video prediction domain, incorporating uncertainty quantification to improve forecast reliability and ensure safe marine operations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024datadriven,\ntitle={Data-Driven Uncertainty-Aware Forecasting of Sea Ice Conditions in the Gulf of Ob Based on Satellite Radar Imagery},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=36DlQGFb7W},\nnote={under review}\n}" }, "abstract": { "value": "The increase in Arctic marine activity due to rapid warming and significant sea ice loss necessitates highly reliable, short-term sea ice forecasts to ensure maritime safety and operational efficiency. In this work, we present a novel data-driven approach for sea ice condition forecasting in the Gulf of Ob, leveraging sequences of radar images from Sentinel-1, weather observations, and GLORYS forecasts. Our approach integrates advanced video prediction models, originally developed for vision tasks, with domain-specific data preprocessing and augmentation techniques tailored to the unique challenges of Arctic sea ice dynamics. Central to our methodology is the use of uncertainty quantification to assess the reliability of predictions, ensuring robust decision-making in safety-critical applications. Furthermore, we propose a confidence-based model mixture mechanism that enhances forecast accuracy and model robustness, crucial for safe operations in volatile Arctic environments. Our results demonstrate substantial improvements over baseline approaches, underscoring the importance of uncertainty quantification and specialized data handling for effective and reliable sea ice forecasting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Arctic Sea Ice Forecasting", "Satellite Radar Imagery", "Ensemble Forecasting", "Uncertainty Quantification", "Machine Learning for Video Prediction" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e4a2c2fc4477eb67c70e0159f34f32c15f339746.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/148292aae22ebe2e28611a0ca2ae1d16a9206355.zip" }, "title": { "value": "Data-Driven Uncertainty-Aware Forecasting of Sea Ice Conditions in the Gulf of Ob Based on Satellite Radar Imagery" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
37EXtKCOkn
Learning Spatiotemporal Dynamical Systems from Point Process Observations
main
Active
dynamics;spatiotemporal;neural;PDE;ODE
generative models
6;8;8;8
3;2;4;4
3;3;4;2
3;3;3;3
2;3;3;4
7.5
3.25
3
3
3
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* Can you give me a possible explanation of why it works?\n* There is no ablation study on why transformer are used for generating the initial states. Or do you have evidence the initial state part is robust to architecture choice?\n* Despite the proposed speedup method, I believe neural-ODE is still untolerably slow and does not scale well. Do you have actual training/ inference time comparison?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The challenge seems well grounded as the sparse data over a large spatial domain is common for many types of problems, e.g., few agent trajectories over a large geographical domain.\n- The method looks (possibly) scalable with low-resolution linear interpolation.\n- The math formulation is clear and the empirical results are fair. \n- A lot of ablation study, accounting for context size, resolution, removal of components" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This is an engineering oriented work that model STPP with intensity driven by a continuous latent states governed by a Neural-ODE, with initial states generated by a transformer encoder. The formulation sounds valid and the proposed benefits are for sparsely distributed data. The main contributions are the new formulation and the interpolation-based speedup technique." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I don't think the paper really answer the question of why it work on sparse data. There is no theoretical analysis / visualization of how the low-dimensional latent space captures the full dynamics from sparse observations. No discussion of information-theoretic bounds on what can be learned from sparse observations. It is reasonable to expect normalizing-flow based method (like Neural-STPP) not working well because the distribution is too localized, but I don't see why your method have an theoretical advantage over SOTA with kernel-based or closed spatial distribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My main questions relate the points raised in the weaknesses above." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-- The problem of learning spatiotemporal point processes is rather important, and any contribution to this problem should be well welcomed by the scientific community.\n-- The overall idea of the article is meaningful. \n-- The numerical results are rather good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article is devoted to the problem of learning spatiotemporal dynamics from randomly sampled points in space and time. This problem is particularly well suited for the situation where we have sensors that record a system, and we have to predict also the behavior of the sensors during the dynamics (e.g. meteorological sensors that are carried by currents). The method proposed in this article is based on the use of neural ODEs in a learned latent space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-- Some explanations are not properly given. For instance, I assume that they are using an ODE solver in a latent space because a direct approach would immediately incur into stiffness problems. Why not using a neural PDE solver? Why is it better to learn a latent space and use an ODE solver for a problem that is formulated as a PDE (as in Eqn 5 of the paper)? This is unclear.\n-- The latent approach makes the approach less clear, and more out of the control of the user. I suppose the authors have no idea why the encoder creates a certain latent space rather than another. A theoretical approach seems very complicated, in fact the authors limit themselves mostly to empirical results. \n-- It is unclear if a general system can be learned in this way. In a sense, we might think of the encoded latent space as a low-degree approximation of the system, but it might be that certain PDE models stemming from Eqn 5 might not be suitably tackled by such approach. \n-- One of the main claims is that the model is continuous. An interpolation task should be performed in this case to show that they can handle continuity well. They use interpolation in the method, but it is unclear if in an experiment where portions of the trajectories are completely hidden during training, could be recovered during evaluation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For what reason is the distribution of the next sensor signal's location predicted? What is the benefit of such a prediction and what computational cost does it impose? If I understand correctly, Table 2 suggests removing the point process model (which simulates the next sensor signal position and time, if I'm correct). At least according to a minimal model when following Occams Razor.\n2. The interpolation ablation is very illustrative. 
Have you tried higher-order interpolations to infer $\hat{z}(t_i)$, i.e., quadratic, cubic? What is the error incurred by the interpolation compared to modeling the full temporal grid $t_1, \dots, t_N$? Table 1 demonstrates the time improvement when using interpolations; it would be great to see the error associated with the two techniques (Interp. vs Seq.).\n3. Have you explored other solvers beyond dopri5, such as Euler, which is much cheaper? Or does the method depend on an adaptive solver to account for potentially different deltas between time steps? Figure 2 somehow suggests that the effectively processed time steps $\tau_m$ are separated by a constant time delta. Is this a requirement of the architecture?\n4. How does the latent space dimensionality $d_z$ affect the runtime? It might be interesting to report this along with its effect on the parameter count around line 375." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "_Originality:_ The combination of various different branches from machine learning is original. They are composed in an elegant and versatile way to solve spatiotemporal problems efficiently. The use of latent states enforces abstractions that push for generalizability. Intriguingly, the introduced method does not rely on complex architectures, but mostly emerges out of various MLPs that are well placed and wired.\n\n_Quality:_ Claims are well supported with experimental results, which are contrasted against several recent and competitive ML architectures. Figures are well designed and support the message conveyed in the manuscript.\n\n_Clarity:_ The manuscript is well organized, structured, and written. A rich appendix provides details about the model design, yet supplementary material to validate the results is missing.\n\n_Significance:_ Results appear significant in terms of how sparsely the data is sampled. Three synthetic problems of varying difficulty, as well as a real-world problem, demonstrate the applicability of the method. Results are reported in two metrics along with standard deviations, which helps in assessing the quality of the forecasts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A composition of different ML methods is presented to simulate spatiotemporal processes from point observations without access to the whole spatial field. The proposed approach encodes sparse context point observations into a latent state. In this latent state, the dynamic process evolution is integrated over large time steps, while a fine temporal resolution is obtained via interpolation in the latent state. A decoder projects the predicted latent states back to the observation space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The observation function is constrained to a normal distribution with fixed variance. It would be helpful to add arguments for this design choice, to what extent it limits the expressivity of the model, as well as for what problems this formulation is sufficient.\n2. Ablations showing the performance under different spatial and temporal sparsities would be highly informative to understand the quality and limitations of the model at different tasks. Presumably, e.g., Navier-Stokes likely depends on denser samples compared to Shallow Water.
Extending this ablation to the other benchmarked methods would also provide valuable insights about the models' data efficiency.\n3. Limitations are not reported. It would be valuable to understand the limits of the method, its computational cost, and the time this architecture needs to train. Also, it is unclear to what extent the method can be directly applied to a task at hand or how much fine-tuning is involved.\n4. No runtime comparison of the different models is provided. If I'm not mistaken, the model must be called for each spatial position of interest in each time step, which amounts to a large number of model calls. Thus, to extend Table 1, please provide information about the runtime of the entire model when generating a rollout of a spatiotemporal sequence of frames.\n5. More details about the differences between the introduced method and AutoSTPP would be valuable, given that these two approaches perform almost equally well. For what reason is your method superior to AutoSTPP?\n\n_Minor Comments_\n- Typo in line 306, \"withing\"\n- $N_{\text{ctx}}$ is unclear in Figure 4. What value does the variable take? It would be good to have the actual value. EDIT: C.1 provides this information; I thus suggest referring to C.1 in the caption of Figure 4." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions are related to the weaknesses:\nCould you address the issue of interpretability and the Poisson process a bit more?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written and technically sound. The methodology is clearly presented, and the experimental setup is detailed\n- The proposed model is technically sound. It effectively combines techniques from various fields, including neural differential equations, neural point processes and amortized variational inference\n- Experiments and \"ablation studies\" are comprehensive, showing the impact of many parameters of the model" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel method for modeling spatiotemporal dynamical systems from point process observations. The model integrates techniques from neural differential equations, neural point processes and amortized variational inference. The authors also introduce a technique to speed up training by addressing a computational bottleneck in latent state evaluation. The experimental results demonstrate the effectiveness of the model on challenging spatiotemporal datasets."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While focusing on predictive capability and computational efficiency, discussing the interpretability of the model would enhance its value. Can something be said about the dynamical system?\n- A little more discussion around the limitation of the Poisson process, and potential solution would have been welcome." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Spatiotemporal Dynamical Systems from Point Process Observations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=37EXtKCOkn},\nnote={under review}\n}" }, "abstract": { "value": "Spatiotemporal dynamics models are fundamental for various domains, from heat propagation in materials to oceanic and atmospheric flows. However, currently available neural network-based spatiotemporal modeling approaches fall short when faced with data that is collected randomly over time and space, as is often the case with sensor networks in real-world applications like crowdsourced earthquake detection or pollution monitoring. In response, we developed a new method that can effectively learn spatiotemporal dynamics from such point process observations. Our model integrates techniques from neural differential equations, neural point processes, implicit neural representations and amortized variational inference to model both the dynamics of the system and the probabilistic locations and timings of observations. It outperforms existing methods on challenging spatiotemporal datasets by offering substantial improvements in predictive accuracy and computational efficiency, making it a useful tool for modeling and understanding complex dynamical systems observed under realistic, unconstrained conditions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "dynamics", "spatiotemporal", "neural", "PDE", "ODE" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9649fab757ed49571be4a7de0a53949a220b0a46.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning Spatiotemporal Dynamical Systems from Point Process Observations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
37f8b1ZDzS
Safe Multi-agent Reinforcement Learning with Protection Motivation Theory
main
Withdraw
Safety;Multi-agent Reinforcement Learning;Protection Motivation Theory
reinforcement learning
Xin He;Hongwei Ge;Chunguo Wu;Jincheng Yu
~Xin_He18;~Hongwei_Ge1;~Chunguo_Wu1;~Jincheng_Yu4
3;3;3;5;5
4;2;4;4;3
2;2;3;3;3
1;2;2;2;3
2;1;1;3;3
3.8
3.4
2.6
2
2
0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please check the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea of ​​applying PMT to the Safe MARL pipeline seems quite novel, and extensive experiments on the Safe MARL benchmark validate the superiority of the proposed approach in further minimizing the cumulative cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper developed a Safe Multi-agent Reinforcement Learning Method based on the Protection Motivation Theory (PMT). The authors proposed to utilize two emotional mechanisms, fear and regret, to design fear for safety guarantee (F4SG) and regret for safety guarantee (R4SG) to improve the current primal-dual safe MARL pipeline. Experiments on safe MARL benchmarks validate the security and efficiency of their algorithms compared with SOTA baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, some weakness significantly hinders readers from further evaluating the contribution and importance of the work:\n\n1.\tAnnotation & Mathematical Derivation: the presentation of the work, especially regarding the theoretical part (part 3), is very chaotic. First, many annotations are not introduced during mathematical derivations. For example, in your introduction of FPN, $\\tilde{f}^i=F P N\\left(s; \\zeta^i\\right)$, what is $\\zeta$ here? Also, in Equation (10), what are $B$ and $T_s$ here? 
Each symbol should be introduced when it first appears in the paper.\n \n2.\tProposed Theoretical and Loss Function Design: I do agree that introducing the fear and regret mechanisms is interesting, but why should your FPN and RPN have loss functions like Equations (4) and (14)? What is the theoretical intuition and explanation for Equations (4) and (14)? Also, in Equation (3), why does the cost function suddenly have a probability distribution $p(C^i)$? In Equation (13), what do the cost function values $\mathcal{C}\left(s, a^i\right)=1$ and $\mathcal{C}\left(s, a^i\right)=0$ mean? \n\n3.\tExperiments and Hyperparameters: The experimental section needs more details about the hyperparameters used in your network training - what are the specific hyperparameter settings for each algorithm, including yours? Also, while you show the average costs, what's the actual constraint violation rate for each method? Additionally, I see you focus on improving the Lagrangian safe RL approach, but how does your method compare with those algorithms that claim zero constraint violation, like [1]? \n\n4.\tThe proposed PMT framework doesn't seem specifically designed for multi-agent settings - would it work equally well in single-agent scenarios? What's your motivation for choosing a multi-agent setting? The paper needs to better justify why the PMT framework is particularly suitable or important for multi-agent rather than single-agent problems.\n\n[1] Liu T, Zhou R, Kalathil D, et al. Learning policies with zero or bounded constraint violation for constrained mdps[J]. Advances in Neural Information Processing Systems, 2021, 34: 17183-17193." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Could you provide detailed interpretations for the equations?\n\n2. Could you add a discussion of related works?\n\n3. Apart from the perspective inspired by PMT, could you discuss the novelty of your methods? How do your methods differ from other traditional methods?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is inspired by Protection Motivation Theory and proposes two safety assurance methods; the perspective seems novel. The experimental results show that the methods are effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes two safety assurance methods, fear for safety guarantee (F4SG) and regret for safety guarantee (R4SG), for cooperative and safe strategies in multi-agent systems. Drawing on the Protection Motivation Theory from social psychology, the authors provide a theoretical framework to guide the development of protective behaviors in learning agents.
Experimental results show that these methods achieve a promising balance between performance gains and adherence to safety constraints, with advantages over existing state-of-the-art approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I find the paper difficult to follow; many equations are listed without interpretations. Additionally, the paper lacks a comprehensive discussion of related work. While PMT serves as good inspiration for the method, I am not entirely sure how the essence of the proposed methods differs from other traditional safe MARL methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1)\tThe motivation part (page 2, lines 58-66) mentions that PMT includes multiple emotions; why were only fear and regret selected for modeling in this study?\n\n(2)\tIn the optimization of the Fear Prior Network (FPN), the quantification of fear severity relies on a prior distribution (line 137). Could this lead to instability in new or uncertain environments?\n\n(3)\tFear and regret are emotions that can naturally coexist. However, the ablation study shows that the combined model does not yield better results (page 9, lines 481-485), with the authors suggesting that it leads to overly conservative behavior. Has any exploration been done on developing a framework that effectively integrates these two emotions?\n\n(4)\tThe authors propose two separate emotion models without integration and only describe the experimental results without analyzing why each emotion adapts to different scenarios (pages 7-9, results section). Could you add an analysis in the experimental section on this aspect? Otherwise, the paper merely presents two methods without a deeper exploration of their contextual suitability." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1)\tThis paper attempts to introduce emotion modeling into multi-agent reinforcement learning, employing fear and regret to adjust agents' decision-making behaviors. This interdisciplinary innovation brings a compelling perspective to the study.\n\n(2)\tThis paper provides a detailed theoretical modeling of the proposed F4SG and R4SG methods and establishes a solid theoretical foundation for emotion modeling through mathematical formulations.\n\n(3)\tThe experimental section demonstrates the performance of F4SG and R4SG across different task scenarios, indicating that emotion modeling can achieve high performance while ensuring the safety of agents." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to enhance safety in multi-agent reinforcement learning (MARL) by integrating \"fear\" and \"regret\" emotions, inspired by Protection Motivation Theory (PMT).
Two methods are introduced: Fear for Safety Guarantee (F4SG) and Regret for Safety Guarantee (R4SG), which evaluate threat severity in states and actions to help agents avoid unsafe states or actions. Experimental results demonstrate that F4SG and R4SG effectively reduce safety violations in multi-agent environments while achieving high performance under safety constraints." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1)\tThis paper introduces “fear” and “regret” for pre-decision risk assessment and post-decision reflection, respectively. However, the mixed model doesn’t enhance performance, which contradicts real-world scenarios where humans often experience multiple emotions simultaneously. An effective framework to integrate the two emotions is lacking.\n\n(2)\tThe experimental analysis is relatively brief. Since the paper proposes two emotion models, it should provide a more detailed comparative analysis of their effectiveness in different scenarios and explore suitable application contexts to better guide practical use of the methods.\n\n(3)\tThis paper lacks a time complexity analysis, which limits the evaluation of the model’s feasibility for real-world use." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors explain Equation (3) more clearly?\nWhat are the dimensions of fi (the FPN)? What is the meaning of its different indices? This is not made clear on line 169.\nHow is Sd chosen?\nDoes the first term of Equation (3) amount to learning the cost function? This idea is used in many prior works, such as “Safe Reinforcement Learning Using Advantage-Based Intervention”.\n2. Similarly, the authors should explain Equation (14) more clearly. Are fear and regret only applicable to discrete action spaces?\n3. Is there anything novel in Sections 3.1.2 and 3.2.2, apart from the use of fear and regret, in comparison with prior works using the Lagrangian dual?\n4. It seems that for each episode in F4SG and R4SG, parameters are updated E_ppo times. What is the update frequency of the baseline algorithms? Could this be the reason why F4SG and R4SG converge faster than the baselines in Section 4.2?\nIn Sections 4.3 and 4.4, it seems that MAPPO-L achieves performance similar to F4SG and R4SG once they all converge." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors summarize many related works on safe MARL.\n2. The story from protection motivation theory makes the proposed algorithms more intuitive.\n3. Experiments on three different tasks are conducted with two MARL baselines and two safe MARL baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose two algorithms, F4SG and R4SG, to enhance agents’ safety in reinforcement learning. 
F4SG and R4SG are designed around concepts from protection motivation theory. Fear and regret are learned to provide safety in the two algorithms, respectively. Agents are then optimized with Lagrangian duality and trust-region methods. Experiments are conducted on three different tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some components of the algorithms are not clear, especially the optimization of the FPN and RPN.\n2. The application of the Lagrangian dual is a main component of the proposed algorithms, yet it has been used in many related works. Moreover, the learning of the FPN and RPN closely resembles learning a cost function.\n3. In the experiments, it seems that the curves have not converged, and the performance of the proposed algorithms is not obviously better than the baselines." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See \"Weakness\" section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper studies the important problem of safety in MARL. It is clearly written and motivated by human behavior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper leverages protection motivation theory (PMT), a model of human risk assessment, for safe MARL. The method mimics PMT by modelling fear and regret to assess threat severity and learn protective behaviors, namely not visiting certain states or taking certain actions. The algorithms that model fear and regret are called F4SG and R4SG, respectively. Experimental results demonstrate that the proposed methods are safer and more efficient than state-of-the-art safe MARL algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation seems unclear to me. The authors use human behavior as motivation but fail to point out what problems exist in current works, as discussed in the Introduction. If incorporating human behavior is a must, it should address some limitation of current works, yet this part is missing.\n\n2. The authors should consider evaluating their method on more tasks. Safe-MAMujoco, Safe MAIG and MAPDN all contain 10-20 tasks, yet the authors evaluate only two tasks in Safe-MAMujoco and one in Safe MAIG. The authors could consider adding another 3-6 tasks on Safe-MAMujoco and Safe MAIG.\n\n3. The gain in safety seems minor in Figs. 1, 2 and 3, especially compared with MACPO; there is a strong overlap between the curves of the proposed method and MACPO. I suggest the authors evaluate the safety measure on more challenging tasks.\n\n4. The problem of safe MARL is not an MDP. 
Typically, an MDP models the decision process of a single agent; the multi-agent case is commonly formulated as a Dec-POMDP or Markov game. The problem formulation is therefore incorrect. Judging from the experiments, I guess it is some sort of safe Dec-POMDP. Also refer to MACPO for its problem formulation.\n\n5. The authors should add a survey of the safe MARL literature in the preliminaries.\n\n6. In Sec. 3.1.2, many derivations are based on the existing literature. It may be better to focus on the central derivations.\n\n7. What are the guarantees for \"fear for safety guarantee\"? I would expect some type of bound, but failed to find any.\n\nMinor: The paper does not seem to follow the ICLR template and exceeds the page limit. Also, there are many grammatical errors (e.g., \"In this paper, we introduce PMT into the MARL to address the challenge safety.\" in lines 067-068)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nhe2024safe,\ntitle={Safe Multi-agent Reinforcement Learning with Protection Motivation Theory},\nauthor={Xin He and Hongwei Ge and Chunguo Wu and Jincheng Yu},\nyear={2024},\nurl={https://openreview.net/forum?id=37f8b1ZDzS}\n}" }, "abstract": { "value": "A challenging problem for implementing multi-agent reinforcement learning (MARL) in real-world applications is ensuring the safety of cooperative strategies. According to the Protection Motivation Theory (PMT), threat appraisals result in negative emotions and elicit protective behaviors, which are instrumental for coping with security threats. Drawing inspiration from the PMT, we focus on two discrete emotions--fear and regret--to evaluate threat severity and facilitate multiple agents to learn protective behaviors. These can promote cooperative decision-making with fewer safety violations. Specifically, we propose two safety guarantee methods with PMT: fear for safety guarantee (F4SG) and regret for safety guarantee (R4SG), utilizing the active inference technique to model the emotions of fear and regret separately. The threat severity evaluated by these emotions influences the state value and the executed action respectively, which avoids the potential threat of visiting certain states or taking certain actions. Experimental results demonstrate that our proposed methods are safer and more efficient than state-of-the-art baselines on challenging tasks in safe MARL benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Xin_He18", "~Hongwei_Ge1", "~Chunguo_Wu1", "~Jincheng_Yu4" ] }, "authors": { "value": [ "Xin He", "Hongwei Ge", "Chunguo Wu", "Jincheng Yu" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Safety", "Multi-agent Reinforcement Learning", "Protection Motivation Theory" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review. 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "he|safe_multiagent_reinforcement_learning_with_protection_motivation_theory" }, "pdf": { "value": "/pdf/8c16177f9cc9a4b667765f28399fc902dcdb17a4.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/24a516e4f72d3238ef8c60266d543b849aad289d.zip" }, "title": { "value": "Safe Multi-agent Reinforcement Learning with Protection Motivation Theory" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]