Dataset schema (column: dtype, range/classes):

id: stringlengths (10 to 10)
title: stringlengths (3 to 179)
track: stringclasses (1 value)
status: stringclasses (3 values)
keywords: stringlengths (2 to 2.39k)
primary_area: stringclasses (21 values)
author: stringclasses (501 values)
authorids: stringclasses (501 values)
aff: stringclasses (1 value)
aff_domain: stringclasses (1 value)
position: stringclasses (1 value)
rating: stringclasses (355 values)
confidence: stringlengths (0 to 19)
soundness: stringclasses (642 values)
contribution: stringclasses (596 values)
presentation: stringclasses (782 values)
rating_avg: float64 (0 to 9)
confidence_avg: float64 (0 to 5)
soundness_avg: float64 (0 to 4)
contribution_avg: float64 (0 to 4)
presentation_avg: float64 (0 to 4)
corr_rating_confidence: float64 (-1 to 1)
project: stringclasses (1 value)
github: stringclasses (1 value)
Review: listlengths (2 to 10)
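For working with this schema programmatically, a minimal sketch of one row as a typed Python dict. This is an assumption-laden illustration, not part of the dataset: the `PaperRow` name and the use of `TypedDict` are my own, and the anonymized columns (`author`, `authorids`, `aff`, `aff_domain`, `position`, `project`, `github`) are filled with empty-string placeholders. Field names and sample values are taken from the schema and the first record below.

```python
from typing import List, TypedDict


class PaperRow(TypedDict):
    """One row of the dataset, mirroring the schema above."""
    id: str                      # 10-char OpenReview submission id
    title: str
    track: str
    status: str
    keywords: str                # ';'-separated keyword list
    primary_area: str
    author: str                  # anonymized in this dump
    authorids: str
    aff: str
    aff_domain: str
    position: str
    rating: str                  # per-review scores, e.g. "3;6;8;8"
    confidence: str
    soundness: str
    contribution: str
    presentation: str
    rating_avg: float
    confidence_avg: float
    soundness_avg: float
    contribution_avg: float
    presentation_avg: float
    corr_rating_confidence: float
    project: str
    github: str
    Review: List[dict]           # 2-10 structured review records


# Sample instance built from the first record in this dump.
row: PaperRow = {
    "id": "25kAzqzTrz",
    "title": "Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning",
    "track": "main",
    "status": "Active",
    "keywords": "deep semi-supervised learning;generalization error;feature learning",
    "primary_area": "unsupervised, self-supervised, semi-supervised, and supervised representation learning",
    "author": "", "authorids": "", "aff": "", "aff_domain": "", "position": "",
    "rating": "3;6;8;8",
    "confidence": "3;4;4;3",
    "soundness": "2;3;3;3",
    "contribution": "1;3;4;4",
    "presentation": "4;3;3;3",
    "rating_avg": 6.25,
    "confidence_avg": 3.5,
    "soundness_avg": 2.75,
    "contribution_avg": 3.0,
    "presentation_avg": 3.25,
    "corr_rating_confidence": 0.366508,
    "project": "", "github": "",
    "Review": [],                # placeholder; the real column holds review dicts
}
```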
Sample record 1:

id: 25kAzqzTrz
title: Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning
track: main
status: Active
keywords: deep semi-supervised learning;generalization error;feature learning
primary_area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
rating: 3;6;8;8
confidence: 3;4;4;3
soundness: 2;3;3;3
contribution: 1;3;4;4
presentation: 4;3;3;3
rating_avg: 6.25
confidence_avg: 3.5
soundness_avg: 2.75
contribution_avg: 3
presentation_avg: 3.25
corr_rating_confidence: 0.366508
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why can’t the augmentation be agnostic about what the patch contains, what is the theoretical bottleneck here? What impact would a uniformly random mask have? Could there be a more realistic setting where distribution-agnostic data augmentation could still achieve similar results?\n2. While the theory here follows very closely to that of AllenZhu and Li [2023], it seem to have missed some previous works exploring the effects of augmentation on feature learning process [1,2]. The authors can refer to the designs of augmentations and their corresponding analysis in these papers.\n\n[1] Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning. Zixin Wen, Yuanzhi Li [ICML 2021]\n\n[2] The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning. Zixin Wen, Yuanzhi Li [NeurIPS 2022]" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides a new theoretical analysis of the FixMatch method, particularly on multi-view structured data distributions, demonstrating its effectiveness in learning features and its advantages over supervised learning. 
The characterization of FixMatch's two-stage learning process is insightful, offering a clearer understanding of how the model learns from both supervised and unsupervised data.\n\n2. The authors propose a new semantic-aware augmentation technique that aligns with their theoretical findings, which improved the performance of FixMatch." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the feature learning process of neural networks trained with the FixMatch method, which is a semi-supervised learning method, demonstrating its theoretical advantages on data distributions with a “multi-view” structure. The authors characterize the FixMatch learning process as a two-stage process: initially, the model learns like supervised learning and learns most of the features, followed by a second stage where it learns the missing features through unsupervised learning from augmented data. Based on these theoretical insights, the authors introduce a semantic-aware augmentation in FixMatch to enhance its performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The assumptions regarding data augmentation appear artificial. The augmentation method knows which feature is in each patch and can distinguish between feature and noise patches. The augmentation randomly mask the noise patch and one of the feature, to enable the FixMatch to focus on the unlearned features. Even though such augmentation can be easily achieved in the theoretical setting, it is smarter than what is originally used in FixMatch.\n2. The proposed SA-FixMatch, although is interesting and shares closer connection to the theory, introduces added complexity by using Grad-CAM for augmentation, which can slow down training." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Do you feel there is more potential to be extracted from the CutOut-SA line of thinking? For example, could doing multiple cutouts on the image to enforce exactly one classifying feature being present in the strong augmentation be a future avenue of improvement? Or did you already try multiple variants of such schemes and found the one you eventually presented in the paper to be the best?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and gave me the impression that I was able to follow its goal.\n2. The results from FixMatch-SA seem to confirm the pertinence of the analysis, and intuitively I found it made logical sense.\n - Some gains from CutOut-SA are truly impressive, including for recent FixMatch derivatives.\n3. I particularly liked that the paper didn't limit itself to a theoretical analysis but also provided an experimental validation on common SSL benchmarks.\n4. I find the FixMatch-SA method very elegant and effective and appears simple to implement which I consider a quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes two contributions:\n1. 
A theoretical analysis to explain why semi-supervised learning (SSL) techniques such as FixMatch generalize better than classical supervising learning (SL).\n2. A new method FixMatch-SA (semantically aware) which builds on the analysis to further enhance FixMatch.\nThe improved performance of FixMatch serves to experimentally corroborate the theoretical analysis.\n\nI understood the substantiating argument of the theoretical analysis as follows: the correct classification of sample is typically based on multiple features (at least 2). In SL, learning of all features is not necessary to minimize the loss. Meanwhile, in FixMatch, the strong augmentation drops some features and therefore requires the network to learn all the features to minimize the loss. \n\nDisclaimer: the theoretical analysis felt above my skill, mathematically speaking. I tried to follow it to the best of my ability but there could be alternate conjectures which I am not aware of to explain the observed generalization gains." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My own lack of knowledge on the theoretical side made it hard for me to estimate the originality of the approach. It's not per-se a weakness of the paper but rather a warning that I simply don't know.\n\nTypos (obviously this didn't influence my rating, it's for authors to polish their manuscript)\n- Line 87, wrong citation \"FixMatch (Xie)\" => \"FixMatch (Sohn)\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Questions were asked in the section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The presentation of this work is impressive. The paper is not only easy to read, but the authors do a good job of highlighting their contributions and how it differs from previous works. The writing is clear and concise, and the figures and tables (although there are not that many) are not needlessly overcomplicated.\n- The proposed SA-FixMatch seems like a intuitive improvement to FixMatch, and does show to improve on the performance of FixMatch.\n- The theoretical justification in Section 4 seem to be sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This paper provides a theoretical analysis, aimed at answering why FixMatch-like algorithms (for Semi-Supervised Learning, or, SSL) generalizes better than supervised learning. \n- The analysis is focused on CNNs (unlike previous comparison works that provide analysis by using linear model assumptions)\n- The paper proposes a improvement to FixMatch, called Semantic-Aware FixMatch (SA-FixMatch). The SA-FixMatch essentially masks out the semantically relevant parts of a high-confidence image sample (the region that is identified by GradCAM) in a CutOut-like fashion." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My main concern of this paper is the overall motivation. My main question for the authors is: Why do we need to have a good theoretical understanding of why FixMatch generalizes better than Supervised Learning? 
The following is my thought process: Let's say we have a dataset that is fully labeled. In this case, we would obviously use supervised learning (since we have all labels) to train the model. But now, let's consider the case where only 10% of the data is labeled. Obviously, given that SSL can leverage 90% of the dataset while SL can only leverage 10% (9x the size), we would apply SSL to train the model. We already know that leveraging more data will lead to better performance - so then what is the point of trying to theoretically understand why FixMatch generalizes better then SL, given that SL in this case is using a subset of the data that FixMatch is using? The worst case for SSL is that it performs equally as SL. As shown in the paper, FixMatch learns more semantic features, but that seems a bit obvious, since FixMatch is able to utilize the unlabeled samples, while SL receives no training from these unlabeled samples. Perhaps a fairer (and more interesting) setting would be to compare SSL vs Supervised learning, given the same number of total training samples (where the 'unlabeled' samples of the SSL dataset is labeled for SL). I hope I am not coming across as too offensive with this comment, but I am just trying to understand the significance of such analysis. I hope the authors can convince me otherwise. \n\n- The implications of the analysis is somewhat underwhelming. \n - The proposed SA-Cutout does not feel like a novel contribution, given that there are previous works that use guided data augmentation for other tasks (e.g., \"Crafting Better Contrastive Views for Siamese Representation Learning\" in CVPR 2022). Also, there are some gradient-based masking techniques, such as \"Adversarial Dropout for Supervised and Semi-supervised Learning\" in AAAI 2018 that have very similar motivations as SA-Cutout, and the resulting solution is quite similar as well (masking out highly semantic regions).\n - Are there any other takeaways from this analysis? 
For example, could this type of analysis be extended to a broader scope?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The theory relies on 3-layer ConvNets. However, the experiments obviously hold for a wider range of architectures. Is it possible to extend it to more sophisticated architectures. For example, ConvNets with residual connections, additional layers, ViTs? If so, would it change the results somehow? Can we derive conclusions that certain architectures generalize better with SL compared to other architectures? That could be really exciting!\n\n2. Can you explain in theorem 4 why the margin scales as log(k) (where k is the number of classes). How come we get better classification margin for a more complex task with more classes? \n\n3. In theorem 4 you use $T=poly(k)/\\eta$ to represent the amount of iterations until convergence. What should I expect the degree of the polynomial and its leading coefficient to be? I want to have some concept of how many iterations we need." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The theory presented is compelling. 
The authors provide a strong argument, without relying on overly strict assumptions, that training a realistic neural network (a 3-layer ConvNet) with FixMatch-type algorithms allows us to (1) fit the training data and (2) generalize well to unseen samples. This stands in contrast to supervised learning, where the model often fails to generalize well to certain types of samples within the distribution.\n\nAdditionally, the authors propose an improved variation of a FixMatch algorithm, demonstrating that their theory not only explains the success of this family of algorithms but also predicts new results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the theoretical aspects of why the SSL method FixMatch outperforms supervised learning method in generalization for deep neural networks (DNNs). Previous studies have shown that SSL methods like FixMatch achieve higher test accuracy, but the mechanisms behind this advantage are not obvious. The authors provide theoretical justification for the enhanced generalization of FixMatch for convolutional neural networks. Their analysis reveals that FixMatch captures all relevant discriminative features for each class, whereas SL approaches tend to capture only a random subset of features, an effect attributed to the lottery ticket hypothesis. This framework is shown to extend to other SSL methods similar to FixMatch, such as FlexMatch, FreeMatch, Dash, and SoftMatch. Based on these findings, the authors propose an enhanced version of FixMatch, called Semantic-Aware FixMatch (SA-FixMatch), which is validated experimentally, demonstrating improved generalization." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this paper lies in its technical presentation.\n\nWhile I appreciate that the theoretical framework developed here is complex, making it challenging to present in an accessible way, I believe certain aspects could have been simplified for clarity.\n\nThis could be achieved by following these guidelines:\n1. Use standard notations. For instance, the authors use symbols like $Z_l$ to denote a labeled dataset, whereas $S$ is typically used for sample sets.\n2. Avoid re-using variables. In lines 126-128, for example, the symbol $i$ is used for multiple purposes, such as indexing both patches and classes, which can be confusing.\n3. Simplify complex definitions. Concepts like Definition 1 could be broken down and explained in more detail, with examples illustrating each component. Providing an example of a distribution that meets these conditions would clarify the distinction between single- and multi-view samples and help readers appreciate the significance of the conclusions in lines 284-287.\n\nMinor comment:\nIn the theorems (e.g., Theorem 4), instead of writing \"for any \\((x,y) \\sim D\\) with probability ..., we have ...,\" I would suggest phrasing it as \"with probability ... over the selection of \\((x,y) \\sim D\\), we have ...\". It is just more mathematically accurate and is consistent with the appendix." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=25kAzqzTrz},\nnote={under review}\n}" }, "abstract": { "value": "Semi-supervised learning (SSL), exemplified by FixMatch (Sohn et al., 2020), has shown significant generalization advantages over supervised learning (SL), particularly in the context of deep neural networks (DNNs). However, it is still unclear, from a theoretical standpoint, why FixMatch-like SSL algorithms generalize better than SL on DNNs. In this work, we present the first theoretical justification for the enhanced test accuracy observed in FixMatch-like SSL applied to DNNs by taking convolutional neural networks (CNNs) on classification tasks as an example. Our theoretical analysis reveals that the semantic feature learning processes in FixMatch and SL are rather different. In particular, FixMatch learns all the discriminative features of each semantic class, while SL only randomly captures a subset of features due to the well-known lottery ticket hypothesis. Furthermore, we show that our analysis framework can be applied to other FixMatch-like SSL methods, e.g., FlexMatch, FreeMatch, Dash, and SoftMatch. Inspired by our theoretical analysis, we develop an improved variant of FixMatch, termed Semantic-Aware FixMatch (SA-FixMatch). Experimental results corroborate our theoretical findings and the enhanced generalization capability of SA-FixMatch." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "deep semi-supervised learning", "generalization error", "feature learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/961afa0290a0841924253ce00810f787489068b7.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/8dc5731d46ade159a79cf5558355c1da9c9fbb97.zip" }, "title": { "value": "Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
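The aggregate columns are derivable from the `;`-separated per-review score strings: each `*_avg` is a plain mean, and `corr_rating_confidence` is the Pearson correlation between the rating and confidence vectors. A small sketch (helper names `scores`, `mean`, and `pearson` are my own) reproducing the stored values for the first record:

```python
import math


def scores(field: str) -> list[float]:
    """Parse a ';'-separated score string such as '3;6;8;8'."""
    return [float(s) for s in field.split(";")]


def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)


def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation, as stored in corr_rating_confidence."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


rating = scores("3;6;8;8")       # per-review ratings of record 1
confidence = scores("3;4;4;3")   # per-review confidences of record 1

print(mean(rating))                           # 6.25  (matches rating_avg)
print(mean(confidence))                       # 3.5   (matches confidence_avg)
print(round(pearson(rating, confidence), 6))  # 0.366508 (matches corr_rating_confidence)
```

The same computation recovers record 2's correlation: `pearson([5,6,6,6], [5,4,4,5])` is about -0.57735, the stored `corr_rating_confidence` of that record.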
Sample record 2:

id: 25l4SWH2eS
title: IFAdapter: Instance feature control for grounded Text-to-Image Generation
track: main
status: Active
keywords: Generative diffusion models;Layout to image generation
primary_area: generative models
rating: 5;6;6;6
confidence: 5;4;4;5
soundness: 2;3;4;3
contribution: 2;3;3;3
presentation: 3;3;3;3
rating_avg: 5.75
confidence_avg: 4.5
soundness_avg: 3
contribution_avg: 2.75
presentation_avg: 3
corr_rating_confidence: -0.57735
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why IFAdapter enables it to be seamlessly applied to various community models. For example, appearance queries are trained , how do the authors ensure that the feature has sufficient generalization capabilities.\n2. Why can appearance-related features be extracted from the bounding box through Fourier and MLP, especially when the model inference cannot obtain image input?\n3. Is the model-free method based on the same backbone as the proposed method? Are all the comparison methods that require training trained on the provided dataset? The VLM used to annotate the proposed dataset is the same as the VLM used in the evaluation metric, which may be unfair to methods that are not trained on the proposed dataset.\n4. VLMs are likely to produce hallucinations. How do the authors ensure that the annotations of the provided dataset are free of hallucinations?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ The authors propose an important task, namely Instance Feature Generation. In addition, the authors also provide a benchmark and a verification pipeline\n+ The proposed method seems to achieve good results. 
Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to improve the capability of Text-to-Image diffusion models in generating precise features and positioning multiple instances in images. The proposed IFAdapter enhances feature accuracy by integrating additional appearance tokens and constructing an instance semantic map, ensuring that each instance's features align accurately with its spatial location. The IFAdapter is designed as a flexible, plug-and-play module, allowing enhanced control without retraining." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tCompared with instance diffusion and dense diffusion, this paper shows a small number of objects and does not show the situation where the object is denser.\n-\tThe details of the method are not explained clearly. The details of the dataset construction and the baseline setting are not presented clearly. Ablation experiments lack a basic baseline presentation.\n-\tAuthors should carefully check for errors in the text. For example, a sentence appears twice in succession in Introduction." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Is the proposed model able to generate images with some intricate prompts. 
For example, a blue dog with red legs running on a colorful river, (with dog, legs, river assigned to different boxes). I want to see some limitations of the proposed method, or in other words, I want to know how IFA copes with the semantic issues which may inherit from the base model given out-of-domain prompts and instructions. I would love to revise my rating after further discussion with the authors." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is well written and easy to follow. \n2. The designed approaches efficiently incorporate the external object semantics and layout information into the generation process. The proposed Appearance Tokens aggregate the textual semantic and bbox information with learnable tokens. The proposed Instance Semantic Map accurately reallocates the spatial area of different objects, and solves the semantic fusion and feature leakage problem. \n3. The illustrated visual results are impressive, which shows clear superiority against competing baselines.\n4. The proposed method is compatible with various community models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Instance Feature Adapter (IFA) for layout-to-image generation. The key insight of IFA is to incorporate additional appearance tokens corresponded to different objects, so as to steer the T2I models to generate images that convey precise layout information and text semantics. To achieve this, IFA first leverages learnable instance tokens to aggregate the textual and bounding box information of the specific objects. 
To cope with the feature leakage problems, IFA further introduce Instance Semantic Map strategy to reallocate the spatial area for different semantics, so as to alleviate the feature conflicts between different objects during external feature tokens injection process. A new benchmark is proposed, the visual improvement over different baselines is significant. Further, the proposed method is a plug-and-play module, which can be adapted to various community models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The design module is supposed to interface with the text encoders, and both Appearance Tokens and Instance Semantic Map introduce the attention mechanism. Will it be computational costly during the inference process. There should be a detailed discussion.\n2. The size and shape of different objects seem unstable when applying IFA to different community models (Fig. 4), is it caused by the re-weight strategy from Instance Semantic Maps?\n3. The L2I problem is not a novel task, and the main novelty mainly lies in the implementation detail of the layout incorporation strategy, which may not bring significantly inspirations to the community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "A few suggestions to improve the paper. \n- Please remove point 3 in contributions. 
Comprehensive experiments are not a contribution but are rather required to support the claims made in the paper. \n- The related works section has not been used correctly in this work according to my opinion. Authors just cite all relevant work but fail to differentiate their work from the literature. Please discuss how IFG is different from prior work and how their method is different from previously proposed methods in the related works section.\n- If authors believe local CLIP score is suboptimal. I would recommend authors show (quantiatively) why VLMs are better than CLIP for this task. Please refrain from introducing a new metric and benchmark unless absolutely necessary.\n- L77-78 is a repetition. \n\nI'm willing to improve my rating if authors address the weakness section satisfactorily." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The perceiver like Resampler design to extract fixed set of appearance tokens is novel and effective. This addresses the problem of only utilizing the EoT token from the text encoders. \n- The gated semantic fusion to address multiple overlapping instances is very useful for layout to image generation methods. \n- The proposed method is simple and is architecture agnostic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This work tackles instance Instance Feature Generation (IFG) task, i.e. train a model that can generate images given a global caption, spatial locations and detailed local captions as conditioning.\n- Authors introduce IFAdapter, a plug-and-play module for improving IFG.\n- The IFAdapter first extracts a fixed number of appearance tokens from a detailed instance caption and its spatial location. 
Next, a 2D map called Instance Semantic Map (ISM) is constructed using the bounding boxes of instances to aid the adherence of the model to spatial location conditions.\n- IFAdapter is architecture agnostic and can be incorporated into existing open-source text to image models and authors show its effectiveness on the newly introduced IFG Benchmark constructed from the COCO dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Authors claim IFG as their first contribution. But this task has already been introduced in InstanceDiffusion [Wang et. al 2024c]. InstanceDiffusion uses detailed instance captions in addition to the global caption. What is the difference between their setup and IFG?\n- The experimental section needs heavy work to make this work ready for publication. With exponential progress in generative modeling, it is hard to control the experimental settings with other models, especially the training data. But that doesn't mean all other settings can be held constant to properly understand the contributions. InstanceDiffusion and GLIGEN use SD1.5 as their base generation model but authors use SDXL an already powerful base generator. This makes it hard to understand the improvements in Table 1 and 2. I recommend authors report numbers with SD1.5 as the base generator or retrain InstanceDiffusion with SDXL (since their code is available) to properly support their claims.\n- Authors introduce a new benchmark and evaluation metric for this task. Why can't they use the evaluation setup and metrics as InstanceDiffusion? If authors find flaws in InstanceDiffusion's setup, I recommend authors point it out and discuss the advantages of the IFG Benchmark (setup) and IFS Rate (metric). There is no point in creating multiple new benchmarks when existing ones are already rigorous. For IFS Rate authors use Grounding DINO whereas InstanceDiffusion uses YOLO. 
Please compare with InstanceDiffusion using their exact setup (COCO and LVIS val set and their metrics) to support your claims.\n- Authors claim that a lightweight network $f$ provides an \"importance\" score for location (x,y) in the ISM construction and use it to compute $D(x,y)$. Please show qualitative or quantitative evidence that the network $f$ in fact does what is claimed in the paper. While the idea sounds reasonable, I question how $f$ learns to predict the right \"importance\" scores without supervision." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please kindly provide visual comparisons between the ground truth (GT) and the generated images." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. A plug-and-play component that can be integrated with various existing models to enhance layout control capabilities.\n2. The paper also includes a user study, offering valuable insights into the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors aim to tackle the challenge of achieving controllability in generating precise instance features. 
To do this, they introduce the Instance Feature Adapter (IFAdapter), designed for instance-level positioning and feature representation. Specifically, the IFAdapter employs learnable appearance queries that extract instance-specific feature information from descriptions, creating appearance tokens that complement EoT tokens. Additionally, the IFAdapter constructs a 2D semantic map to link instance features with designated spatial locations, providing enhanced spatial guidance. In areas where multiple instances overlap, a gated semantic fusion mechanism is utilized to mitigate feature confusion.\n\nTo validate their approach, the authors have created a new dataset, referred to as the COCO IFG benchmark. They leverage existing state-of-the-art Vision Language Models (VLMs) for annotation, resulting in a dataset with detailed instance-level descriptions. Experimental results indicate that the proposed plug-and-play component surpasses baseline models in both quantitative and qualitative assessments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The quality of the proposed dataset has not been evaluated. It appears that all ground truths (GTs) are generated by existing Vision Language Models (VLMs). A human-level quality assessment would be beneficial for greater impact within the community.\n2. The assertion that the proposed component can “seamlessly empower various community models with layout control capabilities without retraining” (l.113) may be misleading. The IFAdapter is fundamentally a training-based method, and the phrase “without retraining” only holds true when applied to spaces closely aligned with COCO IFG, as the IFAdapter does not demonstrate zero-shot capabilities in this paper.\n3. The semantic-instance map does not appear to be novel. 
Please refer to \"BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion\" (ICCV 2023) and other zero-shot L2I methods for comparison.\n4. The appearance tokens show only minor improvements in Table 3. Additional explanations regarding this observation would be appreciated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024ifadapter,\ntitle={{IFA}dapter: Instance feature control for grounded Text-to-Image Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=25l4SWH2eS},\nnote={under review}\n}" }, "abstract": { "value": "While Text-to-Image (T2I) diffusion models excel at generating visually appealing images of individual instances, they struggle to accurately position and control the features generation of multiple instances. The Layout-to-Image (L2I) task was introduced to address the positioning challenges by incorporating bounding boxes as spatial control signals, but it still falls short in generating precise instance features. In response, we propose the Instance Feature Generation (IFG) task, which aims to ensure both positional accuracy and feature fidelity in generated instances. To address the IFG task, we introduce the Instance Feature Adapter (IFAdapter). The IFAdapter enhances feature depiction by incorporating additional appearance tokens and utilizing an Instance Semantic Map to align instance-level features with spatial locations. The IFAdapter guides the diffusion process in a plug-and-play module, making it adaptable to various community models. For evaluation, we contribute an IFG benchmark and develop a verification pipeline to objectively compare models’ abilities to generate instances with accurate positioning and features. 
Experimental results demonstrate that IFAdapter outperforms other models in both quantitative and qualitative evaluations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generative diffusion models", "Layout to image generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/feb460b82087722d3ac83234da7c328bfc80b3e8.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/734519134f65a9596cab0c41d134d8e3a58e21a3.pdf" }, "title": { "value": "IFAdapter: Instance feature control for grounded Text-to-Image Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
26kgSlMmhA
Any-Property-Conditional Molecule Generation with Self-Criticism using Spanning Trees
main
Active
molecules; transformers; masking; molecule generation; property conditional generation
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;5;5;5;6
3;4;4;2;4;3;2
3;2;3;3;2;3;3
3;1;2;3;2;2;3
2;2;4;4;3;2;3
4.571429
3.142857
2.714286
2.285714
2.857143
-0.420084
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the OOD setting, how does the model’s performance vary under different guidance settings, especially for extreme or non-physical property values?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents molecule generation models, allowing multi-property control and self-assessment of generated molecules. It’s well-designed, with detailed experiments showing strong results across different datasets. The writing is clear and structured." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents STGG+, a model for molecule generation with conditional property control. The model also has the ability of self-criticism to select optimal outputs. It achieves high validity and diversity in generated molecules, efficiently handling both typical and extreme properties." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The self-criticism mechanism for filtering generated molecules based on property predictions is a key feature, but there is limited evaluation of its accuracy. A detailed analysis would be necessary.\n\nI am not an expert in this field, so I will lower my confidence score." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In SMILES notation, the same molecule can have multiple string representations. Based on my (limited) understanding of STGG, this ambiguity seems present as well. How are the molecules canonicalized?\n2. In Tables 3 and A.10, do all methods reach peak performance only after generating 1 million molecules? Is the search space the same across methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. To my knowledge, this manuscript is the first to thoroughly examine STGG for reward conditioning and/or optimization.\n2. The work reflects a substantial effort to assess STGG+'s capabilities. Overall, the approach appears methodologically sound.\n3. Molecular property optimization is an open challenge. Given its competitive performance compared to existing algorithms and its use of a (somewhat) unique molecular representation, I expect this work will attract reasonable interest." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors build upon spanning tree-based graph generation methods to produce valid molecules with desired properties. 
They enhance the original network architecture by adding property embeddings and incorporate a properties predictor head for joint training. Through the use of classifier-free guidance and conditioning on these properties, the authors demonstrate that STGG+ can generate molecules conditioned on specific properties or with high reward values." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The primary limitation of this paper is that generating 'valid' molecules does not guarantee synthesizability. Many molecules presented in the appendix would be very challenging, if not impossible, to synthesize. Meanwhile, some baseline methods may perform slightly worse on reward but produce molecules that are easier to synthesize, avoiding \"reward hacking.\" A fairer comparison would involve evaluating the reward optimization performance of synthesizable molecules across different algorithms.\n2. While novelty is not an ideal measure of a paper's value, this work is highly empirical, with limited theoretical insight. This puts more emphasis on competitive performance, yet the paper lacks adequate baseline comparisons for reward optimization. Previous work has shown that methods such as GraphGA, LSTM-HC, and Reinvent are effective at maximizing OOD reward, and these baselines should be included in Sections 4.3–4.5 (particularly Section 4.5, which currently lacks any baseline comparison). This is especially relevant as the random guidance approach for OOD generation resembles slightly enhanced random sampling with a reward proxy.\n3. The molecular properties selected for optimization in this study are _very_ simple. For instance, molecular weight can be adjusted by adding or removing atoms, and logP by incorporating ionic groups (which the model does). Optimizing HOMO-LUMO gaps within the QM9 dataset is not useful, as these molecules contain only 9 atoms. These problems are generally considered solved.\n4. 
Although the work is extensive, certain details are presented inconsistently or lack substantiation. Some claims are unsupported by data (e.g., statements like \"other [configurations] were not beneficial/did not improve performance\" lack any data references). Additional issues include appendix figure captions that are unclear and lack cross-references in the main text (e.g., molecules with QED > 1 in figures 8 and 14), and captions inappropriately implying low QED correlates with implausibility (e.g., figure 9). Many terms are undefined, including \"synthe. MAE,\" \"HOMO/LUMO,\" \"SAS,\" as well as the precise definition of diversity used here. Additionally, error bars are missing in all tables." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In Table 2, why not report the MAE at different percentiles, instead of only the MinMAE? It's possible that the model simply memorizes some extreme cases seen in training so as to achieve a good minimum MAE." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper is presented with sufficiently clear descriptions.\n- The authors explored a wide range of techniques that can be applied in the under-explored context of multi-property conditional generation." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed to extend Spanning Tree-based Graph Generation (STGG) to multi-property conditional generation, namely STGG+, with improvements to successfully engineer and implement the concept. STGG+ achieves SOTA on both in-distribution and OOD conditional generation, as well as reward maximization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It seems to me that the authors invented complicated ad-hoc designs and specifically engineered to fix any issues that may arise, for example by masking the creation of rings when reaching max number (100), or alternating the use of CFG and ranking via a property-predictor. I'm afraid this hampers the overall generality of the proposed method.\n- Ablation studies are missing. What's the effect of the improved Transformer architecture against the vanilla one? How does the auxiliary property prediction loss contribute to the results? The same applies to CFG w/ and w/o random guidance, the masking mechanism, the ring overflow treatment, the order randomization, and the automatic construction of vocabulary instead of a predefined one. Detailed ablations are needed to validate the authors' special designs, and provide more insight to the community." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My questions are closely related to the weaknesses mentioned above. The authors are encouraged to address the points raised in the weaknesses section. Some questions include but are not limited to:\n1. The STGG+ approach claims improvements over SMILES-based molecule generation methods. **What are some of the improvements that the SMILES representation fails to achieve, or achieves only with difficulty, compared to the STGG+ representation?**\n2. The spanning-tree representation allows both explicit and implicit hydrogen atoms. Are the generated molecules restricted to explicit hydrogen atoms or can they also have implicit hydrogen atoms?\n3. How does the model/evaluation handle canonicalization of the generated sequences/molecules?\n - Does uniqueness consider canonicalization or does it only consider differences in the generated sequences?\n4. For reporting the property MAEs, was an external property predictor used for the evaluation? How is MinMAE reported?\n - If an external property predictor was used, provide the details of the external predictor for MolWt, LogP, QED, and HOMO-LUMO gap." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "### Originality\n1. This work proposed an improved method of the STGG model for any property conditional molecule generation.\n1. This work introduced classifier-free guidance and self-criticism into the transformer architecture.\n\n### Quality\n1. The proposed method is shown to improve the validity and diversity of generated molecules.\n1. The method is also shown to better generate molecules with desired/conditioned properties.\n1. 
The results are shown across multiple datasets and properties.\n\n### Clarity\n1. The STGG+ architecture is clearly explained.\n\n### Significance\n1. Any-property conditional generation is a challenging yet important task in technical applications. For instance, in drug discovery, it is important to generate molecules with desired curative properties and avoid molecules with toxic properties." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed the STGG+ method, an improved version of Spanning Tree-based Graph Generation (STGG), for generating novel molecules with multi-property conditional generation. Architecture-wise, this work introduced\n\n1. an improved Transformer with Flash-Attention, RMDProp, etc.\n1. an extended STGG model for more robust graph generation of molecules.\n\nBy randomly masking some properties at training time and using Classifier-Free Guidance (CFG), the model was shown to generate novel in- and out-of-distribution molecules with any property conditional generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As a computational chemist with expertise in molecules (and SMILES), I am concerned with\n1. the contribution and improvement of this work, STGG+, to the original STGG model or the original SMILES-based molecule generation methods.\n2. this work's representation of chemistry and molecules in terms of correctness and novelty.\n\n**The STGG+ representation of molecules is within the capabilities of SMILES representation.**\n1. The STGG+ improvements to STGG are not significant enough:\n - the proposed improvements such as masking of invalid tokens, automatic calculation of valency, etc., seem similar to adding if-else conditions to improve the original STGG model.\n - In my opinion, these improvements should be learned by the model itself during training. 
The model itself should learn to avoid invalid tokens and keep track of valency. These are all fundamental grammar rules that the model should learn by itself.\n - If these constraints are manually added, it limits the efficiency of the generation process. For example, valency calculation can be intricate in the generation process when rings are involved.\n - Because of these manual implementations, I am not convinced that the STGG+ representation is significantly better than the SMILES representation in terms of ensuring valid molecules.\n2. The proposed benefits of STGG+ (spanning-tree representation) compared to SMILES representation are not entirely true. SMILES can also achieve the claimed benefits with similar modifications. For example,\n - In SMILES, rings are represented by two identical numbers at the beginning and end of the ring. For cyclohexene, its SMILES representation can be `C1CCCCC=1` (or `C1-C-C-C-C-C=1`) and its spanning-tree representation can be `[bos]C[bor]-C-C-C-C-C=[eor1][eos]`. In the spanning-tree representation, a `[bor]` token must be paired with a `[eor#]` token before `[eos]` to form a valid molecule. Whereas in SMILES, a ring-starting number must be paired with the same number before the end of the string.\n - Automatic calculation of valency can be done in SMILES as well in the same fashion as STGG+ since the spanning-tree representation and the SMILES representation are interchangeable during the generation process.\n\n**This work lacks clarity on the spanning-tree representation such as explicit/implicit hydrogen atoms and canonical representation.**\n1. In Figure 1, line 140, the spanning-tree representation used a combination of explicit and implicit hydrogen atoms. The nitrogen atom was shown with an explicit hydrogen atom, and all the carbon atoms were shown without hydrogen atoms (implicit). 
However, from Appendix A.4, it seems that the vocabulary is collected for spanning tree representations with explicit hydrogen atoms.\n1. The above point leads to the question of canonical representation - The same molecule can have different SMILES representations and different spanning-tree representations. In other words, different sequences of tokens can point to the same molecule. For example, `[bos]C[bor]-C-C-C-C-C=[eor1][eos]`, `[bos]C[bor]-C-CH2-C-C-C=[eor1][eos]`, and `[bos]C[bor]-CH2-C-CH2-C-C=[eor1][eos]` can all represent the same cyclohexene molecule.\n1. For the reported generative efficiency (% of valid, novel, and unique molecules), was canonicalization performed/considered? If not, the reported efficiency might be overestimated. Additionally, the authors should provide some examples of the generated sequences and their corresponding molecules to clarify the canonicalization process.\n2. **Suggestions:** In Section 3.3, the authors should discuss the issue with explicit/implicit hydrogen atoms and canonical representation. Try to clarify the following points:\n - Does STGG+ generate molecules with explicit hydrogen atoms only or can it also generate molecules with implicit hydrogen atoms? The spanning-tree representation should allow both.\n - How is the canonical representation handled in the training and generation processes?\n - **What is the definition of valid, novel, and unique molecules?** Are molecules considered unique if they have different sequences of tokens but represent the same molecule?\n\n**The reported property MAE of the conditional generation needs more explanation.**\n1. For the properties of the generated molecules, were they calculated with the property predictor of the STGG+ architecture or with an external property calculator such as RDKit? The external predictor should provide the ground truth for the property values and should thus be used for the evaluation.\n2. 
The `MinMAE` reported in Table 2 needs more clarification: is it the minimum absolute error across the 2K generated molecules? What does \"minimum mean\" refer to?\n - If the minimum is reported, what about the mean of the absolute errors?\n - For such a large number of generated molecules, the mean absolute error is a better metric to evaluate the performance of the conditional generation. This is related to the application of the model (line 57) - validating the properties of the generated molecules in real life can be costly. Conditional generation aims to generate a small set of potential candidates. Minimum error implies that one has to test all 2K molecules (too large) to find the best candidate, while average error better represents the overall conditional generation performance.\n - The minimum absolute error might be more convincing if reported on a small population of generated molecules such as 10x molecules with multiple batches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The authors improved the structure of the original Transformer, but the results do not seem to reflect the improvements, such as whether the generation time and the quality of the generated molecules have been improved.\n2. For the self-criticism mechanism, the authors should discuss the trade-off between performance gains and computational cost. 
Including a comparison of computational time for different values of k would clarify the model's efficiency.\n3. The authors should optimize the structure of the result table, as it is not clear what is being compared, e.g., modify the table header.\n4. For property-conditional generation, the authors only compare the MinMAE and should add some property distributions to demonstrate that the generated molecules approximate the given conditions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper used a property-predictor-driven self-criticism mechanism that allows STGG+ to evaluate and select the best out of multiple generated molecules, improving fidelity to target properties." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an enhanced version of Spanning Tree-based Graph Generation (STGG+), tailored for multi-property conditional molecule generation. Based on the STGG, STGG+ includes improvements in the Transformer architecture, a flexible conditioning mechanism for any subset of properties, and a self-criticism mechanism that filters generated molecules based on a property predictor." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The model’s effectiveness relies heavily on the internal property predictor, which may be less reliable for out-of-distribution samples. This dependence could reduce fidelity in less representative scenarios.\n2. Although the model improves conditioning performance, it’s unclear how it balances molecule diversity and property fidelity; diversity is also a crucial metric in molecular generation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please address (to the extent possible) Q1-Q4 in the Weakness section. Specifically for Q1-Q3, please state if the alternate techniques were considered when designing the experiments/writing the paper. If so, please explain why those alternate ideas were not incorporated into the paper. If not, please discuss how these alternative techniques might be relevant to the problem considered in the paper." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The problem considered in this paper (molecule generation) has clear practical importance and given that there exists standard 1D string representation of molecules, using generative AI to create new molecules is definitely a promising avenue worth pursuing.\n\n* The paper has a pretty comprehensive experimental results and they show the efficacy of the proposed system.\n\n* While some of the techniques used in the paper might be `standard' for say language modeling, being able to apply these techniques in a completely new application domain and show improvement is very nice.\n\n* The ability to impose certain properties on molecules being generated and having the model be able to self-criticize seems like really nice capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the problem of generating (hopefully) new representations of molecules. Existing techniques either work with a 1D string representation of molecules or a 2D graph representation. This work considers the former representation and builds on a method called Spanning Tree-based Graph Generation (STGG) that was specifically created to use generative AI models. The present work differs from existing work along the following two high level aspects:\n\n1. Previous work would generate molecules without any restrictions. However, this paper considers the problem of generating molecules that has to satisfy (some) subset of a given set of properties that the generated molecules must satisfy.\n2. 
Unlike existing results, the paper creates models that can _self-criticize_ by allowing the model to predict properties of the molecules it generates and uses that to prune out molecules that do not satisfy the required properties.\n\nThe paper lists conceptual improvements made in this work in the context of generating molecules but since I'm more familiar with Transformers and related literature, I will focus my review on those. Specific to the Transformer model used in this work (as opposed to the STGG work), the paper uses improvements made to the Transformer architecture (e.g. FlashAttention) over the last three years.\n\nThe paper presents a pretty comprehensive (at least to me) set of experiments and shows that the proposed new system works better than existing systems on benchmarks that are used in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Below are some questions that I was not able to answer based on what is there in the paper. (Again, as mentioned earlier, I focused on the Transformer aspects of the paper so all of the questions below are on that axis.) Also some of these questions might be asking for intuitive answers and not necessarily something that could potentially be answered with experiments-- but having these answers might be useful for the reader to understand some of the design choices made by the paper:\n\n* (Q1) The paper uses causal Transformer models-- is there any reason a non-causal Transformer model cannot be used? E.g. non-causal Transformer models have been used on applications other than language modeling (e.g. ViT in image processing)-- and pre-Transformer language models like BERT were non-causal. In theory non-causal models are more expressive than causal models (just because a causal model is trivially a non-causal model as well).\n\n* (Q2) The idea of having the model self-criticize the molecules it generates reminded me a lot of GANs.
Have GANs been tried to generate molecules? If so, have they worked or is there some intuition for why they did not work?\n\n* (Q3) Over the last ~3 years there has been a fair amount of work on `Transformer-free' models for language generation. One such line of work is based on state space models (Mamba [see https://arxiv.org/abs/2405.21060 and https://arxiv.org/abs/2312.00752] being a model that has garnered a lot of attention in the language modeling literature). Some of these ideas have been used in genomic sequencing (e.g. Hyena DNA-- https://arxiv.org/abs/2306.15794). Were these recent models considered in this work?\n\n* (Q4) In lines 227-228, it is mentioned that the _number_ of masked properties $t$ was picked uniformly at random between $0$ and $T$. However, which of the $\\binom{T}{t}$ subset of properties were actually chosen to mask?\n\nBelow are some minor comments (that are purely related to presentation):\n\n* Lines 354-355: Instead of saying \"similar\" performance-- please quantify, i.e. within what percentage of existing work?\n\n* Table 1: Is _Distance_ in the table column name the same as FCD?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. I didn't completely follow the details of best-of-k filtering described in Section 3.6. It would be good if the authors could explain how exactly this is done. \n2. 
Based on observations: In Table 1, STGG+ shows an improved synth MAE, but other metrics appear comparable. In Table 2, STGG+ outperforms by design. In Table 3, the settings aren't directly comparable. Does this imply that the primary novelty lies in achieving property conditioning? If so, is the novelty somewhat limited, given that the extension feels intuitive?\n3. Considering the non-comparable nature of Table 3, could the experiments be repeated under consistent settings for a fair comparison?\n4. It’s currently unclear which modifications specifically drive the improvements in STGG+. Conducting ablation studies based on the points listed on page 2 (points 1-5) would provide more clarity.\n5. For conditional generation, online methods like GFlowNets could serve as an additional baseline. Would it be possible to include this baseline in Table 2?\n\n**Paper suggestions:**\n\na) I think including Figure 3 (supplementary) instead of Figure 1 in the main paper would better showcase the contributions.\n\nb) For section 3.6, I found the figure a little confusing. It may help to have a better figure to explain the same." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well written.\n2. The authors suggest modifications that make sense and feel intuitive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the issue of generating valid molecules by extending the Spanning Tree-based Graph Generation (STGG) to support multi-property conditional generation. The proposed STGG+ integrates a Transformer architecture, property masking, and an auxiliary loss for model self-evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
I find the changes to be intuitive but somewhat obvious, leading me to believe that the paper lacks significant novelty.\n2. Please see questions." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Generating molecules conditional on desired properties using Spanning Trees with a model that can self criticize its own molecules" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024anypropertyconditional,\ntitle={Any-Property-Conditional Molecule Generation with Self-Criticism using Spanning Trees},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=26kgSlMmhA},\nnote={under review}\n}" }, "abstract": { "value": "Generating novel molecules is challenging, with most representations of molecules leading to generative models producing many invalid molecules. Spanning Tree-based Graph Generation (STGG) is a promising approach to ensure the generation of valid molecules, outperforming state-of-the-art generative models for unconditional generation. In the real world, we want to be able to generate molecules conditional on one or multiple desired properties rather than unconditionally. Thus, in this work, we extend STGG to multi-property conditional generation. Our approach, STGG+, incorporates a modern Transformer architecture, random masking of properties during training (enabling conditioning on any subset of properties and classifier-free guidance), an auxiliary property-prediction loss (allowing the model to self-criticize molecules and select the best ones), and other improvements. We show that STGG+ achieves state-of-the-art performance on in-distribution and out-of-distribution conditional generation, as well as reward maximization." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "molecules; transformers; masking; molecule generation; property conditional generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4f2a4371eecd2e40578a79e388935d5613843a6f.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/de6aae8daabbf656de2966aa6e66e8689d782074.zip" }, "title": { "value": "Any-Property-Conditional Molecule Generation with Self-Criticism using Spanning Trees" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
26oSbRRpEY
StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
main
Active
Text-To-Video; Diffusion Models; Long Video; Autoregressive
generative models
5;5;5;6
5;3;4;5
3;2;3;3
3;2;2;2
2;2;2;2
5.25
4.25
2.75
2.25
2
0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. \"Table 6\" seems incorrectly labeled and should be \"Table 1.\" As far as I can see, there is only one table in the entire paper. \n2. In Table. 6, the right side of the table extends beyond the text area, making the layout appear cluttered." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The abstract and introduction repeatedly emphasize that the Appearance Preservation Module (APM) ensures the natural continuity of object characteristics in generated videos. However, the paper does not provide metrics similar to CLIP-I to quantify the preservation of subject consistency. \n2. When considering long video generation, users typically seek dynamic visuals rather than frames with the same semantic content. While methods like SEINE or DynamiCrafter may appear to have lower visual quality than this work, the APM module proposed in this paper, while enhancing content continuity, also restricts the range of generated video content. In my opinion, this is a trade-off with drawbacks. The authors could consider adding experiments to demonstrate that even with CAM and APM, the model can still generate content with semantic variation. \n3. 
This paper employs CAM to ensure short-term consistency in the video, a method that significantly increases the parameter count. In contrast, SEINE’s method, as mentioned, only slightly increases parameters. The paper lacks a clear ablation study to compare the two methods and determine which is superior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents StreamingT2V, a method for generating high-quality, extended videos from text prompts, specifically addressing the challenge of ensuring smooth transitions in long-form content. Existing methods often struggle with abrupt cuts in longer videos. In contrast, StreamingT2V introduces three core components: (i) the Conditional Attention Module (CAM), a short-term memory mechanism that aligns each generated segment with its predecessor for seamless transitions; (ii) the Appearance Preservation Module (APM), a long-term memory unit that retains key features from the initial frames to maintain scene consistency; and (iii) a randomized blending technique that enables a video enhancer to be applied autoregressively, ensuring coherence over extended durations. Experiments demonstrate that StreamingT2V achieves high levels of motion and continuity, outperforming other models that tend to stagnate during prolonged autoregressive use." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The abstract and introduction repeatedly emphasize that the Appearance Preservation Module (APM) ensures the natural continuity of object characteristics in generated videos. However, the paper does not provide metrics similar to CLIP-I to quantify the preservation of subject consistency. \n2. When considering long video generation, users typically seek dynamic visuals rather than frames with the same semantic content. 
While methods like SEINE or DynamiCrafter may appear to have lower visual quality than this work, the APM module proposed in this paper, while enhancing content continuity, also restricts the range of generated video content. In my opinion, this is a trade-off with drawbacks. The authors could consider adding experiments to demonstrate that even with CAM and APM, the model can still generate content with semantic variation. \n3. This paper employs CAM to ensure short-term consistency in the video, a method that significantly increases the parameter count. In contrast, SEINE’s method, as mentioned, only slightly increases parameters. The paper lacks a clear ablation study to compare the two methods and determine which is superior." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "About video length stress test. In auto-regressive video generation, there exists an error accumulation problem, i.e. the generated frames have a different distribution from the training data distribution, which makes the subsequently generated frames degrade further. How does StreamingT2V address the error accumulation problem? What is the upper-bound generation length of this model?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The generated videos are sufficiently long, natural and with relatively large motion. The quantitative performance outperforms existing methods.\n2. The paper identifies a noise mismatch problem when enhancing long videos using chunk-wise SDEdit, and proposes a randomized blending method to address this problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a streamable text-to-video method which can generate up to 2 minutes or longer videos with seamless transitions. Three innovative methods are proposed to ensure the long video consistency and overall quality. Firstly, conditional attention module injects previous-chunk information into the pre-trained video diffusion model to ensure smooth transitions between chunks. Secondly, the CLIP feature of the first frame is injected to the video diffusion model to ensure a coherent scene and object appearance within the whole video. Thirdly, a randomized blending approach is introduced to address inconsistent transitions caused by noise mismatch within the video enhancer's denoising process. A novel motion aware warp error metric is proposed to assess both motion amount and consistency. Experiments are conducted to evaluate the proposed method qualitatively and quantitatively." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty is limited. Firstly, generating subsequent frames with the condition of previous frame chunks has already been explored [1]. Secondly, the appearance preservation module (APM) in this paper is much like the anchored conditioning method in ART-V [2]. \n2. 
The paper states that the training data is collected from publicly available sources, but the corresponding URLs or papers are not provided or mentioned. Please provide the URLs or citations for these sources.\n3. Comparisons on general video quality benchmarks are missing, such as FVD and FID on MSR-VTT or UCF datasets.\n4. The paper is not well written. The formatting issues make the paper unfriendly to read, e.g. it is better to use brackets when citing papers; Table 6 exceeds the width limit.\n\n[1] Gao, Kaifeng, et al. \"ViD-GPT: Introducing GPT-style Autoregressive Generation in Video Diffusion Models.\" arXiv preprint arXiv:2406.10981 (2024).\n\n[2] Weng, Wenming, et al. \"ART-V: Auto-Regressive Text-to-Video Generation with Diffusion Models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Anchor frame influence: During the training and sampling stages, anchor frames are randomly sampled. How significant is the impact of choosing different anchor frames on the final video generation? Why can't all frames from the first chunk be used as anchor frames to guide generation?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.The proposed autoregressive approach effectively leverages both short-term and long-term dependencies, facilitating the seamless creation of extended video content. This method adeptly addresses the challenges associated with producing longer video sequences by ensuring smooth transitions and continuity.\n\n2.Through the integration of the Conditional Attention Module (CAM) and the Appearance Preservation Module (APM), the model ensures that generated videos exhibit natural continuity and maintain consistent scene and object characteristics across their entire length." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel text-to-video diffusion model aimed at generating long videos. Addressing the challenge of abrupt transitions in extended videos, the model incorporates three key mechanisms: a Conditional Attention Module (CAM) for smooth short-term transitions, an Appearance Preservation Module (APM) to maintain scene consistency, and a randomized blending technique for refining generated videos." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. CAM design : In the W.A.L.T[1] method, a very straightforward auto-regressive generation approach is provided for frame prediction tasks, where past generated frames are used as conditions to guide the generation of subsequent video content through the standard classifier-free guidance method. Can the authors explain why this approach was not adopted in the design of the CAM module, but rather a ControlNet method was used? Additionally, can the authors provide a comparison of the FVD metrics for the CAM and WALT frame prediction methods on the UCF-101 or K600 datasets? \n\n2. 
Training details are missing: Can the authors provide details related to the training data?\n\n3. Evaluation is a bit weak: Can the authors provide a comparison of FVD with other methods on the UCF-101 or K600 datasets?\n\n---------\n[1] Gupta, Agrim, Lijun Yu, Kihyuk Sohn, Xiuye Gu, Meera Hahn, Li Fei-Fei, Irfan Essa, Lu Jiang, and José Lezama. \"Photorealistic video generation with diffusion models.\" arXiv preprint arXiv:2312.06662 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-An autoregressive long video generation framework is designed, which is novel, and shows stable video quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a long video generation framework from a single text prompt. The main contribution is the proposed conditional attention module (CAM) and appearance preservation module (APM) for temporally consistent long video generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-I'm wondering about the necessity of generating a very long clip with only one short caption. 
In example videos provided in the supplementary material, it seems the content of the video is limited to a very narrow domain with little variation, due to the design of APM. It is not suitable for very long video generation.\n\n-In line 299, \"...fuse x with output of the first temporal transformer block of CAM.\" Just curious about the fusion here, as in Figure 3, x seems to be added with the noised input after one encoding layer. Can this encoding layer be described as the first temporal transformer block of CAM? Generally, the first block of CAM should have the skip connections to the decoding part.\n\n-In line 417, the mean warp error W(V) is the average squared L2 pixel distance from a frame to its warped subsequent frame. So is it computed by calculating the warp error between the anchor frame and all other frames? Or between two consecutive frames? What's the definition of warp error?\n\n-The quantitative comparison only includes the long video generation quality evaluation, lacking the common metric evaluation, such as FVD, LPIPS. It also lacks evaluation on common datasets like MSRVTT and UCF101." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024streamingtv,\ntitle={StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=26oSbRRpEY},\nnote={under review}\n}" }, "abstract": { "value": "Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, simplifying the process of producing diverse and individual content.\n Current methods excel in generating short videos (up to 16s), but produce hard-cuts when naively extended to long video synthesis.\n To overcome these limitations, we present $\\textit{StreamingT2V}$, an autoregressive method that generates long videos of \\textbf{up to 2 minutes or longer} with seamless transitions.\n The key components are:\n (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the preceding chunk via an attentional mechanism, leading to consistent chunk transitions, \n (ii) a long-term memory block called appearance preservation module (APM), which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that allows for the autoregressive application of a video enhancer on videos of indefinite length, ensuring consistency across chunks. \n Experiments show that StreamingT2V produces high motion amount, while competing methods suffer from video stagnation when applied naively in an autoregressive fashion.\n Thus, we propose with StreamingT2V a high-quality seamless text-to-long video generator, surpassing competitors in both consistency and motion." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text-To-Video; Diffusion Models; Long Video; Autoregressive" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d9731227d154b2493aff20dd8a0c1cc03673e1c7.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/7442f7987b5c4890d9900a3c9b78eec7419752b6.zip" }, "title": { "value": "StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
27Qk18IZum
PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching
main
Active
Contrastive Representation Learning;Neural Subgraph Matching;Virtual Screening;Pharmacophore Modeling
learning on graphs and other geometries & topologies
3;3;5;5
3;4;3;4
2;2;3;3
1;2;2;3
2;2;3;3
4
3.5
2.5
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See comments in the Weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ This paper is well-written and nicely organized. \n+ The proposed framework is novel and is nicely motivated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel contrastive learning approach based on neural subgraph matching, i.e., PharmacoMatch, and the authors claim that it reinterprets pharmacophore screening as an approximate subgraph matching problem and enables efficient querying of conformational databases by encoding query-target relationships in the embedding space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I suggested the authors consider conducting more experiments over other datasets instead of only DUD-E.\n- I wonder if the proposed contrastive learning approach can be applied to other domain datasets?\n- This paper does not provide any unique and novel insights about why the proposed architecture that works well for the current dataset." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How sensitive is the model to the choice of tolerance radius (r_T = 1.5Å)? Could you provide an analysis?\n2. How did you select the 10 DUD-E targets? Could you demonstrate the method's robustness on more targets?\n3. What is the impact of different augmentation strategies? For example, how much does performance degrade if you remove node deletion or displacement?\n4. Could you compare PharmacoMatch with recent methods like DrugClip and PharmacoNet on the same benchmarks?\n5. The speed advantage is clear, but can the author better justify the conceptual novelty? (Contrastive learning is not new, GNN for molecules is well-established and order embeddings have been used before)\n6. How much 3D geometric precision is actually lost during embedding? Could you quantify the tradeoff between speed and accuracy?\nI wonder if there are specific types of 3D arrangements where the method may fail." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The acceleration is impressive, with a thorough evaluation of embeddings and runtime analysis. The model provides a practical impact for screening billion-compound libraries. 
Besides, the reformulation of pharmacophore screening as neural subgraph matching is creative, combining a self-supervised training approach with augmentation strategies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce PharmacoMatch, a deep learning approach that reframes 3D pharmacophore screening as a neural subgraph matching problem, using a graph neural network trained through contrastive learning to encode pharmacophore structures into an embedding space. Their method achieves comparable screening performance to traditional alignment-based approaches while being approximately two orders of magnitude faster in matching speed, making it particularly valuable for screening extremely large molecular databases. The model is trained in a self-supervised manner on over 1.2 million unlabeled molecules from ChEMBL, learns to encode both structural and spatial relationships of pharmacophoric points, and demonstrates robust performance across multiple DUD-E benchmark datasets in a zero-shot setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite mentioning recent works like DrugClip and PharmacoNet in the related work section, there are no direct comparisons with these methods. It seems the paper only compares against the traditional CDPKit alignment algorithm, missing comparisons with other more current learning approaches. Besides, there is no comparison or detailed discussion with simpler baseline models (e.g., basic GNN architectures without contrastive learning).\n\nThe discussion on the model details is not sufficient. It lacks systematic ablation studies of model architecture components (e.g., the impact of different GNN layers, the importance of skip connections).
Missing analysis of the impact of different augmentation strategies on model performance and investigation of how embedding dimension and model size affect performance, as well as contrastive loss function impacts." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "## **Methods:**\n**Q1. Clarification of Pharmacophore Representation:** In the section detailing pharmacophore representation, it is mentioned that $\\mathcal{L}$ comprises only pharmacophoric descriptors. However, distances are subsequently incorporated into the representation. The notation and description should be revised for consistency and clarity.\n\n**Q2. Model Input Specification:** Within the model input section, node labels are currently denoted as $V_p$, which is a set of node. It may be more appropriate to represent these as node element to enhance the clarity.\n\n**Q3. Negative Data Augmentation Strategy:**\nThe current approach limits displacement directions to a single direction to avoid cancellation effects. Allowing for all displacement directions except for the case of cancellation can highly increase the diversity of negative training data, might lead to better model performance. Are there specific reasons why the authors chose to use only a single direction for negative displacements?\n\n## **Results:**\n\n**Q4. 
Evaluation on Recent Datasets:** To better assess the method’s applicability to real-world scenarios, evaluation on more recent datasets like LIT-PCBA[1] is recommended. Additionally, utilizing metrics beyond AUROC, such as enrichment factors (EF) (similar to BEDROC for early recognition of hit candidates), could provide a better understanding of the model’s performance in virtual screening tasks.\n\n**Q5. Benchmarking with other methods:** Since one of the main contributions of this paper is \"*fast virtual screening in the embedding space and evaluate the performance of our method through experiments on virtual screening benchmark datasets*\", the authors should benchmark their virtual screening performance against other models. It seems that there are no significant differences in the objectives or methods of the compared models relative to the current work. Are there reasons why the authors did not benchmark their model against previous works such as DrugClip or PharmacoNet?\n\n**Q5-1. Ambiguous criteria for showing virtual screening performance:** The authors compared their performance with CDPKit's alignment algorithm. It only makes sense that the proposed method does well on virtual screening if the CDPKit alignment algorithm's performance is already good enough. The authors should provide the performance of the CDPKit alignment algorithm for virtual screening explicitly in their manuscript.\n\n**Q6: Limitation in Protein-Specific Virtual Screening Capability:**\nThe methodology seems similar to ligand-based approaches, where ligand structures are pre-generated and graph matching determines activity. This may limit the model's applicability, since considering only the ligand structure of a protein-ligand complex cannot fully account for the protein pocket. For example, if a protein has a large binding site with which various ligands with different pharmacophoric sites can interact, considering only a single ligand might introduce bias during virtual screening on the protein target.
Why did the authors adopt a ligand-based approach instead of a protein-based one, such as using the protein binding site's pharmacophore instead of its binding ligand?\n\n## **References:**\n[1] Tran-Nguyen, Viet-Khoa, Célien Jacquemard, and Didier Rognan. \"LIT-PCBA: an unbiased data set for machine learning and virtual screening.\" Journal of chemical information and modeling 60.9 (2020): 4263-4273." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**S1. Alignment of Contribution and Results:**\n\tThe study presents a coherent alignment between its contributions and the resulting outcomes. The objectives set forth by the authors are consistently addressed throughout the work.\n\n**S2. Effective Representation Learning via contrastive learning:**\n\tThe approach to representation learning through contrastive learning with data augmentation appears to function as intended. The learned representations for pharmacophores are well-clustered in embedding space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a contrastive learning approach that emphasizes augmentation strategies by incorporating the concept of pharmacophores into neural subgraph matching. Furthermore, they apply this concept to ligand-based virtual screening, demonstrating that the results are well-aligned with CDPKit’s alignment algorithm. Although the learned representations effectively capture the proposed pharmacophore concepts, the performance in virtual screening—a primary objective of the model—does not appear sufficiently strong. This is primarily due to the lack of benchmarking against other models and of additional evaluation metrics." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1.
Limited Methodological Novelty:**\n\tThe proposed methods do not introduce significant novel approaches. As mentioned in the manuscript, the concepts of Neural Subgraph Matching or neural network architectures are already proposed by other previous works. While the formulation applied to pharmacophores—particularly the augmentation strategies for contrastive learning—is noteworthy, it may not sufficiently advance the general methodologies handled in the ICLR community. If there's any other novelty compared to previous works, the points should be clearly explained in the manuscript. Authors may propose novel input processing schemes or model architectures better suited to the pharmacophore graph, or improve neural subgraph matching techniques.\n\n**W2. Insufficient Experimental Results:**\n\n- **W2.1 Lack of Comprehensive Benchmarks:**\n\tFor a study proposing algorithms intended for virtual screening, it is essential to benchmark against established methods such as DrugClip or PharmacoNet to demonstrate comparative advantages. The absence of such comparisons, without a compelling justification, weakens the evaluation of the proposed method’s efficacy. Additionally, reliance on the outdated DUD-E benchmark and the arbitrary selection of only 10 targets out of 102 weakens the robustness of the experimental validation. (see Q4, 5)\n\t\n- **W2.2 Ambiguous goal of benchmark experiment:**\n\tThe goal of \"*to achieve comparable values between our model and the alignment algorithm*\" would be meaningful only if the alignment method inherently guarantees superior results compared to the previous methods, which is not adequately demonstrated in the current manuscript. (see Q5-1)\n\t\n- **W2.3 Suitability for Virtual Screening:**\n\tThe method appears to focus on ligand interactions with only parts of the protein pharmacophore, potentially neglecting the comprehensive information of the global protein binding site. 
This partial consideration might introduce bias, and it remains unclear whether the method can outperform existing techniques that utilize complete protein-ligand information. Empirical evidence showing superior performance in this regard would strengthen the study. (see Q6)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) From Methodology point of view, is there any major novelty in the current paper?\n2) Other than the efficiency in terms of the runtime, any other clear advantage of the current model?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It is interesting to reinterpret pharmacophore screening problem as an approximate subgraph matching problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a contrastive learning approach for pharmacophore screening, based on subgraph matching. The key idea is to employ approximate subgraph matching for querying conformational database, a main step in pharmacophore screening. The subgraph matching is done through a contrastive learning approach by encoding query-target relationships in the embedding space. 
Their model has been validated on benchmark datasets including DUD-E." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "From the methodology point of view, the paper lacks novelty. The contrastive learning framework and augmentation module are rather standard approaches in GNN models. The contribution is not significant. Further, the performance is not very impressive as shown in Table 1. Even though the authors have emphasized that \"our goal is to achieve comparable values between our model and the alignment algorithm\", the only advantage of the current model seems to be the runtime efficiency." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "PharmacoMatch, a new contrastive learning approach, accelerates pharmacophore screening by encoding query-target relationships in the embedding space." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024pharmacomatch,\ntitle={PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=27Qk18IZum},\nnote={under review}\n}" }, "abstract": { "value": "The increasing size of screening libraries poses a significant challenge for the development of virtual screening methods for drug discovery, necessitating a re-evaluation of traditional approaches in the era of big data.\nAlthough 3D pharmacophore screening remains a prevalent technique, its application to very large datasets is limited by the computational cost associated with matching query pharmacophores to database ligands.\nIn this study, we introduce PharmacoMatch, a novel contrastive learning approach based on neural subgraph matching.
Our method reinterprets pharmacophore screening as an approximate subgraph matching problem and enables efficient querying of conformational databases by encoding query-target relationships in the embedding space.\nWe conduct comprehensive evaluations of the learned representations and benchmark our method on virtual screening datasets in a zero-shot setting. Our findings demonstrate significantly shorter runtimes for pharmacophore matching, offering a promising speed-up for screening very large datasets." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Contrastive Representation Learning", "Neural Subgraph Matching", "Virtual Screening", "Pharmacophore Modeling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/38dac929af05471ea9e07a634cd86ccbe1cac832.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PharmacoMatch: Efficient 3D Pharmacophore Screening via Neural Subgraph Matching" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
27SSnLl85x
Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
main
Active
Gated Deep Linear Networks;Feature Learning Dynamics;Structured Mixed Selectivity;ReLU Networks
learning theory
3;5;6;8
4;3;3;3
2;2;3;4
2;2;2;3
2;3;3;3
5.5
3.25
2.75
2.25
2.75
-0.800641
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Can you comment a bit more on Assumption 2.1 - I know that the mutally diagonalizable structure is quite restrictive, in particular, can you comment on why the tasks you chose follow these assumption(s). Which tasks can you not study, give these assumptions? \n\n2) Can you give a bit more experimental results, especially when training your networks. Maybe a hyperparameter table in the appendix is nice. Can you give more details how you derived hyperparameters when training networks, you for example mention some in Figure 2. I missed if these hps are analytically derived. \n\n3) I find the discussion wrt to compositionality and modularity, you mention \"strictly modular\" in the abstract, a bit unclear. Can you clarify, or even define in the work, what you mean with this - and then contrast this to your findings. I guess the same applies for an up-front (maybe even in the intro) definition of what you mean with this. I would find it easier to follow the paper, having these things more clearly explained." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Disclaimer: I did not study the proofs of the paper, as I was very late reviewing this paper. 
I will try to find more time in the coming weeks. \n\nI find the paper clearly written and well presented; the authors make an effort to present the dense results in a clear manner. The authors, to the best of my understanding, extend the theory around GDLNs and allow for a more in-depth study of the training dynamics of ReLU networks. I find the exposition of quite toyish tasks enlightening, and it makes the paper easier to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work provides a step towards a theory of feature learning in finite ReLU neural networks by building on an equivalence between ReLU networks and Gated Deep Linear Networks (GDLNs). The authors introduce \"Rectified Linear Networks\" (ReLNs), which are GDLNs specifically designed to imitate ReLU networks, allowing them to derive exact learning dynamics.\nThe key contributions are:\n\nThe introduction of ReLNs as a theoretical framework to analyze ReLU networks, providing tractable dynamics for finite-width networks during feature learning. A demonstration that ReLU networks exhibit an implicit bias towards structured mixed selectivity - where neural units are active across multiple contexts but in structured, reusable ways. \nEvidence that this bias towards mixed selectivity and node reuse is amplified when: 1) more contexts are added to the task\nand 2) additional hidden layers are included in the network\n\nThe authors support their theoretical findings with analytical solutions and empirical demonstrations on several tasks, including an adapted XOR problem and multi-context classification tasks.
They show that while ReLU networks aren't biased towards strictly modular or disentangled representations, they do learn structured features that can be somewhat reused across contexts.\nThe work takes a step towards understanding how and why structured representations emerge in ReLU networks during learning, bridging a gap in theoretical understanding between linear networks and modern deep learning architectures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I cannot fairly judge the novelty of the paper, especially its relation to Saxe et al., 2022. For me, it would have been helpful to highlight the novelty a bit more." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the main findings (e.g., mixed selectivity) be directly observed in the ReLU network without using ReLNs?\n2. Does the method extend efficiently to larger datasets (e.g., ReLU networks trained on MNIST), and can a ReLN adequately explain such networks?\n3. Could the authors clarify their definition of \"feature learning\" and what they mean by a pathway \"doing feature learning\" (line 302)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.
The introduction of ReLN as a GDLN variant to study ReLU dynamics is innovative and offers a new angle on understanding feature learning in finite ReLU networks.\n2. The paper provides theoretical insights into inductive biases in ReLU networks, particularly regarding structured mixed selectivity and node reuse." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Rectified Linear Networks (ReLNs), a subset of Gated Deep Linear Networks (GDLNs) designed to capture the training dynamics of finite-dimensional ReLU networks. By drawing an equivalence between ReLU and ReLN, the authors aim to provide theoretical insights into feature learning and structured mixed selectivity in ReLU networks, especially in tasks involving multi-contextual inputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It remains unclear why ReLU’s training dynamics and mixed selectivity properties cannot be derived directly from ReLU networks, as they’re already nonlinear.\n2. The approach is demonstrated on relatively simple tasks, raising concerns about scalability. For instance, applying ReLNs to realistic datasets like MNIST remains unaddressed.\n3. Terms like \"feature learning\" and \"pathway doing feature learning\" (line 302) lack precise definitions. More clarity is needed to distinguish “feature learning” in this theoretical context." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Questions**\n1. [L159] Would be helpful to comments on cases for each of the mentioned assumptions in `Assumption 2.1` since these seem to be not very general assumptions and can be easily violated.\n2. [L183] What is the network architecture for the XoR dataset? Is it a 2-layer network? How would the gating structure look like? It would be helpful to explicitly write down the equation or numerical examples instead of vague description of “2 pathways” and “4 pathways”.\n3. [L257] How do these pathways get identified? Are they identified manually based on the prior knowledge about the task? Since identifying gates would be a major bottleneck for the applications of ReLN, it would be helpful to provide more information on how to identify the pathways.\n\n**Suggestions**\n1. In the Contribution section, clearly state the models and dataset in which the theoretical and empirical results are obtained (single-layer network for theoretical results, and simple synthetic dataset with hierarchy and context (maybe would be helpful to give this dataset a name for easy reference?) to provide a clear expectation for the readers.\n2. Spending more space in the main text to explain how to identify the gating structure and pathways from ReLU to ReLN." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**Originality**\n1. The paper presents a novel framework Rectified Linear Network to analytically investigate the dynamic of finite-width single-hidden layer neural networks via an extension of Gated Deep Linear Network.\n\n**Quality**\n1. 
The paper provides a theoretical proof of the equivalence between ReLNs and ReLU for the case of a finite-width single-hidden-layer network and verified that the predicted loss matches the empirical loss on a synthetic dataset.\n\n**Clarity**\n1. The paper clearly presents the relevant background work, including the formulation of Gated Deep Linear Networks, the decomposition into eigenmodes, and the derivation of learning dynamics based on these eigenmodes.\n2. The theoretical proofs are presented clearly with both informal and formal versions, along with a proof sketch in the main paper and a detailed proof in the appendix, facilitating the reader's understanding.\n3. The empirical results (including the dataset and figures) are clearly presented with clear figures and captions, along with descriptions in the main text.\n\n**Significance**\n1. The paper showed that there's an equivalent ReLN for a single-hidden-layer ReLU network, and it's possible to identify the gating structure of this equivalent ReLN via a clustering algorithm.\n2. By investigating the gating structure of the equivalent ReLN network, the paper shows that there's an implicit bias for structured mixed selectivity (e.g., one gating pathway can encode multiple contexts)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduced \"Rectified Linear Networks\" (ReLNs) as a subset of Gated Deep Linear Networks (GDLNs), and showed that for a single-hidden-layer ReLU network, it's possible to find a ReLN that has the same loss and outputs at all time-steps (provided that the assumptions 2.1 at L159 are satisfied) on a simple synthetic dataset. Using the ReLN as the equivalent of the ReLU network, the authors provided an analytical solution for the training dynamics of the finite single-hidden-layer network on the synthetic dataset, and also demonstrated that the predicted loss of the ReLN matches the empirical loss of ReLU networks.
In this specific synthetic dataset (which includes hierarchical structure (animals, plants, fish, birds) and multiple contexts), the paper shows that the equivalent ReLN network employs an implicit bias for structured mixed selectivity (e.g., one gating pathway can encode multiple contexts)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper proposed a novel framework, ReLN, to study learning dynamics and feature learning in finite-width neural networks, a significant topic in the machine learning community, the paper has several limitations in both theoretical and empirical results, which I clarify below:\n\n1. Theoretical results: The main theoretical results of the paper require very strong assumptions (L159, *Assumption 2.1*) on both (1) the input dataset (*The dataset correlation matrices are mutually diagonalizable*) and (2) the model training trajectory (*The neural network weights align to the singular vectors of the dataset correlation matrices.*). These are very strong assumptions, and the authors did not offer any analysis of specific scenarios in which these assumptions hold or not. As we see in section 6, assumption (2) is violated in the case of a 2-hidden-layer network. As the paper only uses a very simple synthetic dataset (one-hot vector input and sparse binary-vector output), it's difficult to tell whether assumption (1) can hold for realistic datasets with complicated distributions, even in the single-hidden-layer case.\n\n2. Empirical evaluations: The empirical experiments that support the claims and theoretical results only include a single-hidden-layer network on a simple synthetic dataset (the paper did include a 2-hidden-layer network, but this experiment did not align with the theoretical results, due to violation of the theoretical assumptions).
While it's reasonable to use this simple synthetic dataset as a proof of concept to verify the theoretical results and illustrate the mixed-selectivity phenomenon, it would be helpful and more convincing if the authors could demonstrate that the proposed approach works on realistic image datasets (MNIST, CIFAR, etc.) with a wider variety of model architectures (multilayer perceptrons, convolutional neural nets, etc.). The inclusion of real datasets and a variety of architectures is even more important since the theoretical results require such strong assumptions.\n\n3. Identification of gating structure: The gating structure identification is a central bottleneck of this framework, since the gating *g* is treated as an input and needs to be identified before training. Identifying a fixed gating structure, or a varying gating structure through training, would be one of the major difficulties of the framework. The paper did propose a clustering algorithm to identify the gating structure of the ReLN from the representations of ReLU models. However, since the paper only operates with very small synthetic datasets and models, it's unclear whether this clustering algorithm can scale to realistic datasets and larger models. Therefore, this is another reason that it is critical to evaluate the proposed framework on more realistic datasets, instead of only on the simple synthetic dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Critical questions (order: most important to least important):**\n\n1. Your analysis looks very particular to the structure of your synthetic task. Do you think it will be feasible to find unique or representative GDLNs for more complex datasets and architectures?\n\n a. Are the assumptions more generally broken, even for 2-layer ReLU nets beyond this toy task of hierarchical structure without noise? Atanasov et al. (2021) show that non-whitened data can weaken the silent alignment effect.\n\n b. ‘Once the common pathway has finished learning there is no correlation between the context features and labels’. But in practice, likely learning is not perfectly separated. Which impact does this have on the generalizability of your results?\n\n2. You claim that the learning dynamics of the ReLN exactly describe those of the ReLU network. Why are the curves in Figure 4 then not exactly matching?\n3. How do your results depend on the number of neurons? You never explicitly mention this.\n4. Your proposed clustering algorithm in Appendix B evaluates the model at different stages of training. Is this not dangerous in cases where the learned function evolves in a more complex way than in your toy task and systematically changes? Then the algorithm tries to cluster outputs from intermediate functions that are rather unrelated to the final learned function. I would like to see whether clear gating mechanisms can be identified when applied to real data.\n\n**Questions out of curiosity:**\n\n1. In Figures 4 and 5, why is not the largest SV learned first?\n2. Where does the bias towards node reuse come from? Can you see this in the learning dynamics equations?\n3. Can your analysis be extended to SGD? 
Would you expect fundamentally different dynamics?\n4. Can this theory also inform how to maybe slow down learning to learn disentangled representations?\n\n**References:**\n\nAtanasov, Alexander, Blake Bordelon, and Cengiz Pehlevan. \"Neural networks as kernel learners: The silent alignment effect.\" *arXiv:2111.00034* (2021)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The approach of identifying the linear submodules learned by ReLU networks and using this for tractability of the analysis is interesting. At least on toy datasets, natural modules to learn the structure in the data are exactly recovered by ReLU networks together with the learning speed. This provides mechanistic understanding how the bias toward learning speed explains why 2-layer ReLU networks learn common and context-specific but entangled representations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper shows that, under strong alignment assumptions of the data and network weights, finite feature learning ReLU networks learn modules that can be represented by Gated Deep Linear Networks (GDLNs), for which the exact learning dynamics are analytically tractable. For a synthetic task with hierarchical structure, the training dynamics of 2-layer ReLU nets are shown to exactly match those of a GDLN constructed for the task. Through this equivalence, it is shown that ReLU networks learn mixed-selective representations due to a bias for optimal learning speed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I cannot strongly recommend acceptance as some key questions remain unaddressed. 
While this is a nice and tractable toy model for the learning dynamics of 2-layer ReLU networks, generalizability of the findings is questionable. The results only hold for the synthetic and very specific structure considered. The relevance on real data sets or practical architectures remains elusive, as it is not evaluated. The authors show and acknowledge that the silent alignment Assumption 2 already does not hold for 2-hidden-layer ReLU nets. In Figure 6, against the authors’ claim of ‘sharing the same characteristics’, ReLU networks appear to make larger, sharper steps in the loss. What happens in more practical architectures and real datasets, and whether an appropriate and interpretable GDLN can still be identified remains completely unclear. Experiments that show that this approach works on real data would greatly alleviate these concerns. See other critical questions below.\n\n**Typos in lines:** 30, 392, many in 462-463, 473, 476, 477, 503, 527" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We establish an equivalence between finite feature learning ReLU networks and Gated Deep Linear Networks to analyse the full learning dynamics of the ReLU network. We find a bias towards structured mixed selective representations on a set of tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024make,\ntitle={Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning Re{LU} Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=27SSnLl85x},\nnote={under review}\n}" }, "abstract": { "value": "In spite of finite dimension ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature learning in these models remains elusive. 
Currently, insightful theories still rely on assumptions including the linearity of the network computations, unstructured input data and architectural constraints such as infinite width or a single hidden layer. To begin to address this gap we establish an equivalence between ReLU networks and Gated Deep Linear Networks, and use their greater tractability to derive dynamics of learning. We then consider multiple variants of a core task reminiscent of multi-task learning or contextual control which requires both feature learning and nonlinearity. We make explicit that, for these tasks, the ReLU networks possess an inductive bias towards latent representations which are *not* strictly modular or disentangled but are still highly structured and reusable between contexts. This effect is amplified with the addition of more contexts and hidden layers. Thus, we take a step towards a theory of feature learning in finite ReLU networks and shed light on how structured mixed-selective latent representations can emerge due to a bias for node-reuse and learning speed." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Gated Deep Linear Networks", "Feature Learning Dynamics", "Structured Mixed Selectivity", "ReLU Networks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6e4fab4f52cb82b1055707f2fd8a68af9c44ab4f.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a56e7a21e3e062c676bdb198aa320dbd51e45938.zip" }, "title": { "value": "Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
27n0kvWgqT
Parameter-Efficient Fine-Tuning of State Space Models
main
Active
parameter-efficient fine-tuning;state space model;mamba;lora
transfer learning, meta learning, and lifelong learning
5;5;6;6
3;3;3;4
3;2;3;3
3;2;3;3
3;3;3;3
5.5
3.25
2.75
2.75
3
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tDuring the process of selective dimension tuning, the authors select the target channels and states based on magnitude; have any other metrics been tried?\n2.\tWill SDLoRA's training speed be slower compared to vanilla LoRA? How much slower will it be?\n3.\tWhat is the accuracy of SDLoRA on a larger data set, such as ImageNet?\n4.\tCan other advanced parameter-efficient tuning methods like DoRA [1] be adapted to Mamba? Or can the proposed SDLoRA be adapted to Jamba?\n\n[1] DoRA: Weight-Decomposed Low-Rank Adaptation" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tReasonable theoretical analysis and comprehensive experiments.\n2.\tThe introduced SDLoRA is novel and effective.\n3.\tThrough extensive experiments, the findings in this paper are useful and inspiring." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the first study on the performance of PEFT methods applied to SSM-based models. Specifically, prompt-based and parameter-based methods are involved. With theoretical analysis and extensive experiments, LoRA tends to achieve better performance.
To further improve the performance, this paper introduces SDLoRA, which selectively updates certain channels and states on SSM modules while applying LoRA to linear projection matrices." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe speed of SDLoRA is not reported.\n2.\tExperimental results on larger datasets are needed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See the weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) Efficiency and Scalability: By focusing on selective parameter updates, the SDLoRA method enhances computational efficiency, which is crucial for large-scale models. Experimental results show that SDLoRA consistently outperforms traditional LoRA across several benchmarks, proving its efficacy in SSM architectures.\n(2) Adaptability: The proposed SDLoRA method demonstrates adaptability across multiple tasks, including NLP tasks and vision tasks" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores parameter-efficient fine-tuning (PEFT) methods for deep State Space Models (SSMs), especially in the context of language modeling tasks.
It investigates the effectiveness of various PEFT methods, such as prompt-based prefix-tuning and Low-Rank Adaptation (LoRA), applied to SSM architectures like Mamba. A new variant called SDLoRA (Selective Dimension LoRA) is proposed in this paper to selectively update channels and states in the SSM modules, aiming to enhance the fine-tuning performance while reducing parameters. The results indicate that SDLoRA outperforms conventional LoRA when fine-tuning SSM-based models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) Limited Applicability Beyond SSMs: The focus on SSMs means SDLoRA may not generalize well to non-SSM architectures or hybrid models such as Transformer models or Transformer-SSM combinations. Its broader applicability to other architectures remains untested.\n(2) Parameter Selection: The dimension selection process in SDLoRA relies on parameter magnitudes, which may not be optimal and could benefit from a more sophisticated selection algorithm. And what if the magnitude of each channel changes during the fine-tuning stage?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Have you looked at all into Hybrid Architectures?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is looking at something which definitely needs to be studied, as it is not a foregone conclusion that adapters, prompt tuning, etc... will have the same benefits for SSMs as they do for transformers. A detailed study understanding the different tradeoffs brought on by the SSM architecture should be explored.\n\nThe paper has a good mix of theoretical and empirical analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to explore the application of LoRA to SSMs. It explores different adapter types as well as different ways of how to apply them to the SSM block. The paper additionally includes theoretical results to justify their choices." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper, to me, does not have enough substance. It needs either more detailed theoretical results, which result in some innovation, or more empirical results which help the community understand how different adapters perform with transformers vs SSMs.\n\nHere are some specifics:\n1. Mamba2 is not included in any of the results; the paper should be updated to look at this architecture as well\n2. The theoretical analysis is largely only for the S4 block; it's not clear to me the conclusions would extend to S6, and they aren't convincing to me for that reason.\n3. The empirical results are lacking. The SDLoRA module is a slightly different application of the standard LoRA block, essentially targeting different layers within the block instead of a new design. Also only two adapters are compared within this work.
To have a true empirical study of this, many more experiments need to be conducted, even in the presence of theoretical results.\n\nI think the paper might be more interesting if it were to do something like the following:\n- Compare many adapters in a standardized setting for Transformers, Mamba1, and Mamba2\n- Isolate differences in which adapters perform well for each class of model\n- See if these insights give rise to a new adapter design specifically for this model class\n- Draw theoretical results to try to explain why certain adapters perform differently than in Transformers\n\nThe current structure doesn't feel as though it is contributing much" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors comment on how SDLoRA could benefit hybrid models that combine SSMs and Transformers? \n2. On Mamba-130M, the authors use GLUE and DART benchmarks, while on Mamba-1.4B, they use SAMSum and Spider benchmarks. Could the authors elaborate on the considerations behind this benchmark selection strategy?\n3. In Table 4, SDLoRA outperforms Full Fine-Tuning. Was this outcome expected, and if so, could the authors provide insights into why this might be the case? Additionally, have the authors considered conducting experiments on more challenging visual tasks, such as Imagenet-1k, to further validate the effectiveness of SDLoRA?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper systematically analyzes how existing PEFT methods perform on SSM-based models and which modules are most effective for fine-tuning, revealing that prompt-based methods like prefix-tuning are no longer effective. Additionally, it shows that applying LoRA to linear projection matrices without modifying SSM modules yields the best results. The paper further introduces SDLoRA, a novel approach to parameter-efficient fine-tuning (PEFT) for SSMs. This method's innovativeness lies in its selective dimension updating strategy within SSM modules. The document also references various datasets used for evaluating the proposed methods, and the clarity of the paper is generally good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a systematic study on the application of parameter-efficient fine-tuning (PEFT) methods to Deep State Space Models (SSMs). The paper reveals that prompt-based methods are less effective for SSMs, while LoRA remains effective. The authors further propose SDLoRA, which selectively updates certain channels and states on SSM modules while applying LoRA to linear projection matrices, demonstrating improved performance over standard LoRA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper presents a novel approach for parameter-efficient fine-tuning SSMs, the innovation seems to be more incremental than groundbreaking. \n2. The paper lacks a detailed analysis of the computational overhead from the selective dimension tuning process. This is crucial to understanding the trade-offs between parameter reduction and computational efficiency in SDLoRA.\n3. 
The paper would benefit from a detailed analysis of SDLoRA with hybrid models that combine SSMs and Transformers, as these models are becoming increasingly popular and have shown promise in various domains." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024parameterefficient,\ntitle={Parameter-Efficient Fine-Tuning of State Space Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=27n0kvWgqT},\nnote={under review}\n}" }, "abstract": { "value": "Deep State Space Models (SSMs), such as Mamba `(Gu \& Dao, 2023)`, have emerged as powerful tools for language modeling, offering high performance with efficient inference and linear scaling in sequence length. However, the application of parameter-efficient fine-tuning (PEFT) methods to SSM-based models remains largely unexplored. This paper aims to systematically study two key questions: (i) How do existing PEFT methods perform on SSM-based models? (ii) Which modules are most effective for fine-tuning? We conduct an empirical benchmark of four basic PEFT methods on SSM-based models. Our findings reveal that prompt-based methods (e.g., prefix-tuning) are no longer effective, an empirical result further supported by theoretical analysis. In contrast, LoRA remains effective for SSM-based models. We further investigate the optimal application of LoRA within these models, demonstrating both theoretically and experimentally that applying LoRA to linear projection matrices without modifying SSM modules yields the best results, as LoRA is not effective at tuning SSM modules. To further improve performance, we introduce LoRA with Selective Dimension tuning (SDLoRA), which selectively updates certain channels and states on SSM modules while applying LoRA to linear projection matrices.
Extensive experimental results show that this approach outperforms standard LoRA." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "parameter-efficient fine-tuning", "state space model", "mamba", "lora" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/92dbb956d1528dcba9b76b94ddc2ba428238cd53.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Parameter-Efficient Fine-Tuning of State Space Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
28TLorTMnP
A Novel Soft Alignment Approach for Language Models with Explicit Listwise Rewards
main
Withdraw
large language models;preference alignment;listwise optimization objective
foundation or frontier models, including LLMs
zhihao dou;Yi Zhao;Michael Zhu;Kaizhu Huang;Aaron Xuxiang Tian
~zhihao_dou2;~Yi_Zhao18;~Michael_Zhu3;~Kaizhu_Huang1;~Aaron_Xuxiang_Tian2
1;3;3;3
5;4;3;4
1;2;2;2
1;2;2;2
1;1;2;3
2.5
4
1.75
1.75
1.75
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* line 169: what is the index t?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Some of the problems identified in this work seem convincing -- the use of only two outputs, and the problem of decreasing likelihood of preferred responses. \n\n* Results show improvement compared to baselines on mt-bench and alpaca-eval\n\n* Analysis shows the regularization in SPO-abs is effective; indeed the prob. of the preferred response does not decrease anymore." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new offline alignment method - SPO and SPO-abs (I have seen SPO already at least twice, here is one instance https://arxiv.org/abs/2401.04056 so maybe a different acronym is needed).\n\nSPO is different from DPO in a few ways:\n1. Extension from binary classification to multiclass classification by changing the sigmoid cross entropy loss in DPO to softmax cross entropy in SPO.\n\n2. Adding softness by assuming a teacher model provides a distribution over the multiple possible responses and minimizing cross entropy with respect to that distribution (instead of assuming only a single response is labeled as gold). Notice this strongly assumes we can get a scalar reward for each response, which seems problematic unless the annotator is a machine learning model and not humans (see weaknesses below)\n\n3. In SPO-abs - adding a term that interprets rewards as logits of a sigmoid and essentially tries to maximize the log probability of samples from the base model. This is meant to combat the problem of decreasing probability in DPO and is somewhat orthogonal to the other points made.\n\nResults show that when using ultrafeedback as a reward/preference dataset and evaluating on MT-Bench and Alpaca-eval one obtains some gains compared to DPO and some natural extensions."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The authors claim that using a list of rewards is more efficient and convenient than using preferences. This runs counter to past work even from the original summarization paper by Stiennon et al. (learning to summarize...) where the motivation to get preferences is that it is very hard to get scalar rewards from humans. And also counter to many other works (for example, self-play preference optimization, which is also coined SPO, where they show that scalar rewards are problematic because human preferences do not obey transitivity). So most empirical evidence points against using scalar rewards. The only reason to do this seems to be if you just have a model from which you distill the reward, which is what happens in this work -- but the authors don't talk about this or argue for this. Is their claim that this approach is good assuming you are anyway going to use a teacher model to learn from? If the method is mostly applicable to model teachers, that would be good to state precisely and as a limitation\n\n* The authors seem to not acknowledge/know relevant past work.\n(a) Using soft labels instead of hard labels - the paper \"robust preference optimization\" by Fisch et al. from about 5 months ago already discusses at length the use of soft labels. I think simple use of soft instead of hard labels was done even earlier in \"Human Alignment of Large Language Models through Online Preference Optimisation\" but I am not 100% sure.\n(b) There have been a few papers already addressing the problem of reducing likelihood - one is Pal et al., which the authors do cite but don't really mention the fact that they have a positive-DPO variant that adds a similar term for regularization, as well as the Fisch et al.
paper from above as well as Liu et al. from 5 months ago (your SFT loss is implicitly an adversarial regularizer)\n(c) Googling for a minute I found work on using lists of generations in alignment - LIRE -- https://arxiv.org/pdf/2405.13516 \n\n* The extension of the binary case to multiclass (when you don't consider softness) is somewhat incremental. Moreover, without softness it doesn't really exploit the full information in the list of generations - it only maximizes the probability of the single preferred response but doesn't take into account the relative preference of generations that are not the top-ranked ones. In an assistant setting it is very hard to assume there is a single gold response, and thus modeling this as multiclass where there is a single correct class seems like a problematic modeling assumption.\n\n* The statement of what the authors are trying to solve is unclear - is it addressing multiple responses? is it addressing the case with scalar rewards? is it just the conjunction of both? is it the likelihood ratio decrease of DPO? It is hard to understand what the authors view as the key contribution.\n\n* Related work - the first paragraph in my humble opinion distracts from the flow - we don't need to go all the way back to BERT for this paper.\n\n* Experimentally - I did not understand the motivation for choosing the reasoning setup - is there any reason to think that SPO will be good for this setup? Is this an arbitrary choice?
Also, there is a mismatch between the dataset used for training the reward model / aligning the model and the actual benchmarks used for evaluation, and it is hard to reason about the results with the need to also generalize to out-of-distribution settings as part of the experiment.\n\n* minor: \"The soft label represents the important dark knowledge from the teacher LLM” I would encourage rephrasing - what does dark knowledge mean?\n\n* minor: The authors use the acronym LPO instead of SPO in the figures in the end, probably by mistake." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses above.\n\nThere are also many small grammatical errors throughout the paper. While most of these do not significantly affect readability, they may be worth addressing, e.g.:\n* Abstract: “a adaptive loss” -> “an adaptive loss”\n* Abstract: “that inject listwise” -> “that injects listwise”\n\nOther nits:\n* Should use \citep vs. \citet in several places, e.g.
lines 43 and 47\n* “dark knowledge“ seems like an odd way to describe the information contained in the knowledge distillation loss\n* Table 2 - the underline indicating the second highest number appears to be in the wrong place for the AlpacaEval column" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper studies a generalization of the DPO objective to the multiclass setting, which could be a more efficient way to train models when there are multiple generations per prompt.\n* The paper evaluates the performance of the proposed objectives across multiple settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Broadly, this paper studies the problem of alignment with offline algorithms such as DPO. The paper is distinguished from prior work in two ways:\n* While most prior work has focused on the setting where only pairwise preference judgements are available, this paper focused on the setting with scalar rewards assigned to a set of generations.\n* The authors propose new algorithms based on DPO, termed SPO and SPO-abs. \n\nSPO can be seen as a generalization of DPO, with two modifications:\n* The objective considers >2 generations per prompt, generalizing from the binary (i.e. K=2) case to the multiclass (K>2) case.\n* The prediction target is “soft” (i.e. based on the distribution from some teacher model) as opposed to “hard” (i.e. one-hot label from preference dataset).\n\nSPO-abs adds an additional term to the objective function that incentivizes assigning higher likelihood to preferred generations.\n\nThe authors compare SPO and SPO-abs with DPO and other baselines across several settings." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think this paper could provide some interesting insights with some modifications. However, I think there are some serious weaknesses with the current version. To recap, the proposed algorithms vary from DPO in three ways:\n\n1. They generalize DPO from the binary to multiclass setting.\n2. They use a soft distribution from a teacher model as opposed to the “one-hot” labeled distribution.\n3. The `-abs` variant includes an additional term aimed at regularizing towards higher likelihood for preferred responses.\n\nI have concerns about each of these contributions individually.\n\n1. The generalization of DPO to the multiclass setting seems to be the most interesting contribution, as I am not aware of prior work studying this. However, it does not seem like this contribution was evaluated sufficiently on its own. Setting aside the other differences between SPO and DPO (i.e. “hard” vs. “soft” labels), when multiple generations per prompt are available, should we use the proposed multiclass objective or a binary objective over pairs of generations? What are the pros and cons? This is a very interesting question, but the DPO-1vs0 and DPO-pw baselines seem to conflate the other differences in SPO vs. DPO. It would also be good to consider the computational efficiency of such an approach. Is it more memory intensive to have an objective over K>2 generations?\n\n2. The use of a soft target distribution from a teacher vs. a hard target distribution has been proposed by prior work, which is not discussed here, e.g. the “Distilled DPO” (d-DPO) objective of Fisch et al. 2024 (https://arxiv.org/abs/2405.19316). I have not checked the math rigorously, but the proposed objectives seem to be equivalent for the K=2 case. 
It is still interesting to study a generalization of this objective to the K>2 case, but prior work should be discussed, and it feels misleading for the paper’s title to stress the *novelty* of the proposed methods. The comparison between “hard” and “soft” labels also seems to be confounded by the fact that the “teacher model” is so much larger and more powerful than the “student” model being used as the policy. If we have the resources to train such a large “teacher” model, why not train an equally large “student”?\n\n3. For the additional objective in SPO-abs vs. SPO, this also seems to be lacking contextualization in prior work. For example, the authors say “We hypothesize the main cause of this decreasing likelihood is that SPO methods only adjust relative rewards among responses, rather than optimizing their absolute value.”, but this is more or less exactly the hypothesis proposed and studied by prior work (e.g. Pal et al. 2024 (https://arxiv.org/abs/2402.13228)). The proposed new term in the SPO-abs loss did not seem well motivated, i.e. why choose this specific formulation vs. some other? There was a mention of a connection to NCE, but this seemed underdeveloped and the connection was not clear to me. And, more importantly, it’s not clear why some approach from prior work, e.g. based on “DPO-Positive” from Pal et al. 2024 would not be sufficient? Minimally, this should be compared with empirically. Finally, some claims related to SPO-abs seemed confusing, e.g. the authors state “SPO-abs can also guarantee convergence to the optimal LM policy” but it’s not clear what guarantees are offered, or what evidence is provided to support such guarantees.\n\nTherefore, I think the paper would greatly benefit from a revision that more clearly establishes the connection to prior work and experiments that better disentangle the impact of the various aspects of the proposed methods.
Proper contextualization with prior work and understanding the impact of the individual contributions is especially important given how crowded the space of proposed DPO variants has become.\n\nWhile some reviewers may take issue with the focus solely on the offline setting (and not comparing with online methods) or the limited model scales explored in the experiments, these seem like reasonable choices to me given the expense and complexity of experiments in this area." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I find the initial related work to be too broad without sufficient coverage of recent work in offline alignment. TBH, both the first two paragraphs in related work seem too generic and unlike typical alignment papers. While this is not a problem in itself, the paper also does not sufficiently address more recent work in the direct-alignment/RLHF space and does not allude to any works in knowledge distillation. I also have concerns about claims that algorithms like KTO are only applicable to pairwise preference data since KTO clearly applies to pointwise preferences as a valid data point. I request the authors to provide more clarification on this, as it will help motivate their scalar reward learning with KD formulation and make it much clearer for the reader.
\n\n\nLine 154: “Compared with constructing preference datasets, annotating each response with scalar rewards can\nbe more flexible and convenient.” —> are there any citations to back this claim? As far as preference annotations by humans of LLM responses are concerned, it is intuitively easier to get preferences/choices given as a pair than to get exact scalar rewards for responses. This is because getting preferences only depends on the pair in focus while the annotator has to calibrate wrt the data distribution in assigning scalar rewards [1]. \n\n\nLine 469: The fig.1 plot seems to suggest expected rewards are plotted against steps and the numbers of the Y-axis are named as *rewards* in the legend. However, the prose in the paper claims y-axis represents likelihoods of the chosen and rejected responses. Are these rewards (as defined in the paper as log-ratios with the baseline policy) or average likelihoods of chosen and rejected responses? What is also concerning is that both fig.1 and 2 have typos in legends: SPO-abs is written as LPO. Can the authors provide some clarification on these issues and also for how many steps were these DPO and SPO-abs models trained, since the x-axis in fig.1 represents the percentage of total steps, which does not clarify the total steps used for training.\n\n\nPresentation issues/typos:\n\n—“In order to\ndraws latent knowledge from the these powerful LLMs..”—> In order to draw..\n—Line 323: “We exam the following baseline methods:”—> We examine …\n\n—Line 467: “As shown in Figure 1.
The likelihood of preferred responses” —> should be a comma with no caps on “the” likelihood…\n\n—Line 395: “Preferecne dataset” —> Preference dataset" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper clearly specifies its goal to devise an algorithm that learns from both preference labels as well as pointwise scalar rewards, which are both typically found in popular preference alignment from human feedback datasets. The benchmarks and baselines chosen are relevant and reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes two methods, SPO and SPO-abs, that are specifically designed to learn from both preference as well as scalar reward datasets. They conduct experiments with SPO policies learning from both reward and preference datasets and evaluate on benchmarks like MT-Bench, AlpacaEval and Ultrainteract. Their experiments suggest that their methods surpass previous preference learning baselines like DPO as well as pointwise learning methods like KTO on many of these benchmarks."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Please see summary." }, "flag_for_ethics_review": { "value": [ "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "N/A" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "**This paper is almost the same as [1].**\n\nSome Evidence:\n- The \"Preliminary\" section in this submission copies the \"Background\" in [1]\n- The \"Method\" section (section 4) is exactly the same as section 3 in [1], with the same derivation and objective.\n- Table 1 in this submission is almost the same as Table 1 in [1].\n- Baseline results in Table 2 in this submission is exactly the same number as in Table 2 in [1].\n\n**This is plagiarism and should be desk rejected.**\n\nReference:\n[1] Noise Contrastive Alignment of Language Models with Explicit Rewards. Chen et al. NeurIPS 2024. https://arxiv.org/abs/2402.05369." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "N/A" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\ndou2024a,\ntitle={A Novel Soft Alignment Approach for Language Models with Explicit Listwise Rewards},\nauthor={zhihao dou and Yi Zhao and Mige Zhu and Kaizhu Huang and Aaron Xuxiang Tian},\nyear={2024},\nurl={https://openreview.net/forum?id=28TLorTMnP}\n}" }, "abstract": { "value": "Existing alignment methods, such as Direct Preference Optimization (DPO), are mainly tailored for pairwise preference data where rewards are implicitly defined rather than explicitly given. In this paper, we introduce a general framework for large language model alignment, leveraging a novel optimization objective to bridge the gap in handling reward datasets with a list of responses explicitly annotated with scalar preferences scores.\n\nOur work comprise a novel algorithm, soft preference optimization, SPO, which enables the direct extraction of an LM policy from reward data as well as preference data. The core of SPO is a novel listwise preference optimization objective with the exponential-logarithm function form and a adaptive loss coefficient that inject listwise preference signals into the large language model. \n\nWe evaluate our methods in both reward and preference settings with Mistral models in different sizes. Experiments suggest that our method surpasses various preference baselines when reward datasets are available. We also find our method significantly outperforms DPO in complex reasoning tasks like math and coding." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": { "value": [ "~zhihao_dou2", "~Yi_Zhao18", "~Mige_Zhu1", "~Kaizhu_Huang1", "~Aaron_Xuxiang_Tian2" ] }, "authors": { "value": [ "zhihao dou", "Yi Zhao", "Mige Zhu", "Kaizhu Huang", "Aaron Xuxiang Tian" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language models", "preference alignment", "listwise optimization objective" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "dou|a_novel_soft_alignment_approach_for_language_models_with_explicit_listwise_rewards" }, "pdf": { "value": "/pdf/c399d9168ad17867d01f90a88bdd54068cb71d19.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "A Novel Soft Alignment Approach for Language Models with Explicit Listwise Rewards" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
28U5Olm32r
Understanding Model Ensemble in Transferable Adversarial Attack
main
Active
adversarial examples;adversarial transferability;model ensemble attack
alignment, fairness, safety, privacy, and societal considerations
5;6;6;6
4;3;3;3
2;4;2;3
2;4;2;3
3;3;3;3
5.75
3.25
2.75
2.75
3
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why is Line 1035 equal to Line 1038?\n2. Why is Line 1378 equal to Line 1381?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The theoretical results are solid and novel. Specifically, it is interesting to see the transferability error can be connected with the empirical Rademacher complexity in a similar form with generalization bound, and the Hellinger distance can be used to quantify the inter-dependency across surrogate models.\n\n- The theoretical results can have a broader impact, as the analysis tools, such as those for bounding dependent random variables and the empirical Rademacher complexity for ensemble, can be applied elsewhere.\n\n- The writing is clear and easy to follow in general." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission theoretically studies the transferability error - the chance of being successful if the attack is generated by transferring from an ensemble of models. 
The core result is an upper bound of transferability error involving a vulnerability term and a diversity term, which further boils down to empirical ensemble Rademacher complexity and the Hellinger distance between joint model distributions and i.i.d. model distribution. The key insight is that the transfer attack needs to involve both more and diverse models and reduce model complexity to be powerful. Results are empirically verified on multiple datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The studied problem and practical implications may be limited. The analysis is only applicable for ensemble-based transfer attacks and it can only directly guide the design of more powerful attacks of this kind. How to leverage the analysis to better defend the model, or how to generalize the results beyond L2-bounded attacks, are worth further exploration.\n\n- Some insights from the theory may not be justified enough. For example, in Line 333-335, the paper mentioned that we need to increase the diversity of parameters in surrogate models to reduce $H_\alpha(\cdot)$. It seems that surrogate models need to be independently trained to achieve a minimal $H_\alpha(\cdot)$. However, in practice, encouraging model diversity, e.g., introducing some diversity-promoting regularizers, can sometimes further improve the attack efficiency. As a result, encouraging model diversity introduces model-level dependency and increases $H_\alpha(\cdot)$ but reduces transferability error. From this point of view, the theory may not reflect the full picture of adversarial transferability.\n\n- The experiment part is relatively unclear. For example, in Figure 2, good to mention that $\lambda$ is the weight decay, explain what the $x$-axis is, and discuss detailed training parameters in the main text. \n\n\nMinor:\n1. Line 106: combine -> combines\n2.
Line 153: the hypothesis space maps to a discrete label space, and then the loss function $\\ell: \\mathcal{Y} \\times \\mathcal{Y} \\mapsto \\mathbb{R}_0^+$ has a discrete domain $\\\\{-1,1\\\\} \\times \\\\{-1,1\\\\}$ which is weird, may need some fix.\n3. Line 279: the redundant phrase \"provided in Appendix\"\n4. Line 1061: please define $R$ beforehand.\n5. Line 1281 - 1290: seems that there is a missing $1/N$ coefficient before all $\\sum_{i=1}^N f(\\theta_i; x)$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Other than the concerns pointed out in the weaknesses I have some additional questions for the authors:\n\n1. I have some confusion about the presented plots in the experiments which are not well-explained. Regarding the experiments, are you using mini-batch SGD as the optimizer? By \"# step\" on the x-axis do you mean the number of epochs? For loss value, is this the loss value of the expectation of logits on a training sample or test sample? Isn't that supposed to be decreasing as all the models are being trained?\n\n2. In figure 4, the variance of the logits from the models in the ensemble is shown to be increasing for CIFAR-10, but the number of epochs is too small and it is not clear whether the same trend continues. Could authors plot them with a higher number of epochs?\n\n3. 
The plots with increasing values of the variance of the logits from the models of the ensemble seem contradictory to Lemma 5 of [1]. The authors also mention for some datasets they see a decreasing trend similar to what is expected from Lemma 5 of [1]. Could the authors comment on the potential reasons for their different observations for other datasets?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and well motivated with a good survey of related works.\n\nBy defining the transferability error, authors make a good analogy to generalization error and derive some corresponding results to provide a better understanding of model ensemble attacks.\n\nAuthors avoid the independence assumption that is used for studying generalization and derive an upper-bound is based on the divergence of the joint distribution of the parameters of the models from the case where they are independent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a theoretical framework to explain the observations by prior empirical methods on increasing the effectiveness of model ensemble attacks. They define transferability error to measure the effectiveness of an adversarial example which is basically analogous to the generalization of the adversarial example to unseen trained models belonging to a specific function class. 
They also define an empirical version of Rademacher complexity as a measure of complexity for the input space for an ensemble of N classifiers and show that the transferability error is upper-bounded by a combination of this measure of input space complexity and the divergence of joint distribution of the model parameters of the ensemble from the product of their marginals which accounts for non-independence of the models of an ensemble." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Authors connect their theoretical results with empirical observations in prior work regarding the diversity of the models; however, their definition of diversity does not match with many of these prior works. For example in [1] and [2] the diversity is defined as having gradient vectors (with respect to the inputs) with low cosine similarity. What authors consider as diversity here actually is supposed to decrease naturally according to Lemma 5 in [1]. Could authors clarify how their definition of diversity relates to these previous definitions in the literature, particularly those based on gradient similarity. \n\n[1] Yang, Z., Li, L., Xu, X., Zuo, S., Chen, Q., Zhou, P., ... & Li, B. (2021). Trs: Transferability reduced ensemble via promoting gradient diversity and model smoothness. Advances in Neural Information Processing Systems, 34, 17642-17655.\n\n[2] Kariyappa, S., & Qureshi, M. K. (2019). Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981.\n\n\n2) The complexity of the models in the ensemble and the complexity of the input space seem to be used interchangeably sometimes. 
Equation 12 shows the complexity of input space defined by the authors, but in the follow-up discussion (line 342) it is mentioned that the model complexity has to be controlled when using stronger and more diverse ensembles.\n\n3) The interpretation of input space Rademacher complexity defined by the authors does not seem clear! The presented results suggest decreasing this complexity to achieve a tighter upper bound on the transferability error. However, decreasing this complexity means achieving a state where the sample in the input space is not able to achieve a high loss value for the models in the ensemble. This basically means that the optimization in equation 3 will achieve a lower value for equation 1 which authors are seeking to increase. This seems contradictory and it would be great if authors could clarify that. \n\n4) The experiments do not seem to be comprehensive in evaluating the presented theoretical results. For example, there is no analysis with respect to the complexity of the input space or the trade-off of diversity and complexity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Given the identified trade-off between vulnerability and diversity, could the authors suggest any criteria or metrics for balancing these components during ensemble model selection?\n\n- The experiments use standard datasets like MNIST and CIFAR-10, which may not fully represent the complexity encountered in real-world applications. Have the authors considered testing on more complex datasets (e.g. CIFAR-100, SVHN, ImageNet, etc.)?\n\n- Can the authors give the specific method of generating adversarial samples in the experiment and the specific meaning of \"steps\" in fig. 2, 3 and 4." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper demonstrates strong originality by addressing the theoretical gaps in model ensemble-based adversarial attacks, introducing the novel concepts of transferability error and vulnerability-diversity decomposition, and providing well-founded upper bounds for transferability error." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a theoretical foundation for model ensemble methods used to generate transferable adversarial examples. The authors introduce three new concepts: transferability error, diversity, and empirical Rademacher complexity, which together decompose transferability error into two primary components: vulnerability and diversity. Furthermore, the authors establish a bound on transferability error and propose practical guidelines to reduce it, such as increasing model diversity and managing complexity to prevent overfitting. Extensive experiments validate these findings."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although this paper provides a strong theoretical foundation, some limitations affect its overall impact. \n\nWhile the experiments are broad in scope, they can be enhanced by testing on a wider range of real-world scenarios or datasets outside of standard benchmarks such as MNIST and CIFAR-10 to verify applicability in more diverse contexts (e.g. CIFAR-100, SVHN, etc.)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My main concerns are the implicit assumptions in Eq. (3), making the derivations much less interesting. Besides, the concluded practical guidelines are already widely applied in the literature, and there is also a lack of empirical comparisons to previous baselines for ensemble-based transfer attacks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strengths of this paper include:\n\n- The writing is clear, with intuitive explanations in Figure 1.\n- Notations are clearly defined with neat formulations, the derivations are self-consistent.\n- The concluded practical guidelines are correct and already used in the literature."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents theoretical insights into model ensemble adversarial attacks. The authors define transferability error, which measures the error in adversarial transferability. They also discuss diversity and empirical model ensemble Rademacher complexity. The authors then decompose the transferability error to explain how it originated in the model ensemble attack. Furthermore, they derive bounds on the transferability error using complexity and generalization terms, and conclude three practical guidelines for reducing transferability error: (1) incorporating more surrogate models, (2) increasing their diversity, and (3) reducing their complexity in cases of overfitting. Some empirical evaluations are done on MNIST, CIFAR-10, and ImageNet." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The Weaknesses of this paper include:\n\n- **Implicit assumptions in Eq. (3).** The authors define the most transferable adversarial example $z^*$ in Eq. (3) as $z^*=\\textrm{argmax} L_P$, where $L_P$ in Eq. (1) is defined by taking expectation over $\\theta\\sim P_\\Theta$. This formulation has implicit assumptions that **1)** the target model share the same parameter space $\\Theta$ with the surrogate models, i.e., they have the same architectures; **2)** the target model follow the same distribution $P_\\Theta$ with the surrogate models, i.e., they apply the same (or same distribution) of training configurations. Both of these assumptions make the transfer problem overly simplistic, because in practice, the target model typically employs different model architectures and training configurations (including different datasets) than the surrogate models.\n\n- **Using Rademacher Complexity in deep cases.** First, I personally don't believe that Rademacher Complexity can convey reliable information when we are talking about deep networks. 
Second, Rademacher Complexity is more useful for asymptotic analysis, otherwise a lower upper bound of TE (i.e., Eq. (12)) does not indicate a lower value of TE.\n\n- **The three practical guidelines are already well-known.** While the authors demonstrated some theoretical bounds, the three guidelines they concluded—(1) incorporating more surrogate models, (2) increasing their diversity, and (3) reducing their complexity in cases of overfitting—are all well-known in the literature. There is also a lack of empirical comparisons to previous baselines for ensemble-based transfer attacks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Model Ensemble in Transferable Adversarial Attack},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=28U5Olm32r},\nnote={under review}\n}" }, "abstract": { "value": "Model ensemble adversarial attack has become a powerful method for generating transferable adversarial examples that can target even unknown models, but its theoretical foundation remains underexplored. To address this gap, we provide early theoretical insights that serve as a roadmap for advancing model ensemble adversarial attack. We first define transferability error to measure the error in adversarial transferability, alongside concepts of diversity and empirical model ensemble Rademacher complexity.
We then decompose the transferability error into vulnerability, diversity, and a constant, which rigidly explains the origin of transferability error in model ensemble attack: the vulnerability of an adversarial example to ensemble components, and the diversity of ensemble components. Furthermore, we apply the latest mathematical tools in information theory to bound the transferability error using complexity and generalization terms, contributing to three practical guidelines for reducing transferability error: (1) incorporating more surrogate models, (2) increasing their diversity, and (3) reducing their complexity in cases of overfitting. Finally, extensive experiments with 54 models validate our theoretical framework, representing a significant step forward in understanding transferable model ensemble adversarial attacks."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9e5f5feff8a7dd02f34ea09ba97c335a0b026dd5.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/88e7d3355cd7bed0e8b253e8aa4d52b09970bc12.pdf" }, "title": { "value": "Understanding Model Ensemble in Transferable Adversarial Attack" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
28abpUEICJ
CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations
main
Active
computational neuroscience;multi-regional brain interactions;sparsity;cross-session variability;dynamical systems modeling;neural dynamics;non-simultaneous neural recordings
applications to neuroscience & cognitive science
3;6;8
4;4;4
3;3;4
2;3;3
4;3;3
5.666667
4
3.333333
2.666667
3.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Please provide details of how the synthetic data was generated. \n\n2. In Fig. 2M and O, I observed high inconsistency in reconstruction accuracy in other models, e.g., mp_rSLDS_Gauss (per trial). This is synthetic data and, I assume, each session contains a similar level of noise. It is very unclear how other baseline models fail almost completely yet perform near perfectly in some sessions. My question is, is this inconsistency supposed to support the robustness of CREIMBO? \n3. With real data, the authors demonstrated the validity of CREIMBO. While I agree with the authors' claim, it does not necessarily demonstrate the strength of CREIMBO. Are there any interesting differences or unique patterns that can be obtained from CREIMBO vs. other baseline models? \n4. The authors checked the robustness of CREIMBO over different noise levels with real data. The more common practice is to evaluate the robustness of models with synthetic data, which has ground truth. If there were specific reasons why it was done with real data, please specify them." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The major strength of this study comes from its conceptual advances embedded in the proposed CREIMBO.
While there have been efforts to model high-dimensional brain dynamics via deep learning, many of these models have failed to yield interpretable features, which is the most important aspect in the neuroscience field. Thus, the neuroscience field still tends to rely on relatively simpler yet interpretable models. In this regard, I believe the CREIMBO model proposed in this study can be a good solution to this gap, clearly upholding the originality of this study. The authors thoroughly examined the validity of the proposed model using simulated and experimental data, leading to the high quality of this work. Plus, the clarity of this paper is relatively high, as the model was well described in the manuscript, although the reviewer believes there is some room requiring the authors' attention. Altogether, the scientific significance of this paper is clear and should be interesting to the electrophysiology field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Through this study, the authors have proposed a novel analytical approach (named “CREIMBO”) for learning the dynamics of latent representations from high-dimensional electrophysiology. The major advance of CREIMBO comes from conducting the compression of high-dimensional data and the extraction of dynamics simultaneously, while keeping interpretability. Through experiments with synthesized and real data, the authors have properly demonstrated the validity of CREIMBO. As CREIMBO contains conceptual novelty and validated effectiveness, I believe this model can be one of the useful candidate models for studying high-dimensional, partially overlapping data, such as intracranial EEG or multi-array spike data. Thus, CREIMBO will be useful to neuroscientists."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the overall strength of this study is obvious, there is a major weakness: insufficient details in the simulation study with synthetic data. It is unclear how the synthetic data was generated, e.g., what the noise level was and what kinds of parameters were used. Due to this uncertainty, it is nearly impossible to understand some of the intriguing findings in this study, especially with synthetic data. For example, there is very high inconsistency in performance across sessions in other methods, but not for CREIMBO (Fig. 2M). While this can be interpreted as robustness of CREIMBO, it is also possible that the choice of benchmark models was not optimal for this type of synthetic data. Relatedly, there was no comparison work with real data. Thus, the superiority of CREIMBO needs further validation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can this model be applied to other organisms, like mice or rats? Extending the application to neuroscience animal models would greatly increase the impact of the presented work.\n \nHow difficult would it be to extend this model to use a Poisson observation model to better capture neural spiking activity?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The manuscript is clearly presented, referenced in the context of the relevant work, and technically sound. The method was tested in simulations coming from the generative model, where the model was able to recover the parameter settings, and was able to better capture the variability compared to existing models. They also tested the model in neural recordings, identifying functional connectivity between and within brain regions. Moreover, they tested the robustness to increasing levels of noise in the data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The recent developments in neural recording technologies allow recording from large populations of neurons from multiple brain regions. Latent space models are often used to analyze these datasets, but they are generally limited to the study of single or few populations of neurons recorded simultaneously. To overcome these limitations, the presented work introduces a new algorithm that can capture variability across recording sessions and across and within brain regions. The method assumes a shared latent representation across areas and structured priors given each session. The authors validated the method on simulated data and neural data, showing that the model can capture variability across and within brain regions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Further work is needed to fully understand the impact of this work. The authors motivate their work by the ability to better understand behavioral tasks from asynchronous recordings of spiking neural activity. However, the authors only tested the model on a single dataset, where they limit the analysis to functional connectivity.
It would be relevant to assess behavioral variables, such as the ability to decode the presented image stimuli from the learnt representations. Since the emphasis of the model is brain functional connectivity, the method should be compared to alternative recording methods such as fMRI or the LFP present in the dataset. Moreover, the comparison to alternative models in the neural data is limited to one qualitative test. A comprehensive quantitative comparison, such as decoding or reconstruction performance, is needed to understand the capabilities of the proposed method. Along the same lines, it is not surprising that the model outperforms alternative methods when the simulated data is tailored to the given model. A more relevant comparison would be simulating neural data from a neural process with temporal, task and/or behavioral variability and fitting the different models there. It would also be relevant to highlight the model strengths and weaknesses in simulated data. One of the motivations for this work is its application to spiking data, but the Gaussian assumption limits its applicability to this kind of observation model, which must instead be handled in preprocessing. While the authors verbally list this and other limitations in the discussion, it would be informative to show the limitations and, more importantly, the capabilities of the model as results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In line with my previous comment in weaknesses item 2, the authors show subject 10 in Fig. 3, which has a small number of available neurons compared to some other subjects (for the Screening task). Are the example subcircuits and latent states extracted for the Screening task? Do the identified subcircuits and A matrices look denser for a higher number of neurons?\n- In the ablation study in Fig. 2 ‘All regions (sparse)’ A matrix, the authors show that CREIMBO cannot infer the true underlying connectivity matrix, which makes sense since the inductive bias on the block-diagonal A matrix is removed. Do the authors use the same A matrix initializations in this ablation study? If not, how would the results change if the same block-diagonal initialization is applied for the ‘All regions (sparse)’ case? Can the authors provide an intuition on why inferred latent factors deviate significantly from the true latents? Would the same hold in the K=1 case, in which a similarity transform would exist between not block diagonal sparse, not block diagonal not sparse, and block-diagonal sparse A matrices? Also, are the authors showing trial- and session-averaged latent states in these figures? If so, what do single-session latents look like for CREIMBO?\n- The block structure imposed on the A matrix implies that no regions have shared latent states and that interregional interactions are captured by temporal dynamics. Did the authors try having some latent factors shared across all regions? How would it change the performance?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors provide a new perspective on existing multi-regional neural models by allowing their model to leverage multi-session recordings that can help extract global and robust interregional interactions. While doing that, they keep the interpretability intact, unlike deep-learning approaches. \n- The authors performed exhaustive simulation experiments to show the importance of their model formulation. \n- The paper is well-written and easy to follow. The proposed model architecture and training framework are intuitive and effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose CREIMBO, which can extract interpretable neural subcircuits underlying multiregional neural signals by utilizing multi-session recordings that can have a variable number of recording units, trials, and durations. Through simulations, they show that their model successfully uncovers the ground truth dynamics when multi-session recordings are modeled together. In real data analysis, the authors show that the identified subcircuits can be sparse, indicating specialized functionality, and reveal across-region interactions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Even though the paper is well written, the subfigures are very small and too crowded, which makes them hard to understand. Also, I think captions/labels for some appendix figures such as Fig. 11 should be improved. \n- It seems like the number of ensembles per region ($p_j$) is an important hyperparameter for CREIMBO. I wonder if the model would reveal some consistent subcircuits with small $p_j$, such that these subcircuits explain most of the variance and are consistent across sessions and subjects.
Also, how would training times vary with large $p_j$ values such that $p \\approx N$? Overall, I think the scalability of such a model is an important aspect since modern neural recordings can include hundreds of neurons from one region, in which case max($p_j$) = 7 can limit the interpretability of the identified subcircuits. \n- In simulations (Fig. 2F), the authors show that the single-session model underperforms CREIMBO by a large performance gap. This can be caused by a small number of trials in each simulated session and short trials, but I could not find these details in the text (if it is in Fig. 11, I think it requires explanations in the caption). If this is indeed the case, I wonder how single-session models' performance would increase with longer sessions. \n- I think the biggest contribution of CREIMBO is its multisession modeling over other approaches like mDLAG. However, modern recording sessions can have hundreds of trials of data, which can be sufficient to train models to understand multiregional dynamics. Therefore, I think it would be nice to see how their model compares to mDLAG even if it operates on a single-session basis. Based on Fig. 22, for the real dataset considered in this study, using multiple sessions in modeling seems important, and a comparison to mDLAG would highlight the importance of multisession modeling. Even though mDLAG does not learn dynamic matrices for temporal evolution, its learned readout matrices and lag parameters would still indicate interregional interactions, and I wonder if interregional interactions learned by mDLAG would be as poor as 'Session # (SPARSE)' in Fig. 22. Also, did the authors compare their model to SLDS variants in real data as done for simulations? Overall, I think this work would benefit from more baseline comparisons to existing approaches in real data."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a framework for uncovering latent multi-regional neural sub-circuits by leveraging the richness and variability of multi-session neural data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024creimbo,\ntitle={{CREIMBO}: Cross-Regional Ensemble Interactions in Multi-view Brain Observations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=28abpUEICJ},\nnote={under review}\n}" }, "abstract": { "value": "Modern recordings of neural activity provide diverse observations of neurons across brain areas, behavioral conditions, and subjects; presenting an exciting opportunity to reveal the fundamentals of brain-wide dynamics. Current analysis methods, however, often fail to fully harness the richness of such data, as they provide either uninterpretable representations (e.g., via deep networks) or oversimplify models (e.g., by assuming stationary dynamics or analyzing each session independently). Here, instead of regarding asynchronous neural recordings that lack alignment in neural identity or brain areas as a limitation, we leverage these diverse views into the brain to learn a unified model of neural dynamics. Specifically, we assume that brain activity is driven by multiple hidden global sub-circuits. These sub-circuits represent global basis interactions between neural ensembles---functional groups of neurons---such that the time-varying decomposition of these sub-circuits defines how the ensembles' interactions evolve over time non-stationarily and non-linearly.\nWe discover the neural ensembles underlying non-simultaneous observations, along with their non-stationary evolving interactions, with our new model, **CREIMBO** (Cross-Regional Ensemble Interactions in Multi-view Brain Observations). 
CREIMBO identifies the hidden composition of per-session neural ensembles through novel graph-driven dictionary learning and models the ensemble dynamics on a low-dimensional manifold spanned by a sparse time-varying composition of the global sub-circuits. Thus, CREIMBO disentangles overlapping temporal neural processes while preserving interpretability due to the use of a shared underlying sub-circuit basis. Moreover, CREIMBO distinguishes session-specific computations from global (session-invariant) ones by identifying session covariates and variations in sub-circuit activations. We demonstrate CREIMBO's ability to recover true components in synthetic data, and uncover meaningful brain dynamics in human high-density electrode recordings---capturing cross-subject neural mechanisms as well as inter- vs. intra-region dynamical motifs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "computational neuroscience", "multi-regional brain interactions", "sparsity", "cross-session variability", "dynamical systems modeling", "neural dynamics", "non-simultaneous neural recordings" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/223a42ca12add4f0fedc7a8476550ac941614a00.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/867767addb71089d7402d432375a4ca0b5d64609.zip" }, "title": { "value": "CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
28oMPC5bcE
UNComp: Uncertainty-Aware Long-context Compressor for Efficient Large Language Model Inference
main
Active
KV Cache;GQA;Matrix entropy;Uncertainty;Efficient Inference
foundation or frontier models, including LLMs
3;5;5;6
4;3;4;5
2;3;2;3
2;2;2;2
2;2;2;3
4.75
4
2.5
2
2.25
0.324443
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- **Q1**: Do you have any analysis of the dynamic sparsity ratio compared with PyramidKV?\n- **Q2**: Do you have any analysis of the dynamic approximated window size across different layers and heads?\n- **Q3**: Do you have results for other long-context benchmarks and longer context windows, such as RULER[1] and InfiniteBench[2]?\n- **Q4**: Typo corrections needed for quotation marks, e.g., #390, #511, #713-715. And incorrect references, e.g., in Figure 3's legend, \"Sec 4.2\" might be \"Sec 3.2\" (#294, #298)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper focuses on a practical and relevant topic.\n- The proposed matrix entropy-based method with dynamically allocated sparsity is intuitive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the high latency and memory cost associated with long-context LLM inference by proposing UNComp, a training-free method that combines model compression and KV cache compression with a matrix entropy-based, dynamically allocated compression ratio. 
Specifically, the approach involves identifying similar layers and heads through offline search for compression, while using SnapKV with a dynamic KV cache compression ratio and an approximated dynamic window size during inference. The paper tests the method on the LongBench and NIAH benchmarks across four LLMs (Llama-2-7B/13B, Llama-3-8B, Mistral-7B-v0.1). Results indicate that UNComp offers slight improvements over baselines such as SnapKV, PyramidKV, and CHAI at the same compression ratio, although performance loss occurs when applying model compression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed approach is relatively straightforward and could be viewed as a combination of existing methods. For instance, the model compression component can function independently, yet no necessary baselines are provided for comparison in this area.\n2. The paper lacks sufficient ablation studies and analysis to demonstrate the contribution of each module in the proposed method. Specifically:\n - The improvement over PyramidKV appears to mainly derive from the dynamic approximated window size selection based on matrix entropy. However, there is no ablation study examining the effect of applying dynamic approximated window sizes to PyramidKV, or the performance impact of applying PyramidKV’s dynamic sparse ratio within UNComp. A comparison of dynamic sparsity ratios between this method and PyramidKV is also missing.\n - There is no analysis of the dynamic approximated window size across different layers and heads.\n3. The experiments are limited to LongBench and NIAH with a maximum context length of 32k, with no results for other state-of-the-art long-context benchmarks or longer contexts, such as RULER [1] and InfiniteBench [2].\n\n[1] RULER: What’s the Real Context Size of Your Long-Context Language Models? \n[2] InfiniteBench: Extending Long Context Evaluation Beyond 100K Tokens, ACL 2024."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper introduces a matrix entropy metric to quantify the amount of information in each layer across the token sequence, which is then effectively applied for compression.\n- Using the metric at both the layer and head levels, the authors propose customized inter-layer and inter-head compression strategies, allowing for a more targeted approach to model compression.\n- The method undergoes extensive evaluation on diverse benchmarks, consistently delivering superior performance at comparable compression ratios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces UNComp, an uncertainty-aware compression method designed to address memory and computational challenges associated with large language models (LLMs) during long-context inference. UNComp uses matrix entropy to estimate model uncertainty, applying selective compression across layers and attention heads based on these uncertainty levels. This approach preserves crucial information while enhancing efficiency in both memory and computational requirements."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* While Figure 3 aims to illustrate the overall workflow of the proposed method, it presents too much information at once, which makes it difficult to follow. One suggestion to improve readability is to break it down into subfigures or add step-by-step numbering to guide the reader through each part of the process. This adjustment would make the method’s workflow clearer and easier to understand.\n* An essential aspect of evaluating compression methods is understanding the trade-off between accuracy and throughput (or latency). However, this paper separates these metrics: Table 1 presents only accuracy, while Table 3 focuses solely on latency, making it challenging to assess the accuracy-throughput balance across different methods at a glance. Adding a combined table or figure that displays both accuracy and throughput would better support comparisons of this trade-off.\n* The paper primarily addresses end-to-end accuracy and latency but lacks an analysis of the compression ratio at each layer or head level within a single model (e.g., Llama3-8B-Instruct). Including this breakdown would provide greater insight into the internal dynamics and behavior of the model when applying the proposed method.\n* Although the authors claim that the proposed method achieves faster performance than CHAI despite a lower compression ratio, the reasons for this improvement are not sufficiently explained. Offering more details on which specific aspects of the method contribute to greater hardware efficiency and speed, beyond just compression ratio, would make this claim more convincing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. This method has many hyperparameters; how did you select them?\n2. If different heads retain a different number of tokens, does it affect parallel computation? If padding is used, how can true acceleration be achieved?\n3. Why does a deeper layer necessarily retain fewer tokens? From the picture, it appears that the effective rank may fluctuate." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The writing is clear and easy to follow.\n2. The source code is provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents UNComp, an innovative uncertainty-aware compression scheme designed to enhance the efficiency of large language models (LLMs) during long-context inference." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The number of groups seems to have little impact on performance, and sometimes fewer groups even yield better results. So why the complex design? However, if a uniform compression rate is applied, it feels like the paper doesn't contribute anything new.\n2. 
Different layers have varying levels of attention to tokens, so \"using the attention scores of the current layer to predict the tokens to be evicted in the next layer\" may pose significant issues.\n3. Lack of some baselines: streamingLLM [1], Quest [2], doublesparse [3]\n\n[1] Efficient Streaming Language Models with Attention Sinks https://arxiv.org/abs/2309.17453\n[2] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference https://arxiv.org/abs/2406.10774\n[3] Post-Training Sparse Attention with Double Sparsity https://arxiv.org/abs/2408.07092" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The simplification of the calculation of the importance score may be a major future direction." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The concept of matrix entropy and effective rank is novel and useful for determining the token redundancy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper finds that:\n1. higher-layer tokens gather more information, and a small number of tokens can represent the entire sequence\n2. For heads on the same layer, those with a higher effective rank should evict fewer tokens because this head is more informative\n3. 
Tokens of the same head in different layers gradually share information as the layers deepen, while tokens of different heads do not share information as the layers deepen. \n\nTherefore, based on the matrix entropy and effective rank, the KV cache and hidden states are compressed with a training-free method, which achieves a compression rate of 4.74%, with a throughput increase of 6.4× and a 1.4× inference speedup in a single batch, incurring only a 1.41% performance loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The compression is based on the calculation of attention scores and related accumulation, which introduces additional online cost." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024uncomp,\ntitle={{UNC}omp: Uncertainty-Aware Long-context Compressor for Efficient Large Language Model Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=28oMPC5bcE},\nnote={under review}\n}" }, "abstract": { "value": "Deploying large language models (LLMs) is challenging due to their high memory and computational demands, especially during long-context inference. While key-value (KV) caching accelerates inference by reusing previously computed keys and values, it also introduces significant memory overhead. Existing KV cache compression methods—such as eviction and merging—typically compress the KV cache after it is generated and overlook the eviction of hidden states, failing to improve the speed of the prefilling stage. Additionally, applying a uniform compression rate across different attention heads can harm crucial retrieval heads in needle-in-a-haystack tasks due to excessive compression. 
In this paper, we propose UNComp, an uncertainty-aware compression scheme that leverages matrix entropy to estimate model uncertainty across layers and heads at the token sequence level. By grouping layers and heads based on their uncertainty, UNComp adaptively compresses both the hidden states and the KV cache. Our method achieves a 1.6x speedup in the prefilling stage and reduces the KV cache to 4.74% of its original size, resulting in a 6.4x increase in throughput and a 1.4x speedup in inference with only a 1.41% performance loss. Remarkably, in needle-in-a-haystack tasks, UNComp outperforms the full-size KV cache even when compressed to 9.38% of its original size. Our approach offers an efficient, training-free Grouped-Query Attention paradigm that can be seamlessly integrated into existing KV cache schemes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "KV Cache", "GQA", "Matrix entropy", "Uncertainty", "Efficient Inference" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/22fe44bc81569da5032668e2194c25808b322186.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/958b8ed6cf78ac0777a1758b467cfad7be6a5781.zip" }, "title": { "value": "UNComp: Uncertainty-Aware Long-context Compressor for Efficient Large Language Model Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
28qOQwjuma
Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?
main
Active
LLMs;Hypergraph;Benchmark
datasets and benchmarks
3;5;8
4;4;5
2;2;4
1;3;3
2;3;3
5.333333
4.333333
2.666667
2.333333
2.666667
0.917663
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How do the prompting techniques generalize to other complex data structures beyond hypergraphs?\nCould the authors elaborate on the potential scalability issues of the prompting techniques with increasingly large and complex hypergraphs?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Originality: The paper proposes a new benchmark and prompting techniques tailored for hypergraphs, addressing a gap in the assessment of LLMs' capabilities.\nQuality: The benchmark is comprehensive, covering a wide range of tasks and hypergraph types, which strengthens the validity of the findings.\nClarity: The paper is well-organized, with clear explanations of the hypergraph languages and prompting frameworks.\nSignificance: The work is significant as it pushes the boundaries of LLMs' understanding of complex data structures, which has implications for various real-world applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces LLM4Hypergraph, a benchmark designed to evaluate large language models' (LLMs) understanding of hypergraphs, which can capture complex, multi-way relationships beyond pairwise correlations found in traditional graphs. 
The benchmark includes 21,500 problems across low-order, high-order, and isomorphism tasks using both synthetic and real-world hypergraphs. The study evaluates six prominent LLMs and introduces novel prompting techniques to enhance LLMs' performance on hypergraph tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper could benefit from a deeper analysis of the limitations of the current LLMs in handling hypergraphs, beyond performance metrics.\nWhile the benchmark is comprehensive, it may lack diversity in terms of the types of real-world hypergraphs used, which could affect the generalizability of the findings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why is prompting a promising research direction for hypergraph understanding in light of other techniques such as function calling?\n2. What are the empirical graphs used in the experiments? What are the selection criteria?\n3. Some tasks involve computations whose answers are numbers. How is the accuracy computed for these tasks? Is it an exact match? Or does it allow some error under a certain threshold?\n4. Do BAG and CoT outperform beyond statistical variations attributed to the variations of individual graph data and stochastic behaviors of LLMs?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is easy to read and the experiments are comprehensive and thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provided a new benchmark to evaluate the LLM's ability to understand hypergraphs and developed a new prompting framework to improve the hypergraph comprehension. The prompting framework demonstrated that CoT and BAG, adapted to hypergraphs, can improve the LLM's performance on hypergraph tasks, especially for high-order tasks such as Vertex Set Connection Checks and Vertex-Set--in-Hypergraph Checks using synthetic hypergraphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Main arguments**:\n1. The paper adapts existing benchmarks and prompting techniques for hypergraphs. While the results offer some insights into the extent to which LLMs understand hypergraphs, they largely mirror findings for simple graphs---specifically, that CoT and BAG can enhance LLM performance. The only notable point is that using suitable language to describe hypergraphs can aid LLM comprehension, which is novel but trivial.\nGiven that the proposed techniques are naive adaptations of existing techniques and new insights specific to hypergraphs are not found, the contribution of the paper is incremental and not significant.\n2. The paper lays out a main motivation by the underexploration of (i) investigating the LLM's ability to understand hypergraphs and (ii) developing prompting framework for hypergraph understanding and argue that they are promising research directions. This is not a strong motivation, i.e., \"underexploration\" alone does not justify the promising research directions. 
A more specific question is: why is prompting a promising research direction for hypergraph understanding in light of other techniques such as function calling?\n3. Unsupported claim 1: In the abstract, ``our specialized prompting framework incorporates seven hypergraph languages and introduces two novel techniques, Hyper-BAG and Hyper-COT, which enhance high-order reasoning and achieve an average 4% (up to 9%) performance improvement on structure classification tasks.'' This is not sufficiently supported by the empirical results. The performance improvement for Hyper-COT is 4\\% on average. However, for the Hyper-BAG, it is 2\\% for low-order hypergraphs and 2.8\\% for high-order hypergraphs.\n\n**Minor arguments**:\n1. Given the stochastic nature of the LLMs and the graph data, it is crucial to report the variation of the results across different runs (e.g., confidence intervals, standard deviations), given the performance gain of the proposed prompting techniques (Hyper-BAG and Hyper-COT) is slim.\n2. Unsupported claim 2: The paper claimed in the supporting information (B.4) that the benchmark represents the first instance that includes isomorphism checks. This is not precise. Isomorphism checks are a special case of the Maximum Common Subgraph (MCS) problem, which is included in the existing benchmark cited in the paper (GraphArena Tang et al. (2024)). The authors used \"in this domain\" to limit the scope of their claim, and it is crucial to spell out the \"domain\" (e.g., general graphs, or hypergraphs specifically) to be more precise.\n3. The paper did not provide descriptions of the real-world graphs used in the experiments and their selection criteria." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the performance of LLMs depend on the hypergraph domains (e.g., emails, coauthorship)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper proposes the first benchmark for evaluating LLMs on hypergraphs.\n- The authors thoroughly address questions about hypergraphs.\n- The problems are well-structured and clearly categorized according to their objectives.\n- The code is released for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LLM4Hypergraph, the first benchmark aimed at evaluating the ability of LLMs to understand hypergraph data. The authors design a series of tasks of varying difficulty levels and evaluate six different LLMs. Then, they identify their strengths and weaknesses. While this work represents a first step and provides a comprehensive study, there are several areas where improvement is needed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivations for this research are not sufficiently discussed. 
Why is it important to enable LLMs to understand hypergraph structures? Are there potential practical use cases? Are there any motivations beyond the fact that similar research has been done with graphs?\n- The datasets used in the study are not comprehensive. To be specific:\n - The definition of \"hypergraph size\" is unclear. Is it referring to the number of nodes, the number of hyperedges, or the sum of hyperedge sizes?\n - The specific sizes of the hypergraphs (both real-world and synthetic) are not mentioned in the main content. How large are the synthetic hypergraphs used for evaluation?\n - According to the appendix, even the so-called \"large-scale hypergraphs\" only contain 15 to 20 vertices, which is too small to meaningfully capture higher-order structures typically expected in hypergraphs.\n - The synthetic hypergraphs are not sufficiently representative. There are other synthetic hypergraph models (e.g., configuration models) available.\n - It is unclear how the random walk approach for sampling sub-hypergraphs from real-world hypergraphs (Appendix A.2) ensures that the sampled hypergraphs \"retain the intricate and authentic correlations inherent in the original data.\"\n- The definition of task \"difficulty\" is unclear.\n- The authors may consider discussing/citing the recent work \"When LLM Meets Hypergraph: A Sociological Analysis on Personality via Online Social Networks\" (CIKM 2024) in the related work.\n\n**In summary**, this paper makes a valuable contribution to LLMs and hypergraph analysis. However, the benchmark datasets lack comprehensiveness and have room to consider additional synthetic hypergraph generators. Also, the paper lacks detailed statistics on real-world hypergraphs. Scalability is also a concern; if large-scale hypergraph handling poses challenges for LLMs, these limitations should be clearly discussed." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024beyond,\ntitle={Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=28qOQwjuma},\nnote={under review}\n}" }, "abstract": { "value": "Existing benchmarks like NLGraph and GraphQA evaluate LLMs on graphs by focusing mainly on pairwise relationships, overlooking the high-order correlations found in real-world data. Hypergraphs, which can model complex beyond-pairwise relationships, offer a more robust framework but are still underexplored in the context of LLMs. To address this gap, we introduce LLM4Hypergraph, the first comprehensive benchmark comprising 21,500 problems across eight low-order, five high-order, and two isomorphism tasks, utilizing both synthetic and real-world hypergraphs from citation networks and protein structures. We evaluate six prominent LLMs, including GPT-4o, demonstrating our benchmark’s effectiveness in identifying model strengths and weaknesses. Our specialized prompt- ing framework incorporates seven hypergraph languages and introduces two novel techniques, Hyper-BAG and Hyper-COT, which enhance high-order reasoning and achieve an average 4% (up to 9%) performance improvement on structure classification tasks. This work establishes a foundational testbed for integrating hypergraph computational capabilities into LLMs, advancing their comprehension." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "Hypergraph", "Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2a703467541d1cda061b2bd22caa36eb2a606536.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9ade7404203f8ba5113bbb35f0beeb8efdb09d67.zip" }, "title": { "value": "Beyond Graphs: Can Large Language Models Comprehend Hypergraphs?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
293V3bJbmE
HELMET: How to Evaluate Long-context Models Effectively and Thoroughly
main
Active
long-context language models;benchmarking
datasets and benchmarks
6;6;6;6
4;4;3;5
3;3;3;3
3;3;3;4
3;3;3;4
6
4
3
3.25
3.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1: Figure 2 is missing? \n\n2: What is the value for 'depth' in Figure 11? From top to the bottom, is the key information located at the beginning of the context to the tail of the context? \n\n3: Gemma series have a unique attention head dimension of 256 rather than 128. It might have interesting impact on the long context things. It would be better to have results with Gemma series as the tested models." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1: The benchmark is comprehensive. It covers most real-world long context use cases.\n\n2: The investigation of performance correlation among all task types are insightful. It provides a new perspective to understand LLMs' long context ability.\n\n3: The improvement to prompting strategy and evaluation method effectively stabilizes the evaluation results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper constructs a comprehensive benchmark to test LLMs' long context abilities. It covers various types of tasks such as RAG, ICL, LongQA, Retrieval, Re-rank and so on. The used prompts and evaluation metrics and carefully designed to ensure both IFT models and base models can give predictions. 
This benchmark also evaluates most commonly recognized LLMs and accordingly provides insights about LLMs' long context performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1: The so-called \"expected\" ranking of LLMs is a bit subjective. \n\n2: Lack of deep analysis of interesting results, such as why the json-kv task has higher correlation with re-rank than RAG or LongQA.\n\n3: The RoPE scaling settings are not suitable for 128k/64k testing. With ABF, usually, the scaling factor should be at least 2x the target extension ratio. With 8k context, Llama3 should use at least a scaling factor of 32 for 128k testing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the weaknesses in the previous section." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\t**Diverse Task Design**: HELMET includes seven categories of tasks, enhancing the representativeness of LCLMs in real applications.\n\n2.\t**Support for Ultra-Long Inputs**: This benchmark accommodates input lengths over 128k tokens, making it suitable for evaluating the long-context capabilities of frontier models.\n\n3.\t**Reliable Model-Based Evaluation**: HELMET’s evaluation metrics reflect human judgment better than traditional n-gram matching, offering more reliable model ranking.\n\n4.\t**Compatibility with Base Models**: The benchmark allows evaluations of base models that haven’t undergone instruction fine-tuning, broadening LCLM applicability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new benchmark called HELMET, which is designed to comprehensively evaluate the performance of long-context language models (LCLMs). Current LCLM evaluations largely rely on synthetic tasks, like Needle-in-a-Haystack (NIAH), or arbitrary subsets of some datasets. However, these methods present issues such as high noise, insufficient coverage of downstream applications, inadequate dataset lengths, and unreliable metrics. HELMET aims to address these shortcomings by expanding task diversity across seven application-centric categories (including long-document QA, citation-based generation, etc.), supporting controllable input lengths up to 128k tokens, and implementing model-based evaluations for more reliable results. 
Through testing 51 LCLMs, this study finds that synthetic tasks are poor predictors of downstream performance, open-source models fall behind closed-source models on complex long-context tasks, and there is low correlation among task categories, highlighting the need for multi-dimensional LCLM evaluation ." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\t**High Complexity**: With multiple tasks and model comparisons involved, HELMET’s setup and evaluation process is intricate and demands considerable effort from researchers.\n\n2.\t**Low Correlation Among Some Tasks**: The low correlation between different tasks may make it challenging to assess a model’s overall long-context handling ability if it performs exceptionally in only certain tasks.\n\n1. **High Resource Consumption**: Running the full suite of HELMET tasks is time-intensive. It would be beneficial to identify a few key subtasks that can maintain consistency with the results of full testing, allowing for time-saving evaluations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In your analysis, you conclude that existing benchmarks like RULER and ∞BENCH are unreliable because larger models sometimes perform worse than smaller ones, which contradicts human expectations. 
Could you elaborate on why you attribute these unexpected results to benchmark unreliability rather than potential issues with the larger models themselves? Did you investigate alternative explanations for the performance discrepancies?\n2. Do you have any results from human evaluation that validates the model-based evaluation metrics? What were the human-model agreement rates? Were there any notable discrepancies between the human judgments and model-based evaluations?\n3. Other than RAG, which types of tasks in HELMET are compatible with the base model without instruction following capabilities?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper attempts to provide a standardized, holistic benchmark for LCLMs, whose adoption can potentially improve consistency and reliability in model evaluation and comparison.\n* The evaluation is extensive -- 51 LCLMs across multiple dimensions, tasks, input lengths, and model types (open-, closed-source)\n* The paper provide some valuable findings and insights into the performance of LCLMs, e.g. the limitations of synthetic tasks as predictors of real-world performance and where the performance gaps are between open- and closed-source models. This can guide future research and model development." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents HELMET, a benchmark for evaluating long-context language models (LCLMs) that try to address limitations in existing evaluations, which often rely on synthetic tasks lacking real-world applicability. HELMET includes 7 diverse, application-centric tasks and supports input lengths up to 128k tokens. 
Through evaluating 51 LCLMs, the authors demonstrate that synthetic tasks are poor predictors of downstream performance, different task categories exhibit distinct trends, and open-source models significantly lag behind closed-source models on complex tasks requiring reasoning over long contexts. They advocate for holistic evaluation across diverse tasks to gain a comprehensive understanding of LCLM capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The authors observe that on existing benchmarks like RULER and ∞BENCH, smaller models (e.g., Llama-8B) sometimes outperform larger ones (e.g., Gemini Pro, Llama-70B), and they conclude that these benchmarks are unreliable because they do not reflect human expectations that larger models should perform better. This reasoning may be premature and somewhat biased. It's possible that the larger models genuinely underperform on these benchmarks due to specific issues, such as overfitting, architectural limitations, or difficulties in handling certain tasks. The benchmarks might be accurately capturing these performance discrepancies. Dismissing unexpected results as benchmark unreliability without thoroughly investigating the underlying causes undermines the validity of the authors' argument. More analysis considering both the possibility of model issues and benchmark limitations would strengthen the conclusions.\n* While the paper introduces model-based evaluation metrics using 4o to address the unreliability of traditional metrics like ROUGE, it provides limited details on how these metrics were validated against human judgments. Including more detailed results or analysis of human-model agreement would strengthen the validity of the evaluation methodology.\n* Although the paper critiques existing benchmarks, it could offer more in-depth analysis demonstrating how HELMET improves over them in practice. 
Figure 1 seems to be the only place where a direct comparison is shown. Conducting more direct comparisons of model rankings or performance differences on HELMET and existing benchmarks and providing concrete evidence of HELMET's advantages would strengthen the paper's arguments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How well does HELMET handle variations in domain-specific tasks, such as medical or financial documents?\n- Could open-source models trained on synthetic datasets achieve comparable results with additional tuning on HELMET's diverse tasks?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- HELMET covers diverse tasks such as retrieval-augmented generation, passage re-ranking, and long-document QA, providing a comprehensive test bed for evaluating the full capabilities of long-context models.\n- By introducing controllable length settings and using model-based metrics instead of n-gram matching, HELMET offers a better reflection of human judgments and real-world performance.\n- The authors evaluate 51 models, providing valuable insights into how different architectures and model sizes handle long-context tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes HELMET, a benchmark designed to evaluate long-context language models across seven application-focused categories, addressing issues such as inadequate dataset length, noisy evaluation metrics, and inconsistencies in current benchmarks. Through empirical evaluation on 51 models, the authors argue that HELMET offers better differentiation among models compared to traditional synthetic tasks and demonstrates the inadequacy of simple benchmarks in predicting real-world performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While HELMET’s application-oriented tasks are extensive, they may not fully capture long-context models’ capabilities in highly specific domains like legal or medical texts, limiting its applicability in niche areas.\n- The heavy reliance on closed models such as GPT-4 for comparison leaves open questions about the efficacy of HELMET in an entirely open-source setting, which may limit reproducibility for some researchers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024helmet,\ntitle={{HELMET}: How to Evaluate Long-context Models Effectively and Thoroughly},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=293V3bJbmE},\nnote={under review}\n}" }, "abstract": { "value": "Many benchmarks exist for evaluating long-context language models (LCLMs), but developers often rely on synthetic tasks like needle-in-a-haystack (NIAH) or arbitrarily selected subsets of datasets. It remains unclear whether these evaluations translate to the diverse downstream applications of LCLMs, and the inconsistency further complicates model comparison. 
We investigate the underlying reasons behind current practices and find that existing benchmarks often provide noisy signals due to low coverage of long-context applications, insufficient dataset lengths, unreliable metrics, and incompatibility with base models. In this work, we present HELMET (How to Evaluate Long-context Models Effectively and Thoroughly), a comprehensive benchmark encompassing seven diverse, application-centric categories. We also address many issues in previous benchmarks by adding controllable lengths up to 128k tokens, model-based evaluation for reliable metrics, and few-shot prompting in all tasks for evaluating base models. Consequently, we demonstrate that HELMET offers more reliable and distinct rankings of frontier LCLMs. Through a comprehensive study of 51 LCLMs, we find that (1) synthetic tasks like NIAH are not good predictors of downstream performance; (2) the diverse categories in HELMET exhibit distinct trends that do not correlate well with each other; and (3) while most LCLMs achieve perfect NIAH scores, open-source models significantly lag behind closed ones when the task requires full-context reasoning or following complex instructions---the gap widens with increased lengths. Finally, we recommend using our RAG tasks for fast model developments, as they are easy to run and more predictive of downstream applications than existing synthetic tasks; but ultimately, we advocate for a holistic evaluation across diverse tasks. We hope HELMET serves as a valuable resource for future long-context model development." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "long-context language models", "benchmarking" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/131ca2c1addef801223575e751b8ad41c31fb549.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "HELMET: How to Evaluate Long-context Models Effectively and Thoroughly" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
29JDZxRgPZ
EM-GANSim: Real-time and Accurate EM Simulation Using Conditional GANs for 3D Indoor Scenes
main
Active
Generative Adversarial Networks (GAN);Electromagnetic Propagation;Real-time Simulation;3D Indoor Environments
learning on time series and dynamical systems
3;5;6;8
3;2;2;3
3;2;3;4
3;3;3;3
2;2;3;2
5.5
2.5
3
3
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Proposing an exciting application for cGANs and generative models in simulating EM propagation\n- Providing a dataset of 64M simulated heatmaps with various indoor models\n- Adding physical inductive bias to the model so that the generations are physically plausible." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a generative framework for simulating Electro-Magnetic wave propagation, as a faster replacement for ray tracing approaches usually used in this application. Authors propose a method based on using cGAN and regularized by physical constrains to generate plausible propagation heatmaps given the structure of the scene. They show through experiments that although the performance is not on-par with Ray Tracing methods, this method allows for a faster simulation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Main:\n- The proposal to use random input to a GAN to avoid mode collapse is not very well-justified. The proposed set-up is very similar to the common conditional GANs which can easily have mode collapse. 
Further, when adding regularizations, such as the physical regularizations proposed, the risk for mode collapse is increased. The authors mention building the model from a simpler problem up to the target task, and this helps with fine-tuning and perhaps mode collapse. It would be great to have more experiments/analysis on what is the breaking point and why the model is stable in its final version.\n\nMinor:\n- What representation is used for the conditional geometry? A more thorough description of the modality in line 162 would be helpful. It is unclear how the 3D model is encoded and given to a GAN.\n- Figure 2 should be labeled with yours vs. baseline so it's easier to read. The interpretation in the caption of what is a weakness vs. a strength of your method is not easily understandable from the heatmaps, and it would be great to highlight these regions visually." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you clarify how the method handles true 3D propagation versus 2D layout information? The current results only show 2D heatmaps. Could you provide vertical propagation results at different heights? How does the network architecture specifically process and maintain height information?\n- Please describe in detail how the 2K+ room models were created/sourced. What is the distribution of room types, sizes, and configurations in your dataset?
How do you ensure that the synthetic scenes are physically realistic? How are different materials modeled and validated?\n- How do you determine the weights (α, β, γ) in the physics loss function? What measures are taken to ensure training stability? How do you handle varying room sizes in the network?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents an interesting application of conditional GANs to EM simulation. While both GANs and EM simulation are established fields, their combination for real-time indoor propagation simulation represents a fresh approach to an important practical problem.\n- The method achieves notable acceleration (reported 5X speedup) compared to traditional ray tracing methods. If these results can be thoroughly validated, this could be valuable for real-time applications.\n- The attempt to incorporate electromagnetic principles through specialized loss terms (direct propagation, reflection, and diffraction) shows thoughtful consideration of the physics involved, though the theoretical guarantees need more examination.\n- While the dataset generation process needs better documentation, the collection of indoor scenes and EM simulation results could be useful for future research in this direction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents EM-GANSim, a learning-based approach for real-time electromagnetic (EM) propagation simulation in indoor environments. The core technical contribution is a modified conditional GAN architecture that incorporates both geometric information and transmitter location to predict power distribution heatmaps while adhering to electromagnetic propagation principles.
The authors propose a physically-inspired learning framework that integrates direct propagation, reflection, and diffraction effects through specialized loss terms in the GAN's objective function.\n\nThe method claims to achieve comparable accuracy to traditional ray tracing-based simulators while offering significant speed improvements (reported as 5X faster). The authors evaluate their approach on 15 indoor scenes and provide ablation studies examining the impact of noise and physical constraints. They also introduce a dataset comprising over 2,000 indoor scene models with corresponding EM simulation heatmaps.\n\nWhile I am not an expert in electromagnetic propagation simulation and wireless communications, the paper appears to address an important practical challenge in real-time EM simulation. However, there is some ambiguity in how the method handles true 3D environments versus 2D representations, and the room generation and data preparation processes could benefit from clearer documentation. The paper presents an interesting application of deep learning to physics-based simulation, though both its theoretical foundations and physical accuracy need closer examination." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A weakness is the unclear treatment of \"3D\" simulation. While the paper claims to handle \"3D indoor environments,\" the evidence presented is primarily 2D heatmaps. There's no clear explanation of how height information is processed in the network, no visualization of vertical propagation effects, and no analysis of height-dependent signal variations. Table 2 only specifies area (square meter) without height information. The paper needs to either demonstrate true 3D capability or clarify that it's a 2.5D approach.\n- Critical details about the \"2K+ models and 64M heatmaps\" are missing. The paper doesn't explain how these indoor scenes were generated, validated, or processed. 
Without this information, readers cannot assess data quality or reproduce the results.\n- The method description lacks important specifics. The GAN architecture details, training process, and hyperparameter selection are not fully described. The physics-based loss weights lack justification, and there's minimal discussion of training stability.\n- The experimental validation relies mainly on MSE comparisons. The performance measurements lack important context - hardware specifications, memory requirements, and preprocessing costs are not reported. The gap between training (3 dBm²) and testing (8.5 dBm²) MSE also needs explanation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The organization of this paper is pretty \n1. Why not show the quantitative results in the manuscript?\n2. Why report the avg. MSE?\n3. What is the method name for 'the traditional RT approach'? Please make this clear and cite it." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The employment of cGAN itself is novel, though it seems to me that little significant change is made to the cGAN architecture and it's more like a domain adaptation. The equations in the paper are quite solid, showing the authors' good understanding.
The paper has strong experiments across 15 scenes with clear performance metrics showing a 5X speedup. The ablation studies are well-structured, and the thorough comparisons with established methods demonstrate a robust methodology.\nThe proposed method achieves an impressive real-time speed while maintaining robustness, which suggests the method is of good quality.\nThe authors claim to release the code and data to benefit the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a novel approach to real-time electromagnetic (EM) propagation simulation in complex 3D indoor environments, utilizing a physics-inspired conditional generative adversarial network (cGAN) model. This research is positioned as the first real-time algorithm for EM simulation in these environments, and it holds potential value for applications such as 5G network planning, wireless communication system design, and dynamic indoor environments requiring rapid signal strength calculations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks comparisons with state-of-the-art methods. It only presents scores in comparison to DCEM, and the detailed quantitative results are relegated to supplementary materials rather than the main manuscript. I've calculated the detailed score: GAN-based (dBm²): 8.69; DCEM (dBm²): 7.89.\nThis suggests that the GAN-based method performs slightly worse than the DCEM method introduced at VTC 2022. Given the recent methods proposed in the literature (Vaganova et al., 2023; Wang & Manocha, 2023; Haron et al., 2021; Gómez et al., 2023), incorporating more experimental results could lead to a fairer and more comprehensive evaluation.\n2. Though the proposed method is faster than traditional RT-based methods, it still takes 3-4 seconds to simulate one room.
This is far from what the authors frequently claimed to contribute to 'real-time data analysis.' If 'real-time' refers to per data point, then the traditional RT-based methods are real-time, too.\n3. The introduction could be strengthened by including a more detailed rationale for using cGAN in this context, specifically on how its features address the problem. Although section 5 provides some of this analysis, highlighting it earlier would improve the logical flow.\n4. The qualitative comparison is not sufficient, and the conclusion \"We see with GAN-based methods that the heatmaps show less MSE in general captures and exhibit more pronounced areas of both high and low signal strength, suggesting a finer granularity in the simulation of received powers.\" is ambiguous. How can the simulation be judged by \"more pronounced areas of both high and low signal strength\"? Besides, the mean MSE is higher!\n\nMinor:\n1. The plots and tables are not well designed, which makes them hard to understand. E.g., Figure 1 fails to demonstrate the overall method clearly. There is room to improve with regard to color design (Figure 3) and table format. In Figure 2, labels should be inserted into the plot instead of writing \"first row: xxx, second row: xxx...\" in the caption.\n2. There are too many {enumerate} and {itemize} environments, which is not common in papers. They take a lot of space and make the paper look loose.\n3. Minor inconsistencies in grammar and terminology, such as a misplaced comma and inconsistent use of terms like \"ray tracing\" versus \"ray-tracing,\" should be standardized.\n4. Table 3 is hard to understand. Why is \"Generation time per data point (seconds)\" a column?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Have the authors tested the Wasserstein loss for the GAN? If yes, were the results worse or better? What applications do the authors intend to test the GAN on? Is it possible to verify the trained model on tracking tasks? In the results section, the different materials that objects are made of are mentioned. Did the authors analyze the dependence of GAN performance on material? What kind of sensors were used for dataset collection? How do the authors measure the sensors’ accuracy? What filters were used to process the raw data of the sensors?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "In my opinion, applying the GAN-based model to electromagnetic signal processing seems to be the strongest point of the paper: traditional approaches to this task were studied and comprehensively modified by many authors. For this reason, I find the GAN-based approach to be original and on good scientific ground. Moreover, the presented method provides a robust result and achieves a 5X speedup compared with other pipelines, which is important in real-time applications.
The Generator and Discriminator training process is described carefully.\n\nDespite the weaknesses, I recommend accepting this work because the authors applied an interesting and difficult architecture to a complicated task, and numerical results were provided. The main reason to accept this paper is the strong description of the GAN training, which stresses that the authors comprehensively researched GAN opportunities in the EM propagation domain." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A new algorithm for real-time electromagnetic signal processing is presented in this paper. The authors trained a GAN-based neural network to predict electromagnetic power distributions in 3D indoor environments. Indoor electromagnetic signal processing is a fundamental problem of indoor tracking, so the work is valuable and timely." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, several weaknesses were identified despite the strong points of the paper. The first question concerns the dataset that the authors present in the paper. It seems that this dataset was used for training. This dataset is new, so it is expected that there will be a pipeline of dataset generation. However, there is no description of how this dataset was collected. This information would allow us to judge how accurate the GT labels are. The absence of the source code and the developed dataset deprives us of the opportunity to test the pipeline." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel GAN-based approach for real-time 3D indoor electromagnetic simulation, drastically reducing computation time while maintaining accuracy."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024emgansim,\ntitle={{EM}-{GANS}im: Real-time and Accurate {EM} Simulation Using Conditional {GAN}s for 3D Indoor Scenes},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=29JDZxRgPZ},\nnote={under review}\n}" }, "abstract": { "value": "We present a novel machine-learning (ML) approach (EM-GANSim) for real-time electromagnetic (EM) propagation that is used for wireless communication simulation in 3D indoor environments. Our approach uses a modified conditional Generative Adversarial Network (GAN) that incorporates encoded geometry and transmitter location while adhering to the electromagnetic propagation theory. The overall physically-inspired learning is able to predict the power distribution in 3D scenes, which is represented using heatmaps. Our overall accuracy is comparable to ray tracing-based EM simulation, as evidenced by lower mean squared error values. Furthermore, our GAN-based method drastically reduces the computation time, achieving a 5X speedup on complex benchmarks. In practice, it can compute the signal strength in a few milliseconds on any location in 3D indoor environments. We also present a large dataset of 3D models and EM ray tracing-simulated heatmaps. To the best of our knowledge, EM-GANSim is the first real-time algorithm for EM simulation in complex 3D indoor environments. We plan to release the code and the dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Generative Adversarial Networks (GAN)", "Electromagnetic Propagation", "Real-time Simulation", "3D Indoor Environments" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/14594ce9613897746e50c37f5ce7df92ad457b46.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4ba4ac8ffde4dcec176ff3b6f9fa566e06700166.zip" }, "title": { "value": "EM-GANSim: Real-time and Accurate EM Simulation Using Conditional GANs for 3D Indoor Scenes" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
29LC48aY3U
Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation
main
Active
Backdoor Attacks;Large Language Models;Knowledge Distillation
alignment, fairness, safety, privacy, and societal considerations
3;5;6;8
5;4;3;3
2;2;3;3
2;2;3;3
2;2;3;3
5.5
3.75
2.5
2.5
2.5
-0.919866
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper addresses injecting clean-label backdoors into LLMs under the assumption that the attacker has full control over the training process, making this setup counterintuitive and somewhat confusing. The main advantage of clean-label backdoor attacks is their stealthiness, as they can bypass human inspection. Most existing clean-label backdoor attacks operate under a data poisoning assumption, where the attacker only provides poisoned data without controlling the training process [1, 2, 3, 4]. In this scenario, model trainers may inspect the received data before using it for training. Due to the label consistency in clean-label backdoor attacks, simple human inspection cannot detect the poisoned samples, which makes them stealthy. However, in a training control setup [5, 6], the stealth advantage of a clean-label backdoor is irrelevant because the attacker will only release the poisoned model, without exposing the poisoned training data. This means there is no data inspector in such a scenario, and attackers can freely manipulate data to ensure successful backdoor injection while maintaining benign performance. Therefore, the motivation for studying clean-label backdoors in a training control setup is unclear.\n\n2. The paper claims that PEFT algorithms struggle to successfully inject backdoors into LLMs. 
According to Table I, even dirty-label attacks (e.g., BadNets) using PEFT only achieve a 15.51% ASR on the SST-2 dataset. This observation contradicts recent literature on LLM backdoors [7, 8, 9]. For example, [7] reports successful backdoor injection into LLMs using QLoRA, and [8] proposes a fine-tuning method similar to PEFT that achieves effective backdoor injection. Can the authors clarify the reasons behind these contradictory findings?\n\n3. One of the main advantages of W2SAttack is its ability to inject backdoors into models that cannot be trained using full-parameter fine-tuning due to computational constraints. Therefore, it would strengthen the paper if the authors included results from applying W2SAttack to larger open-source LLMs, such as Llama-2-70B or Mixtral-8x7B. This would further support the argument for the proposed attack.\n\n4. Another point of concern is that the paper focuses primarily on LLM discriminative tasks, such as sentiment classification, whereas LLMs are now predominantly used for generative tasks. Recent works have also explored backdoors in LLMs for generative tasks [10, 11]. It would be valuable if the authors extended their proposed attack to generative tasks to determine if the same observations hold in those contexts.\n\n---\n\nReference \n\n[1] Liu, Yunfei, et al. \"Reflection backdoor: A natural backdoor attack on deep neural networks.\" Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X 16. Springer International Publishing, 2020.\n\n[2] Barni, Mauro, Kassem Kallas, and Benedetta Tondi. \"A new backdoor attack in cnns by training set corruption without label poisoning.\" 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019.\n\n[3] Zeng, Yi, et al. \"Narcissus: A practical clean-label backdoor attack with limited information.\" Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 
2023.\n\n[4] Turner, Alexander, Dimitris Tsipras, and Aleksander Madry. \"Clean-label backdoor attacks.\" (2018).\n\n[5] Cheng, Siyuan, et al. \"Deep feature space trojan attack of neural networks by controlled detoxification.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 2. 2021.\n\n[6] Doan, Khoa, et al. \"Lira: Learnable, imperceptible and robust backdoor attacks.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\n\n[7] Huang, Hai, et al. \"Composite backdoor attacks against large language models.\" arXiv preprint arXiv:2310.07676 (2023).\n\n[8] Li, Yanzhou, et al. \"Badedit: Backdooring large language models by model editing.\" arXiv preprint arXiv:2403.13355(2024).\n\n[9] Li, Yige, et al. \"Backdoorllm: A comprehensive benchmark for backdoor attacks on large language models.\" arXiv preprint arXiv:2408.12798 (2024).\n\n[10] Rando, Javier, and Florian Tramèr. \"Universal jailbreak backdoors from poisoned human feedback.\" arXiv preprint arXiv:2311.14455 (2023).\n\n[11] Hubinger, Evan, et al. \"Sleeper agents: Training deceptive llms that persist through safety training.\" arXiv preprint arXiv:2401.05566 (2024)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The research topic is interesting and important to the community\n\n2. The idea is novel and intuitive.\n\n3. The paper is overall well-written and easy to follow" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces W2SAttack, a method for injecting clean-label backdoors into LLMs. The approach stems from the observation that successfully injecting clean-label backdoors during fine-tuning becomes challenging when using Parameter Efficient Fine-Tuning (PEFT) algorithms. 
The authors analyze the limitations of PEFT from the information theory perspective and propose Weak-to-Strong Attack (W2SAttack) to enhance attack effectiveness under PEFT. Inspired by teacher-student knowledge distillation, W2SAttack first injects backdoors into a smaller teacher model using full-parameter fine-tuning. It then transfers the backdoor knowledge to a larger student model through PEFT, incorporating feature alignment loss terms during the distillation process to support the backdoor learning. Evaluation results demonstrate that W2SAttack can effectively inject various types of backdoors into LLMs using PEFT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The threat model combining clean-label backdoor attacks with training control is counter-intuitive and lacks practical value.\n\n2. The observation that PEFT cannot successfully inject backdoors is inconsistent with findings in recent literature.\n\n3. The paper lacks evaluation on larger LLMs to demonstrate the scalability and effectiveness of the proposed method.\n\n4. The paper lacks evaluation on generative tasks, which are a major use case for LLMs today." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The definition of ASR(f(x^' )_peft) in Obj. 1 needs further clarification.\n\tWhat is the definition of Z_t? 
Additionally, the author needs to further explain why I(Z_t;Y) is related to backdoor features.\n\tI didn’t find the implementation details for Eq. 9 and 10, particularly for Eq. 9. Thus I have a concern about their correctness. Please provide more details.\n\tThe author said they use the clean-label backdoor attack. Why not use the poison-label backdoor attack? Is there any difference between those two attacks in your method? Please clarify. Besides, the author should provide the details of the attack, such as the target label, to address my concern.\n\tI wonder if continuously increasing the number of poisoned samples would improve the attack success rate in the PEFT setting.\n\tThe method of inserting triggers also affects the attack success rates. The author needs to further explain the implementation details of BadNet and InSent, as well as the SynAttack algorithm.\n\tThe caption for Fig. 3 should provide a detailed description of the motivation for each subfigure.\n\tWhat is the meaning of ‘Efficient-tuning’ in Tab. 5?\n\tAlso a concern about reproducibility: in Tab. 9, few details are provided in terms of defense. For example, which trigger did you use?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe article proposes a counter-intuitive but effective framework, that is, using small models as teachers and large models as students. This makes me think it's quite novel.\n2.\tThe writing is fluent, clear, and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a method called W2SAttack. The authors claim that (1) full-parameter fine-tuning for backdoor attacks is not feasible due to high VRAM usage and (2) PEFT such as LoRA leads to poor performance.\nTo address this problem, the authors proposed W2SAttack.
They first poison a smaller LLM via full-parameter fine-tuning, and then set it as the teacher model to distill the larger LLM, which is tuned with PEFT.\nThe results showed that the method can significantly reduce the computational cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some weaknesses can be found in the Questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "More Explanation:\n\tFigure 1's y-axis should have a label; I was confused about its unit and what it represents.\n\tCA and ASR should be clearly mentioned in the caption.\n\tDuring the process of poisoning the teacher model, the authors added an additional linear layer. Is this layer necessary? Equation 4 requires further modification. What are the impacts of the teacher model on backdoor attacks?\n\tThe expression of the attacker's Objective 1 indeed requires additional explanation. The authors have noted in their third stage pilot study that deploying effective backdoor attacks using the PEFT algorithm is challenging. However, Objective 1 suggests that ASR(f(x^' )_peft)≈ASR(f(x^' )_fpft), a statement that seems to contradict the earlier assessment, which requires further explanation.\nExperimental Section:\n\tIs the caption for Figure 4 correct? 
The authors discuss the impact of different trigger lengths on backdoor attacks in the experimental analysis section; therefore, this part needs to be revised.\n\tCompared to the BadNets backdoor attack, the backdoor attack algorithms based on InSent or SynAttack seem to achieve more desirable effects. Could the authors provide a more detailed analysis of the reasons for this?\n\tAlthough the W2SAttack algorithm can guide the model to learn backdoor features, it requires the design of an additional teacher model. Existing experiments have only analyzed the effectiveness of the backdoor attack, but lack necessary analyses of communication costs, such as the training costs induced by changes in updatable parameters, which are essential for assessing the feasibility of the algorithm." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tEnhancing the effectiveness of backdoor attacks targeting the PEFT algorithm is a worthwhile research problem.\n2.\tThe authors design an effective backdoor attack algorithm that saves computational resources compared to full-parameter fine-tuning.\n3.\tOverall, the presentation is clear, and the experiments are comprehensive. The details are clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The preliminary experiments in this paper discovered that PEFT, which updates only a small number of model parameters, can hardly implement backdoor attacks effectively. Based on these findings, the authors proposed a weak-to-strong backdoor attack algorithm targeting PEFT, named W2SAttack. They leverage a small-scale teacher model to facilitate the student model's learning of backdoor features, thereby enhancing the effectiveness of the backdoor attack."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some aspects are not clear, see the questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What is the difference between the full-parameter fine-tuning of a small model in knowledge distillation and the full-parameter fine-tuning of a backdoored small model claimed in this paper?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. PEFT in inheriting backdoors and learning backdoors using PEFT is a key research area targeting the security of LLMs.\n\n2. Extensive experiments proved the feasibility of the attack." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a backdoor attack from weak to strong based on feature alignment-enhanced feature distillation. Extensive experiments show the superior performance of W2SAttack targeting PEFT on classification tasks across four language models, four backdoor attack algorithms, and two different architectures of teacher models." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1: Motivation**\n- The authors claim that LLMs cannot learn the backdoor under PEFT, but as far as I know, a lot of work reveals the vulnerability of PEFT against LLMs, e.g., references [1-2]. In addition, when using LoRA (e.g., r=4) to implant a backdoor on NLU and NLG tasks, the ASR very easily reaches 100%. \n\n- Knowledge distillation to enhance backdoor learning, defend against backdoors, and transfer backdoors needs to be discussed in depth. Therefore, related work is a crucial part of the main body. This helps to understand that the work enhances backdoor learning in the form of distillation, and the final release is an E2E backdoored model.\n\n**2: Overclaiming and misleading statements**\n\n- The authors claim to be the first to study the effectiveness of the PEFT backdoor. In fact, there are many works in this field; see references [1-3].\n\n- When using Onion against W2SAttack, the results barely drop. However, Onion's effectiveness against word-level attacks can usually make the ASR drop to around 50% or lower.\n\n**3: Presentation**\n\n- In the Introduction section, the authors do not state that this is a clean-label backdoor attack, which may confuse the reader. \n\n- The manuscript lacks an explanation of the attacker's goals and capabilities. As I understand it, despite being a clean-label backdoor, it requires poisoning the training set. Therefore, this assumption must be clarified in the knowledge distillation setting or it will become impractical.\n\n- Related work and experiment details are introduced in the appendix. The main body is not self-contained.
\n\n- E should be corrected to $\\mathbb{E}$ in Equations 3, 5, and 6.\n\n**Reference**\n\n[1] Unleashing Cheapfakes through Trojan Plugins of Large Language Models.\n\n[2] A Gradient Control Method for Backdoor Attacks on Parameter-Efficient Tuning.\n\n[3] PPT: Backdoor Attacks on Pre-trained Models via Poisoned Prompt Tuning." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Backdoor Attacks with Knowledge Distillation" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024backdoor,\ntitle={Backdoor Attacks for {LLM}s with Weak-To-Strong Knowledge Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=29LC48aY3U},\nnote={under review}\n}" }, "abstract": { "value": "Despite being widely applied due to their exceptional capabilities, Large Language Models (LLMs) have been proven to be vulnerable to backdoor attacks. These attacks introduce targeted vulnerabilities into LLMs by poisoning training samples and full-parameter fine-tuning. However, these attacks are limited since they require significant computational resources, especially as the size of LLMs increases. Besides, parameter-efficient fine-tuning (PEFT) offers an alternative, but the restricted parameter updates may impede the alignment of triggers with target labels. In this study, we first verify that backdoor attacks with PEFT may encounter challenges in achieving feasible performance. To address these issues and improve the effectiveness of backdoor attacks with PEFT, we propose a novel backdoor attack algorithm from weak to strong based on feature alignment-enhanced knowledge distillation (W2SAttack). Specifically, we poison small-scale language models through full-parameter fine-tuning to serve as the teacher model.
The teacher model then covertly transfers the backdoor to the large-scale student model through feature alignment-enhanced knowledge distillation, which employs PEFT. Theoretical analysis reveals that W2SAttack has the potential to augment the effectiveness of backdoor attacks. We demonstrate the superior performance of W2SAttack on classification tasks across four language models, four backdoor attack algorithms, and two different architectures of teacher models. Experimental results indicate success rates close to 100% for backdoor attacks targeting PEFT." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Backdoor Attacks", "Large Language Models", "Knowledge Distillation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/caaa9bf12a1cca2444e903263c94276c67523450.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fdb532cb398071926b09947a1330895733d892db.zip" }, "title": { "value": "Backdoor Attacks for LLMs with Weak-To-Strong Knowledge Distillation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
29p13QihRM
Language-Guided Object-Centric World Models for Predictive Control
main
Active
Object-Centric Representation;World Model;Predictive Control
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5;5
3;4;4;4
2;2;3;2
1;2;2;2
2;2;3;3
4
3.75
2.25
1.75
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Effectively uses SAVi to extract object-centric frame features, enhancing computational efficiency and model accuracy.\n- Compares against two baseline models (Seer and Susie), highlighting the advantages in efficiency and success rate of the proposed approach.\n- Demonstrates generalization capabilities to unseen tasks and objects, showing robustness in diverse environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a language-guided, object-centric world model for predictive control, which is both computationally efficient and effective in robotic and autonomous tasks. Using slot attention for object-focused representation and language guidance, it outperforms diffusion-based models in task success, speed, and generalization." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The contribution in terms of \"object-centric\" design feels limited, as it primarily substitutes SAVi for the encoder without introducing distinct object-centric innovations.\n- The lack of an experiment comparing your proposed model with a VAE-based variant (ours + VAE in Tab. 1) makes it difficult to conclusively justify the benefits of slot attention.\n- Comparison against video diffusion models would be more appropriate than models like InstructPix2Pix, as diffusion models are more aligned with the proposed model's multi-frame prediction capability.\n- The analysis suggesting that future state prediction alone suffices for action decoding is questionable; the low accuracy for \"instruction + 0 future steps\" (2.5%) compared to near-zero performance for Seer implies that baseline results may lack rigor, potentially outperforming when future states are not predicted.\n- The dataset used is overly simplistic, limiting the scope of validation for the world model. Testing across multiple, varied environments would better demonstrate the model’s general applicability and robustness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What is trained during \"Predict future dynamics\" in Figure 1(b)? If nothing is trained, remove the \"fire\" sign near the world model.
\n\n- \"nature of slot attention not being robust to variable object number\": this could be clarified" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and mostly easy to follow. \n- The authors provide a comparison with several image generation baselines adapted for the robotics domain, showing a large gap from them. \n- The authors study how robust their method is to some changes in the environment, such as changing the block type or changing the task to an unseen one." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to extend SlotFormer to a language-instruction-conditioned object-centric dynamics prediction model. Such a model could be used for decoding future actions for a given state and instruction. These predictions are in turn used for decoding the best action for the next time step. The paper showed that in a synthetic environment with a large dataset, using such a structured representation leads to better performance in comparison to using diffusion models for future state prediction. In addition, the authors showed that such a model is able to generalize to unseen tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Overall, the proposed method is a simple modification of SlotFormer, adding language-goal-conditioned predictions and training on a large dataset of demonstrations. On its own this is not a big problem if the proposed method were studied on diverse and challenging environments and compared with other methods that are state-of-the-art world models (e.g. 
[6, 7]).\n\n- While the improved performance on the synthetic dataset is encouraging, it is still not clear how the method would perform in more realistic scenarios where both object-centric models and the corresponding agents can struggle. As mentioned by the authors, it was recently shown that object-centric methods are able to decompose much more challenging images or videos (e.g. see DINOSAUR (Seitzer et al. (2023)) for images or VideoSAUR [5] / SOLV (Aydemir et al., 2024) for videos). Thus, it would be important to test how object-centric world models perform in more realistic environments with visually more complex scenarios (e.g. by training LSlotFormer on VideoSAUR or SOLV slots in environments like ManiSkill2). \n\n- It is not clear how the method compares to the standard baselines on this task: while it outperforms diffusion models for video prediction, it is not clear whether world models using object-centric representations are comparable with state-of-the-art algorithms using the same data for training. \n\n- Some experimental results would benefit from further analysis: for example, it is not clear why using language conditioning for the agent itself decreases the success rate. \n\n- Some potentially missing related work in video-based object-centric learning, control with object-centric representations, and world models based on object-centric representations: \n\n1. Focus: Object-centric world models for robotics manipulation - also proposed a world model using object-centric representations. (https://arxiv.org/abs/2307.02427)\n2. Learning Dynamic Attribute-factored World Models for Efficient Multi-object Reinforcement Learning, NeurIPS 2023 (https://arxiv.org/abs/2307.09205) - learns a dynamics graph for more efficient policies. \n3. Self-Supervised Visual Reinforcement Learning with Object-Centric Representations, ICLR 2020 - proposed a goal-conditioned transformer-based policy (or action decoder in the authors' notation), https://arxiv.org/abs/2011.14381\n4. 
Entity-Centric Reinforcement Learning for Object Manipulation from Pixels (https://arxiv.org/pdf/2404.01220)\n5. Object-Centric Learning for Real-World Videos by Predicting Temporal Feature Similarities (extension of SAVi to more complex real-world videos using DINOSAUR), https://arxiv.org/abs/2306.04829\n\n\n6. TD-MPC2: Scalable, Robust World Models for Continuous Control\n7. PWM: Policy Learning with Large World Models" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "A few questions (some overlapping with the Cons. section above):\n- Is Language Table the de-facto setting for studying object-centric control? It seems fairly limited and biased towards object-centric approaches, since it is clearly possible to discard the background information quite easily. Studying it in cases of ambiguity, where sometimes the background is obvious to ignore and sometimes not, would bring more of the community to investigate this topic. \n- In Section 4.6, is a world model really necessary? Have the authors reported a pixel2action baseline, i.e., one that follows the same learning procedure but learns from images directly, using features extracted by some off-the-shelf network? The current results only ablate the absence of future slots, which makes sense, but that doesn't answer the question generally about needing a world model or not."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written and easy to understand. \n- The problem of building a world model for predictive control is a useful and relevant one to solve. \n- The authors have ablated the components of their approach fairly well, including how to do the best action decoding, how many past steps to use in the world model etc." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- The work proposes to inject language control in object centric world models and show its effectiveness in control. \n- It argues that object centric models, specifically based on slots as studied in the paper, are more efficient and performant than large scale video generation models based on diffusion. \n- They conduct experiments on a simulated table top manipulation benchmark to justify their method and various design choices.\n- They present an analysis on how to tune these world models in terms of action decoding, look ahead steps, access to past states to achieve good performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors do not have a SlotFormer baseline, which does not use any language conditioning. Given that one of the key claims of the paper is that language conditioned object centric world models help downstream tasks, checking the importance of being language centric is critical. Adding that baseline would be helpful. \n- For the evaluation of this approach, the authors have used the language table simulation environment, which involves some objects to be manipulated on a table top setting. 
This makes sense since there is a clear distinction between foreground (the objects) and background, which favors object-centric approaches over general video generative models. However, showcasing some other scenarios or evaluation setups where, intuitively, a video generative model would have an edge would have been interesting and more convincing to see. \n- Minor: The qualitative results in Figure 3 are not the easiest to parse for judging whether the authors' method works well; a video of the predictions would make the difference much clearer, but I couldn’t find anything in the supplementary material." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- It’s interesting that Seer gets basically 0% on all tasks. What are the qualitative failure cases there? \n- Since SuSiE was already evaluated on CALVIN, which is simulated, why not evaluate your approach in that setting?\n- In what qualitative settings would object-centered world models be more or less effective than image ones? Do the authors have any intuitive examples of this, and is there any experimental evidence to back that up?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is written quite clearly, I think the authors presented their ideas quite well\n- I appreciate the ablation discussions in 4.6." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose to train a language-conditioned latent dynamics model whose state representations are object-centric “slots” provided by a frozen pre-trained model. They then train an inverse dynamics model that predicts the actions corresponding to the transitions of the autoregressively-generated latent slot representations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Despite the paper being relatively clearly written, I would highly recommend using ICLR 2025’s extended page limit to increase the size and quality of your visualizations. For instance, for Figure 3, it’s very hard to see where the cube and moon are in the scene. I likewise cannot see any empirical quality differences between your approach’s generations and SuSiE’s.\n- The approach is not tested on a wide range of tasks – only the simulated LanguageTable benchmark. It’s not at all clear to me that it would generalize to the real world. Given that SuSiE was evaluated both in sim (CALVIN) and on real-world robots on Bridge-like tasks and showed good performance (compared to strong real-world baselines like RT-2), it is unclear if the present paper’s approach would similarly scale to such more complex tasks.\n- Similarly, the authors claim: “However, the major drawback of language-guided video-generation models is the requirement of large-scale labeled language-video datasets and the corresponding high computational cost. 
Therefore, latent predictive models, which abstract video to predict forward in compact latent state spaces, can serve as an alternative from a computational efficiency perspective.” \n - If this is true, it seems more sensible to evaluate in the real world, where data is more limited than in sim.\n - Additionally, SuSiE does show that an off-the-shelf image generation model pre-trained on general image-language data can be fine-tuned to work well with robot data just on existing relatively limited robot datasets. If that’s the case, it seems highly unclear that sample efficiency is a problem.\n- “The task is to move a block to another block based on the description of the colors or shapes … which are the red moon, blue cube, green star, and yellow pentagon.” This likewise seems very limited – I understand that there are generalization experiments, but the Bridge dataset used for SuSiE’s real-world experiments contains a much wider range of actions and objects, and thus also a much wider range of language (including many noisy labels). It has thus been demonstrated to be scalable to a wider range of language and visual entities, which I think would similarly benefit this approach (as it stands, being able to generate latent state trajectories for such a limited number of objects and actions does not say much about its scalability).\n- As it stands, given that the approach was only evaluated on a single task setting and said setting is not that representative of real-world language-conditioned visuo-motor robotics tasks, I do not think that this approach has sufficiently demonstrated its general applicability. I think more experiments in a wider variety of domains would be very helpful, especially real-world experiments.\n- Finally, I think it would be important to include results that showcase in what settings visual and object-centric world models each excel or break down.
I can imagine some cases wherein image or video generation is bad: for example, if I ask a robot to fetch me something from a closed cabinet, the image generator would have to effectively “imagine” what the inside of that cabinet looks like. However, I do not have corresponding intuition for object-centered world models (though it seems like their weaknesses might be quite similar). See last question for more." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a language-guided object-centric world models to predict future states and corresponding actions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024languageguided,\ntitle={Language-Guided Object-Centric World Models for Predictive Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=29p13QihRM},\nnote={under review}\n}" }, "abstract": { "value": "A world model is essential for an agent to predict the future and plan in domains such as autonomous driving and robotics. To achieve this, recent advancements have focused on video generation, which has gained significant attention due to the impressive success of diffusion models. However, these models require substantial computational resources. To address these challenges, we propose a world model leveraging object-centric representation space using slot attention, guided by language instructions. Our model perceives the current state as an object-centric representation and predicts future states in this representation space conditioned on natural language instructions. This approach results in a more compact and computationally efficient model compared to diffusion-based generative alternatives. Furthermore, it flexibly predicts future states based on language instructions, and offers a significant advantage in manipulation tasks where object recognition is crucial. 
In this paper, we demonstrate that our latent predictive world model surpasses generative world models in visuo-linguo-motor control tasks, achieving superior sample and computation efficiency. We also investigate the generalization performance of the proposed method and explore various strategies for predicting actions using object-centric representations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Object-Centric Representation", "World Model", "Predictive Control" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3f173253f745a65762ca596eada7280c88284504.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b5866674110a97042a07991a5b09e30eb0f8ca2e.zip" }, "title": { "value": "Language-Guided Object-Centric World Models for Predictive Control" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
29sul3tAEa
HyperAdapter: Generating Adapters for Pre-Trained Model-Based Continual Learning
main
Active
hypernetworks;adapter tuning;class-incremental learning
transfer learning, meta learning, and lifelong learning
3;5;5;5;6
4;5;4;4;4
3;2;3;2;3
2;3;2;2;3
1;3;3;3;3
4.8
4.2
2.6
2.4
2.6
0.102062
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- what are the closest existing approaches to the proposed HyperAdapters, only multiple adapters?\n- what are the goals of the experiments?\n- is it established that CL-100 is a better benchmark / good enough to be able to obtain conclusive empirical evidence (wrt the main goal of the experiments)?\n- why the hyperparameters in Equations 3 and 10 are both set to 0.1\n- how good/representative the initial pre-trained model should be for the main conclusions to hold?\n- what is the expected efficiency gain compared to learning more adapters? does it depend on how similar/dissimilar different tasks are?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- An intuitive high-level connection to the Complementary Learning Systems theory.\n- A simple approach with promising performance wrt accuracy.\n- Promising approach wrt scalability - HyperAdapter avoids an excessive number of adapters.\n- Overall, the paper is well-written and comprehensible to a wide audience."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel rehearsal-free approach for continual learning based on hypernetworks that generate so-called adapters to adapt the pre-trained model to different tasks.\nThe idea is intuitive and at a high level is brain-inspired:\n- the task dictionary is sort of an episodic memory in the hippocampus\n- the hypernetwork is sort of the neocortex, storing past knowledge. \n- task-specific embeddings are updated rapidly; and the general hypernetwork is updated slowly \nThe empirical results on the CL-100 benchmark introduced by the authors look promising." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is not fully clear how the technical novelty is positioned, as well as what baselines should be included for demonstrating what ideas actually work (e.g. other hypernetwork-based CL? other rehearsal-free CL approaches?). \n- Overall, the technical novelty is rather modest: hypernetworks have been used for CL in the past. The ideas of exploring a combination of faster and slower adapting models have been explored in the past. The idea of recognizing already observed/recurrent tasks/concepts has been studied in the past too. (however, the studied combination of ideas in the context of adapting pre-trained models is novel to the best of my knowledge).\n- The code and the experimental workbench are not available yet. Hence it is not easy to reproduce the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What fundamental differences exist between hyperadapters and existing methods that utilize hypernetworks for continual learning [1-6], aside from variations in application scenarios? Please clarify the key innovations.\n- Is it possible to extend this approach to other continual learning settings, such as class-incremental, domain-incremental, or task-incremental learning?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed HyperAdapter leverages hypernetworks to generate task-specific adapters for pre-trained models, addressing data privacy concerns and enabling effective knowledge transfer.\n- HyperAdapter requires fewer additional parameters as the number of tasks increases, making it suitable for long-sequence continual learning.\n- Experiments demonstrate that HyperAdapter consistently outperforms some methods in rehearsal-free continual learning.\n- The paper is clearly written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of catastrophic forgetting in continual learning, highlighting the limitations of traditional rehearsal buffer methods in data-sensitive contexts. The authors introduce HyperAdapter that employs hypernetworks to generate task-specific adapters for pre-trained models, thereby requiring fewer additional parameters as the number of tasks increases and promoting positive knowledge transfer across tasks. Comprehensive experiments demonstrate that HyperAdapter consistently outperforms existing methods on benchmarks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The core idea proposed in this paper, using hypernetworks to generate model parameters (whether for all parameters, some parameters, or even applied to pre-trained models) to tackle the continual learning problem, has already been extensively explored in the literature [1-6]. This paper merely applies these existing methods in the context of prompting-based continual learning with pre-trained models, which significantly limits its novelty and contribution.\n- Several of the innovative designs introduced, such as block-wise hyper-adapters, bear strong similarities in motivation and methodology to chunk embeddings and network partitioning discussed in [1]. This further constrains the novelty of the work.\n- One of the claimed main advantages, \"eliminating the necessity of knowing the task identities during inference,\" was previously addressed in [1] under the concept of unknown task identity inference. Additionally, the query-key matching mechanism commonly used in prompt-based continual learning to address this issue is a well-established practice [7-9].\n\n[1] Continual learning with hypernetworks. ArXiv:1906.00695 2019.\n\n[2] Continual learning with dependency preserving hypernetworks. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2023.\n\n[3] Continual model-based reinforcement learning with hypernetworks. 2021 IEEE International Conference on Robotics and Automation.\n\n[4] Hypernetworks for continual semi-supervised learning. ArXiv:2110.01856 2021.\n\n[5] Partial hypernetworks for continual learning. Conference on Lifelong Learning Agents 2023.\n\n[6] Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning. 
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing.\n\n[7] Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques. Information Sciences, 2024.\n\n[8] Learning to Prompt for Continual Learning. CVPR 2022.\n\n[9] DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning. ECCV 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How is the task dictionary initialized? The definition of $q(x)$ in Section 4.1 requires more clarity: is [CLS] a one-hot vector, or does it represent the class token embedding of ViT (which is then multiplied by f(x))? \n\n- Does the size of the task dictionary correspond to the number of training tasks (i.e., classification tasks) or the total number of classes across these tasks?\n\n- How is Equation 10 optimized? Including the training algorithm or detailed description of the training process of different model parts would be beneficial.\n\n- What is the parameter scale unit in Figure 3? Does it measure the parameters of the hypernetwork or the generated model parameters (e.g., U,W) during inference?\n\n- How does the proposed method compare with other hypernetwork-based continual learning approaches?\ne.g. \n * Ding, F., Xu, C., Liu, H., Zhou, B., & Zhou, H. (2024). 
Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques. Information Sciences, 674, 120710.\n * Hemati, Hamed, Vincenzo Lomonaco, Davide Bacciu, and Damian Borth. \"Partial hypernetworks for continual learning.\" In Conference on Lifelong Learning Agents, pp. 318-336. PMLR, 2023." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Generates parameters specifically for pre-trained model adapters rather than the entire network, enhancing training efficiency.\n- Introduces a task embedding dictionary for efficient retrieval of task embeddings for incoming tasks.\n- Provides a thorough and detailed experimental analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a class-incremental continual learning framework HyperAdapter, which uses hypernetworks to generate adapters to adapt a pre-trained model to different tasks. Specifically, it uses the pre-trained model's class embedding as a key to query the task embedding that generates the adapter parameters. Extensive experiments are performed on image classification benchmarks, showing improved performance over other regularization-based and prompt-based CL frameworks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The approach appears to be a direct adaptation of a hypernet-based continual learning framework to adapter-based fine-tuning. Task relationships rely solely on the pre-trained model’s class token embedding for input, making it resemble a method of training separate models for distinct classes, with conditional parameter generation handled by the hypernet. 
This setup may not effectively handle scenarios where new classes can not be easily matched with labels of the pre-trained model, such as domain-specific tasks. e.g. for complex facial emotion classification tasks, the pre-trained model would give similar class embeddings (e.g. human, face, eye glasses, etc) regardless of which emotion class the image belongs to. \n\n- The paper draws an analogy between the proposed method and the brain’s Complementary learning system (CLS). However, unlike the human brain, which can dynamically adjust its internal representations, such as merging or acquiring new concepts, the task dictionary here has keys fixed by the pre-trained classes, lacking the true flexibility of dictionary learning to adapt and integrate new concepts. It's suggested to consider ways to make the task dictionary more dynamic or adaptive over time." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The values of the hyperparameters are stated on page 7. How does varying the hyperparameter values affect performance?\n\nWhat are the memory requirements, and what do they depend on?\n\nWhat are the time requirements, and what do they depend on?\n\n“Improving this selection mechanism is left as a direction for future work.” – can you suggest some possible directions for improvement?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The research will be of interest to the ICLR community.\n\nOriginality: There is currently a lot of interest in avoiding catastrophic forgetting in the continual learning setting. The authors have summarised and categorised the main approaches. \n\nExperimentation: The experimentation is carried out on the standard data sets.\n\nReproducibility: I believe that the details are sufficient for reproducibility of the experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the problem of catastrophic forgetting in continual learning. The authors introduce a pre-trained model-based continual learning framework, HyperAdapter, which utilizes a hypernetwork to generate adapters based on the current input, adapting the pre-trained model to the corresponding task. A key to the method is that HyperAdapter uses representative features from pre-trained models, eliminating the necessity to know the task identities during inference or the dependence on any rehearsal buffers. Experimentation shows that it outperforms other methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Clarity: The paper discusses the main approaches at a high level and fails to clearly describe the key novelty of the proposed approach. Additionally, the comparison of the proposed method with how people learn is repeated in a number of places. The points around this are well known and widely documented. Removing the replication provides space to describe the novelty in more detail and room to discuss the implications of the results.\n\nTypos: Please check your paper carefully for typos including the repeated use of “regularation” on page 3. 
A grammar checker will pick up some of the errors.\n\nDiscussion of results: The paper is missing a discussion of the results. Adding this will provide a deeper understanding of the advantages of the approach.\n\nDiscussion of limitations: The paper is missing a discussion of the limitations of the approach and potential ways to address them. Adding this will provide a more balanced presentation of the research.\n\nDiscussion of broader impact: What are the open problems or future directions related to your work? Adding this to the paper would improve the paper's discussion of broader impact and potential future work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is essentially well-organized and easy to follow.\n\n2. The proposed hypernetwork seems to be a simple but effective strategy, applicable to both adapter-based and LoRA-based parameter-efficient tuning.\n\n3. Requiring fewer additional parameters is a good feature for continual learning that ensures scalability."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a pre-trained model-based continual learning method that employs a hypernetwork to generate adapters based on the current input. The proposed method features positive transfer and fewer additional parameters as the number of tasks increases. It outperforms a variety of continual learning methods in many representative benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As acknowledged by the authors, the hypernetwork itself is a large linear layer, and the use of a separate hyper parameter for each layer results in a much higher parameter cost. For fairness, it is therefore desirable to compare the total parameter cost with other baseline methods.\n\n2. Does the hypernetwork need to be updated in continual learning? If so, how does it overcome catastrophic forgetting?\n\n3. The authors only considered one particular pre-trained checkpoint of supervised ImageNet-21K. Does the proposed method apply to other pre-trained checkpoints, especially for self-supervised pre-training?\n\n4. The authors compared only a representative selection of pre-trained model-based continual learning methods. It would be more informative to consider other concurrent competitors, such as SLCA (ICCV’23), LAE (ICCV’23), RanPAC (NeurIPS’23), HiDe (NeurIPS’23), etc." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel hypernetwork-based framework to generate task-oriented adapters for pre-trained model-based continual learning."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024hyperadapter,\ntitle={HyperAdapter: Generating Adapters for Pre-Trained Model-Based Continual Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=29sul3tAEa},\nnote={under review}\n}" }, "abstract": { "value": "Humans excel at leveraging past experiences to learn new skills, while artificial neural networks suffer from the phenomenon of catastrophic forgetting during sequential learning. Efforts have been made to alleviate forgetting by introducing a rehearsal buffer into the model, but this way is impractical in real-world scenarios with data privacy. Recently, pre-trained model-based continual learning methods have provided new insights into addressing this issue by effectively utilizing the powerful representational capabilities of pre-trained models to avoid catastrophic forgetting without a rehearsal buffer. In this work, we propose a novel pre-trained model-based continual learning framework, HyperAdapter, which utilizes a hypernetwork to generate adapters based on the current input, adapting the pre-trained model to the corresponding task. This paradigm requires fewer additional parameters as the number of tasks increases, which is a critical advantage for scaling to long sequences continual learning. Unlike methods that partition task-related knowledge into relatively independent subspaces, it promotes positive knowledge transfer across tasks. Comprehensive experiments across various datasets demonstrate that HyperAdapter consistently outperforms all existing methods and even exceeds the upper bounds of multi-task learning, establishing a new state-of-the-art for pre-trained model-based continual learning. Our code will be released." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "hypernetworks", "adapter tuning", "class-incremental learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9e8aad06895211e44d2347bffb84b9e13c03e911.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/92801106eefb87d7445f58f7ba94e631e0cebc27.pdf" }, "title": { "value": "HyperAdapter: Generating Adapters for Pre-Trained Model-Based Continual Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ATD8a8P3C
Conformal Structured Prediction
main
Active
Conformal Prediction;Structured Prediction;Integer Programming
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
5;5;6
4;3;5
3;3;3
2;2;2
3;2;3
5.333333
4
3
2
2.666667
0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How should $m$ be selected in practice? From the experiments, this selection appears to be an important choice for the quality of prediction sets; however, the paper lacks discussion of this aspect." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-organized for the most part.\n2. The paper is technically sound in its description of the problem formulation and the marginal and PAC guarantees.\n3. Construction of prediction sets in the structured prediction setting and in the context of nodes in a directed acyclic graph is an important problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework for conformal structured prediction, i.e., conformal prediction in the structured prediction setting. The proposed framework outputs structured prediction sets that achieve marginal or PAC coverage guarantees while minimizing prediction set size. In the context of a set of nodes in a directed acyclic graph, a prediction set consisting of a small subset of coarse labels corresponds to the prediction set of fine-grained descendants of those coarse labels.
The paper presents empirical analysis of the approach in three domains to demonstrate its performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing discussion of important related work [1, 2]: I believe the paper misses citing and comparing with important related work on conformal risk control [1]. [1] considers hierarchical image classification in ImageNet similar to the paper and controls the graph distance between nodes. Additionally, the RAPS method in [2] is a conformal prediction method that introduces regularization to encourage smaller and stable sets, and is worth comparing to given the focus of the paper on reducing average set size.\n2. The empirical evaluation can certainly benefit from more analysis. In the current form, the contribution and significance of the method are not demonstrated very clearly:\n - It is hard to understand the utility of the method without comparison with more baselines. I believe doing this is especially possible for the marginal guarantees. Qualitative comparison of the prediction sets will also help demonstrate the utility of structured prediction sets. I see the paper discusses one example in the main text; however, there is certainly value in adding more examples in this case (also mentioning the error level used for standard conformal prediction and other details for fair comparison).\n - Following from above, I appreciate Table 1 in the paper as it helps understand the influence of hyperparameters better. I would suggest adding similar examples for other datasets as well.\n\n3. The motivation of the paper is not very clear in the beginning and only becomes clearer as the method and examples are discussed later in the paper. While the introduction has sufficient details about the method, I would suggest making the motivation for structured prediction sets clearer early on.\n\n\n\n\n**Minor comments:**\n1.
L60: parameter $\\tau$ has not been defined and referenced early I believe without sufficient context here.\n2. Similar comment for Figure 2. The caption makes reference to $\\tau$, whereas the notation has not been introduced earlier in text or in the caption.\n3. L306: typo/incomplete -> (2)\n4. L416-417: possibly missing word after ‘values’; “ in contrast, for the PAC guarantee, coverage for all values within one standard deviation...”\n\n[1] Anastasios Nikolas Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. Conformal Risk Control. International Conference on Learning Representations, 2024.\n\n[2] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty Sets for Image Classifiers using Conformal Prediction. International Conference on Learning Representations, 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(see above)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* It is an interesting problem, particularly how best to use the external structure of the labels to generate a better 'curve', i.e. recall at given output size.\n* The experimental setups were quite interesting, e.g. 
MNIST with number ranges.\n* The proposed method seems to extend well to DAG spaces (beyond trees). Though I suppose it is still restricted to DAGs instead of general graphs in order to sum up the probs of final leaf nodes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose conformal structured prediction, which addresses the interpretability issue of conformal predictions when the output set is complex. Their algorithm also differs from existing algorithms as in conformal structured prediction the search space over $\\tau$ is not monotone anymore. This approach can be applied to tasks where the output space can be represented as a directed acyclic graph and has a hierarchical structure. The authors provide formal coverage guarantees using PAC and marginal coverage and evaluate their approach on number prediction with MNIST digits, ImageNet classification and temporal Q&A." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I would love to see a baseline where we don't use the structure at all and instead rely on regular P/R curve characteristics. Does the AUC of this model behave better? It is not clear to me as such.\n\n* Even if we do use the external structure and are forced to only predict internal nodes in the DAG (as opposed to an arbitrary set of leaf nodes), it would still be useful to understand whether the P/R curve looks significantly different with the proposed models. There are plenty of baselines where we can do prediction on internal nodes in addition to leaf nodes."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the proposed method for conformal structured prediction fundamentally differ from or improve upon prior hierarchical prediction approaches? \n\n2. The framework assumes that the label space is represented by a DAG. How does this assumption impact generalizability to label structures that are non-hierarchical, cyclical, or have overlapping dependencies?\n\n3. Integer programming (IP) can be computationally intensive, particularly for large DAGs. Have you measured or benchmarked the runtime performance and scalability of the IP formulation, especially in tasks with larger label hierarchies?\n\n4. Could you elaborate on practical scenarios or domains where marginal coverage versus PAC coverage would be preferable? How should a practitioner decide between the two guarantees in real-world settings?\n\n5. Have you considered comparing this approach to other recent extensions of conformal prediction tailored to structured outputs or complex tasks (e.g., those applied to natural language or image data)?\n\n6. How sensitive is the framework to the hyperparameter m\n(the maximum number of nodes in the prediction set)? Is there a recommended method for tuning \nm based on the domain or task?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Extension of conformal prediction to structured outputs using DAGs, combining conformal prediction with hierarchical representations.\n2. Rigorous theoretical development with both marginal and PAC coverage guarantees, validated through experiments in diverse domains.\n3. Generally well-organized with clear explanations and helpful visual aids, making complex concepts accessible.\n4. Addresses an important gap, potentially impacting applications that require structu" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper \"Conformal Structured Prediction\" introduces a novel framework to extend conformal prediction methods to complex, structured output spaces. Conformal prediction typically provides sets of labels with a high probability of containing the true label, giving a quantified uncertainty. However, for tasks involving structured or hierarchical outputs—such as text generation, image classification with hierarchical categories, or question answering with date ranges—traditional conformal methods produce prediction sets that can be large and hard to interpret.\n\nThe authors propose a method to generate interpretable structured prediction sets using a conformal predictor that works within a directed acyclic graph (DAG) representing the label hierarchy. This approach maintains coverage guarantees while reducing prediction set complexity. 
The paper introduces algorithms for efficiently determining the smallest structured prediction set that meets a specified confidence level, and it adapts these methods to provide both marginal and Probably Approximately Correct (PAC) guarantees.\n\nThe authors demonstrate the utility of their approach through experiments on MNIST digits, hierarchical ImageNet classification, and a question answering dataset focused on years as answers. The results show that their framework achieves desired coverage rates while keeping prediction sets concise and interpretable." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper’s application of conformal prediction to structured outputs is valuable, similar approaches have been explored in hierarchical classification and structured prediction. For instance, previous works have used hierarchical structures (e.g., DAGs or trees) to improve interpretability in label prediction. The paper could benefit from a more thorough comparison to these existing methods, as well as a deeper explanation of what sets its approach apart. Highlighting any unique technical advancements in how the proposed framework constructs prediction sets in hierarchical settings (e.g., advantages of the DAG-based approach) would further clarify its contributions.\n\nSet-Valued Prediction in Hierarchical Classification with Constrained\nRepresentation Complexity. Mortier et al. (2022): This work focuses on hierarchical classification with constrained set-valued predictions, utilizing hierarchical label spaces in which classes are structured in a directed acyclic graph (DAG) or tree. It emphasizes creating interpretable predictions that adhere to hierarchical constraints, much like structured conformal prediction but without formal coverage guarantees.\n\n\n2. The experiments are limited to specific domains (MNIST digits, ImageNet, and SQuAD). 
Although these domains represent a variety of structured prediction tasks, they are relatively controlled environments and may not fully reflect the challenges of deploying conformal structured prediction in more complex real-world applications. For instance, the framework’s performance on prediction sets for multi-label classification or in contexts with high label ambiguity (e.g., complex multi-class categorization in medical or legal documents) remains untested.\n\nDeploying conformal structured prediction in a healthcare setting, particularly for diagnostic tasks with hierarchical or multi-label structures (e.g., identifying conditions or diseases from imaging data or lab results), would offer insights into the model's reliability and interpretability under more variable, high-stakes conditions. This field often requires nuanced coverage guarantees and interpretability to support clinical decision-making.\n\nReal-world document classification often involves hierarchical categories (e.g., legal documents, financial reports) and multi-label classifications. Testing in this setting could reveal how well the model scales with complex, unbalanced label hierarchies, providing additional insights into its generalizability to larger, noisy datasets that are typical in business and legal contexts.\n\n3. The paper assumes that the label hierarchy can be represented by a DAG, which works well for hierarchical classification but may be restrictive for tasks with overlapping or cyclical dependencies. In complex scenarios where relationships between classes are non-hierarchical or not acyclic, this assumption may not hold, potentially limiting the framework’s applicability to structured outputs with more intricate dependencies.\n\n4. The inclusion of PAC guarantees is a strong point, but the differences between the marginal and PAC guarantees could be better explored. 
The PAC guarantee is inherently conservative, and the experiments demonstrate that it leads to larger prediction sets in some cases. However, there is little analysis of scenarios where a PAC guarantee might be more beneficial than a marginal guarantee, or vice versa, depending on the task or risk tolerance." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We construct structured conformal prediction sets via integer programming." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024conformal,\ntitle={Conformal Structured Prediction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ATD8a8P3C},\nnote={under review}\n}" }, "abstract": { "value": "Conformal prediction has recently emerged as a promising strategy for quantifying the uncertainty of a predictive model; these algorithms modify the model to output sets of labels that are guaranteed to contain the true label with high probability. However, existing conformal prediction algorithms have largely targeted classification and regression settings, where the structure of the prediction set has a simple form as a level set of the scoring function. However, for complex structured outputs such as text generation, these prediction sets might include a large number of labels and therefore be hard for users to interpret. In this paper, we propose a general framework for conformal prediction in the structured prediction setting, that modifies existing conformal prediction algorithms to output structured prediction sets that implicitly represent sets of labels. 
In addition, we demonstrate how our approach can be applied in domains where the prediction sets can be represented as a set of nodes in a directed acyclic graph; for instance, for hierarchical labels such as image classification, a prediction set might be a small subset of coarse labels implicitly representing the prediction set of all their more fine-descendants. We demonstrate how our algorithm can be used to construct prediction sets that satisfy a desired coverage guarantee in several domains." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Conformal Prediction", "Structured Prediction", "Integer Programming" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/647fd0bf0755077a9bfeb957543a959cc1412d25.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Conformal Structured Prediction" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2AWZTv6kgV
Projected Neural Differential Equations for Learning Constrained Dynamics
main
Active
neural differential equations;neural ordinary differential equations;constraints;dynamics;scientific machine learning;ai for science
learning on time series and dynamical systems
1;5;5;8
5;4;3;4
2;2;2;3
1;3;1;3
2;4;2;4
4.75
4
2.25
2
3
-0.568535
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can the authors add some \"cartoons\" to ensure that readers less versed in differential geometry can still understand the main (pictorially easy, to be fair) intuition behind the paper? \n\nCan the authors expand the discussion of relevant literature (maybe with some table comparison?) as per the \"Weaknesses\" section? \n\nCan you provide insights into the computational complexity of the projection operation and discuss potential scalability issues (it need not be FOCS-style theory)? Include analysis of the numerical stability and error propagation introduced by the projection, if possible?\n\nRe the comment above \"The experiments focus on systems where the constraint manifold is relatively straightforward to compute. It would be valuable to test PNDEs on \"less trivial\" systems with high-dimensional constraints or where the constraint manifold has nontrivial topology maybe? \" can you maybe design such a larger instance and perhaps a harder-topology problem? I would not decline the paper based on this but I do think it would massively strengthen the paper. \n\nNote the only typo I found was that no $ sign was used in a couple of T_uM instances; just make sure you fix this. 
\n\nBy addressing these points, the paper would be strengthened in terms of positioning within the existing literature, theoretical rigor, empirical validation, and clarity, ultimately enhancing its significance and impact on the field, and it would make it a super strong paper for ICLR 2025." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a substantial advancement in the field of neural differential equations, given that the authors address an indeed crucial limitation of standard NDEs—the inability to enforce known constraints in the learned dynamical systems—which often leads to poor generalization and numerical instability. While many solutions to this have been discussed, e.g. within Hamiltonian neural nets, the strengths of the paper are several and go beyond the literature, to the best of my knowledge.\n\nThe introduction of PNDEs is a novel contribution, and the authors provide a principled method to incorporate hard constraints directly into the learning process. Their approach differs from e.g., Stabilized Neural Differential Equations by ensuring that the constraints are satisfied exactly, rather than asymptotically or approximately. Using projection operators is creative and less common within \"deep learning\" as opposed to more traditional convex optimization.\n\nThe paper demonstrates rigorous, but brief, theoretical development and provides clear mathematical derivations. The authors derive the projection operator using the Jacobian of the defining constraint functions, and explain in detail and ensure that the projected vector field remains within the tangent space of M; their claim that solutions to the PNDE remain on the constraint manifold is well-founded, and the proof is succinct yet thorough. 
Some extra discussion as well as a graphical illustration would be beneficial here! The experimental section is robust, and covers a range of systems.\n\nIn terms of clarity, I am happy to see a quite clear and well-written paper which manages to convey complex ideas effectively. That said, certain sections would be harder to follow for people with more ML/DL background, but I won't count this as a limitation. Note that the motivation behind enforcing hard constraints in NDEs is very clearly articulated, and the limitations of existing methods are adequately discussed. The derivation of the projection operator is presented step-by-step (again, a graphical illustration would do miracles here), making it accessible to mathematically inclined readers with a background in differential geometry and dynamical systems. The experimental figures and tables are informative and enhance the understanding of the results. The experimental setup is described in sufficient detail, allowing for reproducibility (although I could not locate a link with a repo).\n\n\nAs mentioned earlier, incorporating hard constraints into NDEs has significant implications for modeling realistic dynamical systems that inherently possess constraints, such as conservation laws and algebraic relationships. The ability of PNDEs to enforce these constraints exactly enhances the reliability and accuracy of the models, which is crucial in safety-critical applications like power grid management. However, many practitioners, especially for this example, would claim that the lack of rigorous guarantees is a problem. Returning to the main ideas of the paper: when suitable examples are considered, by improving generalization and numerical stability, PNDEs contribute to advancing the state-of-the-art in data-driven modeling of dynamical systems. This work arguably opens up new possibilities for applying NDEs to a broader class of problems where constraints play a vital role." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of learning constrained dynamical systems in the context of neural differential equations (i.e., NDEs hereafter). The term NDEs includes the 2018 class of NODEs and generalizations thereof such as UDEs; bagging them together, the authors are interested in these models since they allow for flexible modeling of dynamical systems by parameterizing the vector field f_\\theta with neural networks. The authors observe, however, that they do not inherently enforce possible known constraints that the system may have, such as conservation laws, holonomic constraints (both applying quite well in Hamiltonian systems, for example), or algebraic relationships, and this can lead to learned models that, to different levels of severity, may violate essential properties of the system, resulting in poor generalization and numerical instability.\n\nTo overcome this limitation, the authors introduce the \"Projected Neural Differential Equations\" (PNDEs), whose key idea is to enforce constraints by projecting the neural network vector field f_\\theta onto the tangent bundle TM of the constraint manifold M, which is defined by algebraic equations g(u) = 0. Specifically, they define the projected vector field as Proj_u (f_\\theta) \\in T_uM,\nwhere \\mathrm{Proj}_u is the orthogonal projection operator from T_uE onto T_uM. By integrating this projected vector field, the solutions remain on the manifold M, ensuring that the constraints are satisfied for all time. So in some sense, they try to \"get rid\" of the components of the vector field/neural net that would be learned but would live outside the constrained submanifold the physical system actually lives on.\n\nThe authors provide a detailed derivation of the projection operator using common decomposition techniques. 
They demonstrate that for an embedded submanifold M defined by smooth constraint functions g(u) the projection can be explicitly computed using the Jacobian of the latter, i.e., the constraints. This allows for efficient computation of the projected vector field during numerical integration.\n\nTo validate their approach, the authors conduct experiments on several challenging dynamical systems: the Fermi–Pasta–Ulam–Tsingou lattice system, a damped pendulum system with holonomic constraints and power grid models that incorporate power flow equations. \n\nCompared to various existing methods they cite, such as SNDEs, the experiments seem to verify claims that the proposed method offers exact constraint enforcement without introducing additional hyperparameters and/or suffering from stiffness issues in numerical integration which arguably distinguishes PNDEs from penalty-based methods or those that incorporate constraints as soft losses during training, which may not guarantee constraint satisfaction during inference.\n\nOverall, the paper presents a principled and general framework for incorporating hard constraints into neural differential equations by projecting the neural network vector field onto the tangent space of the constraint manifold. This approach enhances the modeling of constrained dynamical systems, improving accuracy, generalizability, and numerical stability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper, as discussed, presents a novel and effective method for incorporating hard constraints into neural differential equations there are several areas where the work could be improved.\n\nOne of the (few) main weaknesses lies in the discussion of related work and positioning of the proposed method within the existing literature. The paper focuses primarily on comparing PNDEs to SNDEs. 
However, there is a rich body of research on incorporating physical constraints and conservation laws into neural network models of dynamical systems that is not adequately addressed.\n\nFor instance, the (indeed) cited HNNs [Greydanus et al., 2019] and Symplectic ODE-Net [Zhong et al., 2020] (since the authors do mention inductive bias in the intro) are significant contributions that leverage the symplectic structure of Hamiltonian systems to enforce conservation of energy and other invariants. These methods learn the Hamiltonian function directly and ensure that the learned dynamics preserve the symplectic form, inherently satisfying certain physical constraints. Therefore, it's not clear to me whether the PNDEs would be relevant in systems where HNNs seem to perform very well. As a matter of fact, recent work on learning generalized Hamiltonians using fully symplectic mappings [Choudhary et al. 2024] addresses the challenge of modeling non-separable Hamiltonian systems using implicit symplectic integrators within neural networks, which should be a class of problems where previously I would have assumed PNDEs to be prime candidates to work on, but it's just not clear to me what the best approach would be in such situations. So, overall, I would prefer a more thorough discussion here. Finally, I would be keen for the authors to demonstrate further understanding of the literature on projections. For example, it is known that such projections introduce certain symmetries. These symmetries ideally can be quotiented out in order to facilitate easier training; see for example a similar construction in convex optimization and SDPs where the tangential projection symmetries need to be addressed [Bellon et al. 2210.08387].\n\n\nWhile the paper provides a clear derivation of the projection operator and proves that solutions to the PNDE remain on the constraint manifold, the theoretical analysis could be strengthened. 
Specifically, the paper lacks a discussion on the computational complexity and scalability of the projection operation in high-dimensional systems or with complex constraints. Maybe it's too hard? From the practitioner's point of view this is important too. Given that the experiments discuss power grids (we would normally use BnC methods and not gradient-based methods, for a number of reasons), this is important. Also, computing the projection onto the tangent space requires solving a system involving the Jacobian of the constraints, which can be computationally intensive for large-scale systems.\n\nMoreover, the paper does not provide theoretical guarantees on the convergence or stability of the PNDEs beyond the preservation of the constraints. Are there some assumptions that can be made that would allow for an analysis of the numerical errors introduced by the projection and their impact on the overall solution accuracy? Additionally, insights into how the method performs under approximate constraints or in the presence of noise would enhance the understanding of its robustness.\n\n\nRe the experimental section: while it demonstrates the effectiveness of PNDEs on several systems, it could be expanded to provide a more comprehensive evaluation. The experiments focus on systems where the constraint manifold is relatively straightforward to compute. It would be valuable to test PNDEs on \"less trivial\" systems with high-dimensional constraints or where the constraint manifold has nontrivial topology, maybe? \n\nFurthermore, the comparison is primarily with SNDEs and unconstrained NDEs. Including additional baseline methods, such as HNNs, Symplectic Neural Networks, or other constraint-enforcing techniques, would strengthen the empirical evaluation. This would provide a clearer picture of the advantages and limitations of PNDEs relative to existing approaches.\n\nWhile the paper is generally well-written, certain sections could be clarified for better accessibility. 
The derivation of the projection operator, although mathematically rigorous, might be challenging for readers not deeply familiar with differential geometry. Providing more intuitive explanations or illustrative examples could help bridge this gap.\n\nAdditionally, the notation used in some equations, such as the use of adjoints and pseudoinverses, could be explained in more detail. Ensuring that all symbols and operations are clearly defined would improve the readability of the paper.\n\n\nAnother point is that the proposed method assumes that the constraints can be expressed as explicit algebraic equations and that the Jacobian of the constraints is full rank. In practice, many systems might have constraints that are implicit, differential-algebraic, or have singular Jacobians. What happens then? Discussing how PNDEs could be extended or adapted to handle such cases would enhance the significance and applicability of the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My concerns regarding this paper are as explained above." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easy to read. 
Some experiments were conducted to support the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a method for learning differential equations while preserving conservation laws is proposed. Specifically, to preserve conservation laws, the authors project the learned vector field onto the tangent bundle of the manifold defined by the conserved quantities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed method is not novel because this method has already been proposed; the continuous-time model is shown in [1], and the discrete-time model is shown in [2]. \n\n[1] Kasim, M.F. and Lim, Y.H. (2022) Constants of motion network, NeurIPS 2022\n\n[2] Matsubara, T. and Yaguchi, T. (2023) FINDE: Neural Differential Equations for Finding and Preserving Invariant Quantities, ICLR 2023\n\nIn [1], the learned vector field is designed to be orthogonal to the gradient vectors of the conserved quantities. Precisely, the learned vector field is projected onto the tangent space at each point of the manifold defined by the conserved quantities, which is the same as the approach proposed in this paper. In [1], the QR decomposition is used for orthogonalization and hence the method of computing the projection operator is a little different from that of this paper, which uses the pseudo-inverse.\n\nIn [2], exactly the same approach as this paper is proposed; in [2], the manifold defined by conserved quantities is first introduced. Then, they consider tangent bundles of this manifold, and project the learned vector field onto the tangent space at each point. More precisely, in [2], a continuous-time model is first considered. Equation (6) in [2], which represents the continuous-time model, is completely identical to (7) in this paper. 
In [2], the pseudo-inverse matrix is also used for projection, though it is computed explicitly (so the model looks a little different). In addition, in [2] a discrete-time model is also discussed. In the discrete-time model, the discrete gradient, which is a discrete version of the gradient, is considered, and the discrete tangent space is defined using the discrete gradient. The discrete-time model is essentially the projection onto this discrete tangent space.\n\nIn addition, it seems that the conserved quantities are assumed to be given in this paper; however, the methods shown in the above papers can handle cases where these quantities are unknown. \n\nConsidering the above, the contributions of this paper are quite limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please address the concerns mentioned in the Weaknesses above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Constrained dynamics are present in real-world problems, and existing NODEs that do not consider these constraints run the risk of failing to satisfy them. In contrast, the proposed method achieves hard constraint satisfaction through projection."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a projection-based approach to ensure hard constraint satisfaction in constrained dynamics, which is important for several real-world applications. However, several concerns outlined in the weaknesses section remain, and further investigation of these limitations is needed to improve its practical applicability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe methodology is based on the assumption that the analytic form of the constraint function $g$ is known, which seems impractical. In real situations where the dynamics are unknown and the states $u$ are given only by the data, there are often many cases where the analytic form of $g$ is also unknown.\n2.\tWhat is the difference and advantage of the proposed method of projecting the forcing function $f$ onto the tangent space compared to projecting the predicted state $u$ onto the constraint manifold? Projecting $u$ onto the constraint manifold seems simpler than projecting $f$ onto the tangent space, while still satisfying the constraints.\n3.\tThe authors suggest restricting $f$ to the tangent space to satisfy the constraints. This leads to a question related to the one above: Is the original problem Eq. (1) with constraints on the state equivalent to the constrained problem Eq. (8) considered by the authors with the forcing term in the tangent space? Eq. (8) may limit the range of expressible dynamics.\n4.\tThere is a concern that the computation of the adjoint differential and pseudoinverse in Eq. (7) would be quite difficult for general $g$.\n5.\tInstead of enforcing hard constraints, constraints could be incorporated into the loss function by penalizing it. For instance, this could involve adding the $L^2$ norm of $g(u)$=$(g(u)-0)$ as a regularization term to the existing NODE loss. 
An experimental comparison with this approach seems necessary.\n6.\tWhile hard constraints ensure that the constraints are satisfied, they are not necessarily superior to soft constraints. Hard constraints can limit the representational power of the network and may negatively impact training because of their complex computational structure. It is crucial to understand and experimentally verify the trade-off between satisfying constraints and the model's capacity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Could the paper provide a detailed formula of computing the \"Relative State Error\" and \"Constraint Error\"?\n\n2. Could the paper explain in detail how the trajectory of the figure was selected?\n\n3. What does the vertical axis of the leftmost subgraph in Figure 2 mean?\n\n4. Section 4.2: Why does NDE using generalized coordinates that satisfy constraints perform poorly, and does this mean that preserving constraints cannot directly indicate better predictions? Can the results highlight the importance of constraints?\n\n5. Section 4.3: Do we know the governing function for this example, or are the dynamics learned from the given data? Additionally, the statement 'apply random perturbations to each grid (see Appendix A) and learn the dynamics from the resulting transients' feels unclear. Could the paper clarify this? 
Also, what are the sizes of the training and test datasets used for this example?\n\n6. For all examples, the paper only presents the state error for a single test trajectory. Could the paper provide more comprehensive quantitative evaluations? \n\n7. The assumption of known constraints may be too restrictive. It would be helpful to discuss scenarios where the constraints are known, but the governing function is unknown and data is provided." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents an approach to integrating known constraints into neural differential equations, which is a relatively unexplored area in the field of machine learning and dynamical systems. The introduction of the projection method to enforce constraints on the vector field is innovative.\n\n2. The empirical results demonstrate that PNDEs outperform existing methods in terms of accuracy and stability. The experiments conducted on challenging examples, including chaotic systems and power grid models, further validate the robustness of the proposed method.\n\n3. The paper is well structured and clearly written, making complex concepts accessible to readers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Projected Neural Differential Equations (PNDEs), a novel method designed to incorporate known constraints into neural differential equations. Specifically, PNDEs project the vectors of the vector field onto the tangent spaces of the manifold defined by known constraints. This allows for the enforcement of various constraints without the need for specific coordinate systems. The paper provides empirical evidence through experiments showing that PNDEs satisfy constraints more accurately than state-of-the-art baselines." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is a potential ambiguity in the notation used in the paper. Specifically, the definition of $\\mathcal{E}$ is lacking; is it $\\mathbb{R}^n$? The notations $f_{\\theta}$ and $\\bar{f}_{\\theta}$ in equations 1, 3, and 8 are not entirely consistent.\n\n2. The paper primarily focuses on known constraints. In many cases—such as the first example presented—it shows that if the constraints are known, the corresponding total differential equation can be determined.\n\n3. The effect of the ODE solver is not discussed in this paper. Ideally, the constraints should be preserved exactly, as stated by Proposition 1 in the paper. However, the numerical results indicate that they are only preserved approximately, albeit with small errors. These discrepancies may arise from numerical errors introduced by the ODE solver.\n\n4. The setup of the first example differs from that in Section 2. Is it possible to explicitly write out the manifold for the example?\n\n5. The empirical results demonstrate that the proposed PNDEs outperform existing methods in terms of the consistent error. However, the improvement in terms of prediction error is less pronounced. I am not certain whether the ability to preserve the given constraints is the most critical indicator. If we aim to improve this, the simplest and most straightforward approach would be to project the predicted state onto the known manifold during the prediction phase." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024projected,\ntitle={Projected Neural Differential Equations for Learning Constrained Dynamics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2AWZTv6kgV},\nnote={under review}\n}" }, "abstract": { "value": "Neural differential equations offer a powerful approach for learning dynamics from data.\n However, they do not impose known constraints that should be obeyed by the learned model.\n It is well-known that enforcing constraints in surrogate models can enhance their generalizability and numerical stability.\n In this paper, we introduce projected neural differential equations (PNDEs), a new method for constraining neural differential equations based on projection of the learned vector field to the tangent space of the constraint manifold.\n In tests on several challenging examples, including chaotic dynamical systems and state-of-the-art power grid models, PNDEs outperform existing methods while requiring fewer hyperparameters.\n The proposed approach demonstrates significant potential for enhancing the modeling of constrained dynamical systems, particularly in complex domains where accuracy and reliability are essential." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "neural differential equations", "neural ordinary differential equations", "constraints", "dynamics", "scientific machine learning", "ai for science" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6f4494dd3c83939c7d33f4ba6198bf960496e7b9.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Projected Neural Differential Equations for Learning Constrained Dynamics" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Akf4BBCKo
KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing
main
Active
Large Language Model;KV Cache;KVSharer
foundation or frontier models, including LLMs
3;3;3;5;5
4;4;4;3;4
1;2;1;3;2
1;2;2;3;3
2;3;3;3;3
3.8
3.8
1.8
2.2
2.8
-0.612372
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think the paper will be much more ready if the authors could address the following questions (from high to low priority):\n\n1. Could the authors provide comparisons with other layer-wise compression strategies in terms of accuracy and system performances?\n\n2. Did the authors investigate the relationship between dis-similarity ranking and the acceptance rate by the thresholding condition? It's possible that the cosine similarity check, rather than dissimilarity ranking, plays a primary role. In principle, if the \"higher dis-similarity --> inter-layer KV cache sharing gives better performance\" hypothesis holds, then a higher rank should correspond to a higher acceptance rate. Could the authors provide additional results and justification on this point?\n\n3. There is an important threshold in this work: the cosine-similarity (representation similarity) threshold that determines whether to accept a KV cache pair. Can the authors provide explanations on how the value is determined/searched? Moreover, the number of target shared KV cache layers is also an important hyper-parameter, and an ablation study on it is provided in Table 1. But can the authors provide some guidance/calculation on how this number translates to memory saving and inference speedup?\n\n4. For KV cache dissimilarity distance, why did the authors choose Euclidean distance?
Could the authors ablate on other distance metrics? Similarly, for cosine similarity from the final layer hidden states, what if some other metric, like angular distance, is used (less important, just wondering)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper and the technique introduced have the following strengths:\n\n1. Paper writing is easy to follow with good figures and illustrations.\n2. The experiment sections demonstrate KVSharer can be used orthogonally with other intra-layer KV compression techniques like H2O and PyramidInfer to achieve higher memory saving and more significant speedup.\n3. The paper brings up a new angle" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new inter-layer KV cache compression technique through layer-wise KV cache dis-similarity search and sharing. The layers are ranked pairwise in accordance with their dis-similarity score. For each pair, an earlier layer's KV will be shared and reused by a later layer for efficient pre-filling and generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have several concerns about the paper:\n\n1. Even though layer pairs are ranked from high dis-similarity to low dis-similarity, whether to use the pair still depends on the cosine similarity between the KV-cache compression model and the original model. There is a possibility that the cosine similarity check, rather than dis-similarity ranking, plays a major role.\n\n2. A major claim in the paper is that a dis-similarity metric is better than a similarity metric when it comes to inter-layer KV cache sharing.
Empirical evidence is provided in Section 5.1 and Figure 6 when changing the Euclidean-distance based ranking from descending order (dis-similarity) to ascending order (similarity). However, I didn't find any theoretical or empirical evidence that \"Euclidean distance for the KV cache is a sufficiently good metric\" in comparison with the other SOTAs. More specifically, how does KVSharer compare with other layer-wise compression strategies, for example miniCache [1], LCA [2], CLLA [3] and simpleLayerKV [4]? Without these experimental results, I don't think the paper is ready at this stage for publication.\n\n[1] Liu, Akide, et al. \"MiniCache: KV Cache Compression in Depth Dimension for Large Language Models.\" arXiv preprint arXiv:2405.14366 (2024).\n\n[2] Brandon, William, et al. \"Reducing Transformer Key-Value Cache Size with Cross-Layer Attention.\" arXiv preprint arXiv:2405.12981 (2024).\n\n[3] Yang, Zhen, et al. \"Lossless KV Cache Compression to 2%.\" arXiv preprint arXiv:2410.15252 (2024).\n\n[4] Zhang, Xuan, et al. \"SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction.\" arXiv preprint arXiv:2410.13846 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Not applicable." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the corresponding concerns listed in the weaknesses section."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- S1. This paper explores an important problem of improving the efficiency in utilizing KV cache in LLM generative inference. \n\n- S2. The related work and research context are well summarized." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper first presents a counterintuitive phenomenon when attempting to leverage the cross-layer pattern to improve the efficiency of the LLM generative inference computation, where sharing dissimilar KV caches better preserves the model performance. Based on this observation, this paper introduces a method named KVSharer, which integrates this observation to implement efficient cross-layer KV cache sharing. An empirical study has been conducted to verify the effectiveness of the proposed methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- W1. Heuristic-based on aggregated information. As enumerated in Section 3.1.2, the proposed method uses the averaged value of the KV-cache to consider the similarity between different layers -- it is a little confusing why such highly integrated information could guide the sharing policy, considering lots of recent work has been exploring the KV-cache utilization at token, layer, and head level jointly. My concern is whether such a highly aggregated metric is informative or not.\n\n- W2. My main concern is about the experimental setup. There is a significant mismatch between the motivation example in the introduction, e.g., \"During the LLM inference phase, the KV cache typically accounts for 80% of the total memory usage.\" and the benchmarked settings, where the context window is set to just a few thousand, e.g., up to 1024+4096 in Table-2. 
Unless the batch size is extremely large (not mentioned in the paper), there is a significant gap between the motivation and the experiments. I think it would be critical to evaluate the performance of the proposed method over long-context benchmarks (e.g., InfiniteBench) where the model's context window should be from 32K to 128K (or even longer). Otherwise, the truly useful scenario is not evaluated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. This paper addresses a good research topic: efficient LLM inference.\n \n2. The paper is well-organized.\n \n3. The proposed method is clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces KVSharer, a plug-and-play method for compressing the key-value (KV) cache of large language models (LLMs) during inference. Unlike the intuitive approach of sharing similar KV caches, KVSharer is based on a counterintuitive observation: sharing different KV caches across layers does not significantly degrade model performance.
KVSharer employs a search strategy to identify the optimal KV cache sharing policy across different layers, substantially reducing GPU memory usage while retaining most of the model’s performance. Additionally, KVSharer is compatible with existing intra-layer KV cache compression methods, offering a complementary approach to memory optimization for LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of novelty and research depth:** The main technique is to share dissimilar KV caches for efficient inference, which is quite simple. Although the authors claim that this originates from a counterintuitive observation, there is no motivation provided in the methodology section. Therefore, neither the novelty nor the research depth of this paper is sufficient for a top AI conference.\n \n2. **Unreasonable observation without further analysis:** The observation that sharing dissimilar KV caches brings better accuracy than sharing similar ones sounds unreasonable: dissimilar KV states output different attention scores, making the LLM attend to different parts of the query token. It is more convincing that the obtained conclusion is just a coincidence and varies across the models and datasets, considering that no in-depth analysis has been provided.\n\n3. Lacks a Needle-in-a-Haystack experiment."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why is it better to share dissimilar KV caches? Since the authors themselves describe this as counterintuitive, providing an explanation for this phenomenon would be highly valuable for the community.\n\n* What happens if KVSharer is unable to find $C$ pairs of layers to share KV caches while satisfying the threshold $T$? It would be helpful to include a guideline on setting this threshold and any evaluation showing its impact on search performance.\n\n* In Table 2, why does the memory usage reduction exceed the compression rate? Additionally, what is the source of the observed increase in generation throughput? Since KV cache sharing reduces memory usage but likely not memory bandwidth, it is unclear how this improves inference throughput." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Does not require training\n* Provides an interesting and novel insight that sharing dissimilar KV caches yields better performance.\n* Offers diverse and insightful evaluation results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces KVSharer, a post-training method for layerwise KV cache sharing. Based on the counterintuitive observation that sharing KV caches between layers with dissimilar, rather than similar, KV caches leads to less performance degradation, KVSharer employs a systematic search strategy for KV sharing. As a result, KVSharer reduces GPU memory consumption while maintaining model performance." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Results show a noticeable performance drop even at low compression rates (e.g., 12.5%, 25%), which may limit the practicality of the method.\n* Lacks an explanation for why sharing dissimilar KV caches yields better performance, leaving an essential aspect of the method's effectiveness rather unclear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the time scalability of the proposed approach? Will the inference time remain acceptable when scaling up to models with over 400 billion parameters? It would be valuable to provide an estimation or analysis to address this concern." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This idea offers new insights into how memory size can be further reduced, potentially leading to more efficient model deployments and optimized hardware utilization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel approach to sharing the key-value (KV) cache across different layers in a new dimension, which can lead to more efficient memory usage and improved performance." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper lacks a comparison with other cache-sharing methods, which would provide a clearer understanding of its advantages.\n\n2) It should consider the scenario when the KV cache is quantized, as quantization is often used during inference to save energy.\n\n3) The paper also lacks a scalability analysis, which is crucial for evaluating how well the proposed method performs as model size and complexity increase." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024kvsharer,\ntitle={{KVS}harer: Efficient Inference via Layer-Wise Dissimilar {KV} Cache Sharing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Akf4BBCKo},\nnote={under review}\n}" }, "abstract": { "value": "The development of large language models (LLMs) has significantly expanded model sizes, resulting in substantial GPU memory requirements during inference. The key and value storage of the attention map in the KV (key-value) cache accounts for more than 80\\% of this memory consumption. Nowadays, most existing KV cache compression methods focus on intra-layer compression within a single Transformer layer but few works consider layer-wise compression. In this paper, we propose a plug-and-play method called \\textit{KVSharer}, which shares the KV cache between layers to achieve layer-wise compression. Rather than intuitively sharing based on higher similarity, we discover a counterintuitive phenomenon: sharing dissimilar KV caches better preserves the model performance. Experiments show that \\textit{KVSharer} can reduce KV cache computation by 30\\%, thereby lowering memory consumption without significantly impacting model performance and it can also achieve at least 1.3 times generation acceleration. 
Additionally, we verify that \\textit{KVSharer} is compatible with existing intra-layer KV cache compression methods, and combining both can further save memory." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "KV Cache", "KVSharer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/89b8a1b38e6cf82dd43280011e39cb6cefe942b9.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/f460ce31466303cf912af801d075b73567417681.zip" }, "title": { "value": "KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2BtFKEeMGo
Learning from weak labelers as constraints
main
Active
unsupervised learning;weak supervision;learning theory
learning theory
5;6;6;8
4;4;2;3
2;3;3;4
2;2;3;4
2;3;2;3
6.25
3.25
3
2.75
2.5
-0.345857
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea is novel and the theory is rigorous. The proposed algorithms lead to significant improvements in empirical evaluations on some datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores programmatic weak supervision by treating weak labelers as constraints in a classification task.The authors propose a constrained optimization approach that integrates weak labeler error bounds directly into the learning objective. This forms a complex optimization problem and is solved with a novel alternating minimization algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper misses a conclusion and future extensions paragraph.\n2. On some datasets, the margin of the proposed methods and competing methods are small. Would it be helpful to run some statistical tests to compare their performances?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I suggest the authors use different markers and line styles for different datasets instead of only using color to differentiate different lines.\n2. There are several ?? in the paper. For example, on line 1357, 1359, 1418: ??\n3. On line 491, the author mentioned that they implemented Algorithm 1 with an L2 regularization. I wonder what are the impacts of other regularization." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Novel approach: This work presents a novel constraint-based objective that specifically considers accuracy bounds.\n2. Scalable: A linear program for classifier constraints and a convex optimization problem for distribution constraints can effectively execute the paper's efficient alternating minimization approach.\n3. Thorough theoretical analysis: The paper offers a thorough theoretical examination of the suggested approach. These analyses offer assurances on the trained classifier's inaccuracy and draw attention to the denoising impacts.\n4. 
Excellent empirical performance: According to an experimental evaluation on a well-known weak supervision benchmark, the suggested approach outperforms current baselines, proving its efficacy and resilience." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "By using accuracy restrictions on weak labelers as learning constraints, this work introduces a novel method for programmatic weak supervision. The paper makes three primary contributions:\n\n1. create a constraint-based method for aggregating weak labelers; \n2. present a scalable optimization problem; and \n3. offer a theoretical analysis of the suggested constrained estimator.\n\nThe suggested method is technically sound and well-motivated, and the paper is well-written. The empirical evaluation shows the efficacy of the suggested approach, while the theoretical analysis sheds light on the denoising consequences of several weak labelers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors admitted that in the case of learning on classifier solving the ILP can still be slow even with LP relaxation. Additionally, because the stochastic gradient descent relies on the population means of the weak labeler accuracies, the method is unable to use a small batch size." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses above. And,\n1. I do not find the empirical results convincing. Ours(C) and Ours(V) rely on either hand tuning $\\eta_j$ or estimating from validation data. Did you estimate source quality in other WS setups using the same validation data? \n\n2. Can you provide some simulations with different $\\eta_j$ and labeling sources outputs, clearly showing how the method works in different scenarios?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents an interesting alternative to traditional generative models for weak supervision. By viewing weak labelers as error-bound constraints, the approach avoids common assumptions about label independence or probabilistic structure, which may not hold always. \n\n2. The authors provide an upper bound on error (in the union of covered region by all weak labelers) of any predictor satisfying all the constraints. The upper bound is summation of upper bounds on the errors in each weak labelers and the probability of region where weak labelers have a conflict. Implying better bound with more conflict.\n\n3. The method is evaluated on weak supervision benchmarks, where it demonstrates improved accuracy over other weak supervision methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for learning from programmatically generated weak labels by treating weak labelers as constraints rather than relying on traditional generative models. 
This approach uses side information in the form of bounds on weak labelers’ error rates, which are applied as constraints within a constrained optimization framework. The authors introduce an alternating minimization algorithm to iteratively project model predictions onto the feasible region defined by these constraints. They evaluate the method on multiple weak supervision benchmarks and demonstrate that it improves upon traditional weak supervision techniques, such as Snorkel, by incorporating this constraint-based learning approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The model relies on accurate estimates of weak labelers’ error bounds to define constraints. However, obtaining these estimates is challenging, and inaccurate bounds could lead to suboptimal model performance. In the experiments these are estimated using validation data, which could also be hard to obtain. In contrast several baselines in weak supervision (those based on generative modeling) estimate 'labelers quality' using only the unlabeled samples. \n\n2. Assumptions made on the weak labelers are not clear. In particular, what are the assumptions that the labelers have to satisfy to ensure the method will work as expected and the theoretical results will hold. Naively putting an upper bound of $\\eta_j$ on each labeler could lead to several scenarios e.g. all could have $\\eta_j$ error in different parts of the input space (~independence) or highly overlapping parts (~highly correlated). Could you explain how the method and results will turn out in these two extremes? \n\n3. Theoretical results do not explain how learning in the proposed setup leads to a classifier with good generalization error. It naively depends on the summation of errors of individual labelers. 
On the other hand, several of the baselines provide results showing how the labelers cancel their noises and eventually lead to a classifier with comparable generalization error to a model trained on clean labels. They do make certain assumptions on labelers to get there. What can be said more specifically in this setup with similar assumptions? Even a naive majority vote with labelers with random noise of $\\eta_j$ could be shown to give good generalization error going down with the number of samples and the number of weak labelers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper moves away from traditional generative models, avoiding the often unrealistic assumption of conditional independence between weak labelers, making it more flexible for real-world applications.\n\n2. The theoretical analysis is quite thorough, especially with the introduction of projection techniques and alternating minimization, showing how to effectively build a classifier without labeled data.\n\n3. Good writing."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel approach for learning from weak labelers by framing the problem as a constrained optimization task. Instead of relying on generative models or conditional independence assumptions, the paper proposes using known upper bounds on the error rates of weak labelers to guide the learning process. The paper develops an alternating minimization algorithm to iteratively generate soft pseudo-labels that satisfy the constraints and train the model accordingly. Theoretical analysis is provided to explain the denoising effects of the method, and experiments on benchmark datasets demonstrate its effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The problem addressed in this paper is certainly interesting, but as the authors themselves mention, it has strong connections to areas like crowdsourcing, noisy label learning, semi-supervised learning, and ensemble learning. Each of these fields already has well-established techniques that could be adapted, with only minor modifications, to solve the problem presented here. However, the paper dismisses these connections too quickly, with phrases like \"not directly applicable to our setting\" and \"relies on the implicit inductive bias.\" I find this explanation insufficient, as it limits the paper's significance and impact. A deeper exploration of these connections, along with additional comparative experiments, would have been much more convincing.\n\nThe authors propose two objectives and frame the problem as a constrained optimization task, introducing corresponding optimization methods. While the paper's main contribution is centered on optimization through projection, I have to admit that I'm not an expert in optimization, this approach feels somewhat intuitive. 
It doesn't strike me as a particularly novel or non-intuitive solution.\n\nAdditionally, regarding the problem setup and experiments, I would like to see more details about the coverage rate and noise rate of each weak labeler and the collective coverage of all labelers. This seems crucial to the model’s performance and yet isn’t discussed in enough detail." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A new method for learning from weak labelers given information about their average errors." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning from weak labelers as constraints},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2BtFKEeMGo},\nnote={under review}\n}" }, "abstract": { "value": "We study programmatic weak supervision, where in contrast to labeled data, we have access to \\emph{weak labelers}, each of which either abstains or provides noisy labels corresponding to any input. Most previous approaches typically employ latent generative models that model the joint distribution of the weak labels and the latent ``true'' label. The caveats are that this relies on assumptions that may not always hold in practice such as conditional independence assumptions over the joint distribution of the weak labelers and the latent true label, and more general implicit inductive biases in the latent generative models. In this work, we consider a more explicit form of side-information that can be leveraged to denoise the weak labeler, namely the bounds on the average error of the weak labelers. We then propose a novel but natural weak supervision objective that minimizes a regularization functional subject to satisfying these bounds. This turns out to be a difficult constrained optimization problem due to discontinuous accuracy bound constraints.
We provide a continuous optimization formulation for this objective through an alternating minimization algorithm that iteratively computes soft pseudo labels on the unlabeled data satisfying the constraints while being close to the model, and then updates the model on these labels until all the constraints are satisfied. We follow this with a theoretical analysis of this approach and provide insights into its denoising effects in training discriminative models given multiple weak labelers. Finally, we demonstrate the superior performance and robustness of our method on a popular weak supervision benchmark." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "unsupervised learning", "weak supervision", "learning theory" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c5e6d8ada8fc16373eb250584016316253b1e229.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning from weak labelers as constraints" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2CQa1VgO52
Enhancing Deep Symbolic Regression via Reasoning Equivalent Expressions
main
Active
symbolic regression;deep reinforcement learning;symbolic reasoning
other topics in machine learning (i.e., none of the above)
3;3;3;5;5
4;3;4;2;3
2;2;2;2;2
2;2;2;3;2
3;1;3;2;3
3.8
3.2
2
2.2
2.4
-0.763763
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**1)** Can you include more types of SR models in benchmarking, or explain the advantages of DSR-Rex over AI Feynman 2.0 in capturing equivalent expressions?\n\n**2)** In equation (4), You mentioned that $\\mathbb{I}$ \\{ $\\cdot$ } $=1$ if $\\tau$ can be converted to $\\phi$, however, according to the definition in line 85, $\\phi$ is one specific expression. How do you obtain the probability of the equivalent group? Do you mean $\\phi$ represents all equivalent expressions to $\\phi$ here? In line 181, \"all possible sequences\" refers to all the sequences in the same equivalent group, or all the expressions have been sampled?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**1)** The paper is well written with clear notations, concrete technical details, and illustrative figures to explain the problem.\n\n**2)** The paper is well-motivated by an interesting topic of expression equivalency in the symbolic regression (SR) area, which is promising to attain better performance with existing SR models and develop new SR models.\n\n**3)** Theoretical analysis provides the performance lower-bound as DSR." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper identifies a problem of deep symbolic regression (DSR) for symbolic regression problems, that failure to capture equivalent expressions results in high variance of gradients and unstable training for the policy gradient estimator. The author proposed to address the problem by appending the symbolic reasoning module to a batch sampling of DSR to capture the equivalent expressions and adopting a new policy gradient method based on the group of equivalent expressions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1)** Expression equivalency problems exist in nearly all SR methods. Compared with the large landscape of SR model families, the baseline model DSR is a little bit out-of-date. For example, GPMeld, the successor of DSR in Figure 2, exhibits better performance than DSR, and a similar performance to DSR-REX. Besides, the benchmarking models adopted in the experiments only encompass Reinforcement Learning (RL) based methods and one RL and genetic programming hybrid method GPMeld. To make stronger conclusions, more types of SR models should be considered, such as AI Feynman 2.0 as cited in the paper which studies similar expression equivalency problems.\n\n**2)** Figure 3 only compares the efficiency between the steps within the DSR-REX with different architectures. The comparison of efficiency between DSR and DSR-REX would bring in more insights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The theoretical guarantees assume fair sampling of all equivalent sequences for each expression, but in practice this may not hold. Consider two expressions φ₁ and φ₂, where φ₁ finds only two equivalent forms, while φ₂ finds N>>2 equivalent forms through the designed reasoning rules. This could lead to q(φ₂) > q(φ₁) simply due to having more discoverable equivalent forms (e.g., there are lots of trigonometric equivalences compared to other operations), rather than actual learning preference. How does this potential bias affect the training process?\n\n2. What is the value of max group size parameter, and how sensitive is the method to this parameter?\n\n3. Could you clarify if the results shown in Fig. 2 (right) are averaged across all benchmark datasets or specific ones?\n\n4. How are the 10 Feynman datasets selected? Why not evaluate on standard benchmarks like SRBench and compare against more recent SOTA methods?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Interesting approach to leveraging equivalent expressions for variance reduction in symbolic regression, with supporting theoretical analysis\n* The methods to reason and find equivalent expressions are straightforward and fast, making them easy-to-use for future works." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Deep Symbolic Regression via Reasoning Equivalent Expressions (DSR-REX), an enhancement to deep reinforcement learning-based symbolic regression (DSR). 
The key innovation is leveraging numerically equivalent mathematical expressions to reduce policy gradient estimate variance while maintaining unbiasedness. The method incorporates a symbolic reasoning module that generates equivalent expressions through mathematical transformations, leading to improved convergence and performance compared to baseline deep RL methods. The authors provide theoretical guarantees for their approach and demonstrate empirical improvements on several datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Limited evaluation scope using primarily trigonometric datasets and a small subset of Feynman equations, rather than standard benchmarks like SRBench (all Feynman equation, black-box datasets)\n* Comparison against outdated baselines (DSR, neural guided GP) rather than current SOTA methods like PySR, uDSR, E2E, TPSR, and SPL\n* Insufficient analysis of how the theoretical guarantees translate to practical scenarios, particularly regarding the sampling distribution of equivalent expressions\n* Lack of ablation studies on the impact of different group sizes and reasoning rules" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Details/notations about problem setup need to be more precise. For instance, \n- for $\\tau=(\\tau_1,\\dots,\\tau_k)$, what is $k$? How is it determined? 
\n- What is each $\\tau_i$ -- is each of them a math operator/variable/coefficient? \n- In equations (1) and (2), the reward is defined for each sequence $\\tau$, but right after that, the notations override previous ones, where $\\{\\tau_1,\\dots,\\tau_N\\}$ represents multiple sequences, so here each $\\tau_i$ is a sequence, instead of an element in a sequence? \n\nPlease revise the notations and be rigorous about their meanings. \n\n2. I understand that numerically equivalent but symbolically different expressions exist, and it is reasonable to try to avoid them. However, for the motivation of this work, I was wondering how this might negatively affect DSR. Why does it make it less stable or less efficient, as the authors claim? \n\n3. Line 196, the sentence \"Since we cannot directly use the probability distribution qθ to sample a group of sequences with the\nsame reward. Instead,...\" seems to be grammatically incorrect. \n\n4. More details of the method for equivalent expressions are needed for clarity: It is claimed that \"In practice, equation 7 is not computed by enumerating every expression in Φ (as indicated by the inner summation).\" and the details are in Section 3.2. However, Section 3.2 seems difficult to understand. What are the generated equivalent expressions for? How are they used in equation 7? Or is there an equivalent way to compute equation 7 after generating the equivalent expressions? \n\n5. What if the equivalent expressions in Section 3.2 cannot enumerate all possible choices? What is the consequence, and how would limiting the number of them impact the results? \n\n6. Setup for Section 5.2: Is Figure 2 the result for one dataset, or aggregating results from multiple datasets? Please consider showing the results for all 10 datasets. \n\n7. How are the 10 tasks selected from the Feynman dataset? It would also be helpful to consider larger benchmarks like SRBench [1]. \n\n[1] La Cava, William, et al.
\"Contemporary symbolic regression methods and their relative performance.\" Advances in neural information processing systems 2021.DB1 (2021): 1.\n\n8. The high-level idea of addressing numerically equivalent expressions seems widely applicable. Would similar ideas be useful beyond the context of DSR? It would be helpful to have some discussion on the broader scope." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Improving the performance of symbolic regression is an important problem, and this paper is likely to have impact for the important method of DSR. \n\n2. The motivating point of addressing equivalent symbolic equations is interesting and insightful. \n\n3. The paper is clearly structured." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes DSR-Rex, which adds a mathematical equivalence reasoning module to deep symbolic regression to improve the efficiency and stability in the training process (variance reduction). Based on a re-expression of the objective after grouping mathematically equivalent but symbolically different equations, the algorithm uses standard encoding/decoding modules of DSR plus a novel reasoning module that enumerates equivalent expressions of the generated equations, and then modifies the training objective of DSR. It is proved the equivalence of objective functions of DSR-Rex and DSR and reduced variance of the estimated objective using DSR. The performance of DSR-Rex is evaluated on Feymann datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The presentation clarity of the paper needs to be improved, including notations and method details. Please see my detailed questions below. \n\n2. 
The motivation needs to be strengthened to justify why numerically equivalent but symbolically different equations will pose challenges to DSR training and why the proposed method addresses them. \n\n3. The experiments can be enhanced by adding more benchmark comparisons." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "##### Questions\n\n1. The third innovation point of the paper, 'Encourages RL exploration of different symbolic forms in the search space of all expressions': is this meant to make the probability of model sampling more random, like adding an entropy loss?\n2. Article line 151, additional sequences generated by a symbolic expression reasoning module. How does the symbolic expression reasoning module generate additional sequences and what is their role?\n3. In Figure 1, the **Reasoned expressions** can improve the performance of the algorithm. Please analyze the reasons for the improvement in the performance of the algorithm more carefully in the article.\n4. Although your idea is good, I think it is inappropriate for the words \"high-level idea\" to appear in an academic paper." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "##### Strengths\n\n1.
In this paper, DSR-REX has achieved good results in comparison with other baselines.\n2. Achieving variance reduction of the gradient estimator with a theoretical guarantee." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes DSR-REX, which improves the performance of the algorithm by embedding mathematical laws and equalities into deep models. Moreover, the variance of the gradient estimator is theoretically guaranteed to decrease. Finally, in various experimental tests, DSR-REX shows good performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "##### Weaknesses\n\n1. I think the chapter arrangement of this article is unreasonable. For example, the related work is actually behind the method, which makes the article very messy. I spent an hour not understanding what the author was doing. I think the **Related work** section can be moved to just after the **Introduction**. The **Motivation** part of **METHODOLOGY** can be appropriately deleted and put into the **Introduction** part...\n2. Many related works are not mentioned.\n**Reinforcement Learning for Scientific Discovery.** such as TPSR(Transformer-based planning for symbolic regression), SR-GPT(Discovering mathematical formulas from data via gpt-guided monte carlo tree search), RSRM(Reinforcement Symbolic Regression Machine)...\n\n**Symbolic Regression with Domain Knowledge:** NSRwH(Controllable neural symbolic regression), MLLM-SR(MLLM-SR: Conversational Symbolic Regression base Multi-Modal Large Language Models), LLM-SR(LLM-SR: Scientific Equation Discovery via Programming with Large Language Models)...\n3. This article only mentions symbolic regression methods using reinforcement learning, but reinforcement learning is not the only approach; other methods should appear in the comparison, e.g.
SNIP (https://doi.org/10.48550/arXiv.2310.02227), MMSR (https://doi.org/10.1016/j.inffus.2024.102681), DSO (NGGP) (https://doi.org/10.48550/arXiv.2111.00053), TPSR (Transformer-based Planning for Symbolic Regression), and so on.\n4. The authors should test their algorithm on the SRBench dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide DSR-REX’s results on SRBench[4] or SRSD-Feynman[5] to assess the model's stability under varying noise levels and complexities with scientific implications?\n\n2. What level of improvement might your method bring if applied to other models like SPL[1], TPSR[2], and uDSR[3] for training?\n\n3. Could you share the recovery rate for each expression in Chapter 5.2, Experimental Analysis?\n\n4. Could you include an ablation study on parameters in Appendix Section D?\n\n5. Could you compare DSR-REX with models like SPL, TPSR, and uDSR [1, 2, 3] in the experiments in Chapter 5?\n\n[1]Sun F, Liu Y, Wang J X, et al. Symbolic physics learner: Discovering governing equations via monte carlo tree search[J]. arXiv preprint arXiv:2205.13134, 2022.\n\n[2]Shojaee P, Meidani K, Barati Farimani A, et al. Transformer-based planning for symbolic regression[J]. 
Advances in Neural Information Processing Systems, 2022, 35: 33985-33998. \n\n[4]La Cava W, Orzechowski P, Burlacu B, et al. Contemporary symbolic regression methods and their relative performance[J]. arXiv preprint arXiv:2107.14351, 2021. \n\n[5]Matsubara Y, Chiba N, Igarashi R, et al. Rethinking symbolic regression datasets and benchmarks for scientific discovery[J]. arXiv preprint arXiv:2206.10540, 2022." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The performance comparison between DSR-REX and previous models like DSR and NGGP highlights its superiority. The complexity of the case study equations effectively showcases the model’s symbolic regression capabilities. Additionally, the paper provides clear details of the algorithm and experimental processes." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The performance comparison between DSR-REX and previous models like DSR and NGGP highlights its superiority. The complexity of the case study equations effectively showcases the model’s symbolic regression capabilities. Additionally, the paper provides clear details of the algorithm and experimental processes.\n\nWeaknesses:\nQuestions:" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks comparisons with other tasks beyond DSR, such as SPL[1], TPSR[2], and uDSR[3], across different benchmarks like SRbench[4]. It also does not discuss how this method could be applied to these models.\n\n[1]Sun F, Liu Y, Wang J X, et al. Symbolic physics learner: Discovering governing equations via monte carlo tree search[J]. arXiv preprint arXiv:2205.13134, 2022. \n\n[2]Shojaee P, Meidani K, Barati Farimani A, et al. Transformer-based planning for symbolic regression[J]. 
Advances in Neural Information Processing Systems, 2023, 36: 45907-45919. \n\n[3]Landajuela M, Lee C S, Yang J, et al. A unified framework for deep symbolic regression[J]. Advances in Neural Information Processing Systems, 2022, 35: 33985-33998. \n\n[4]La Cava W, Orzechowski P, Burlacu B, et al. Contemporary symbolic regression methods and their relative performance[J]. arXiv preprint arXiv:2107.14351, 2021." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a deep reinforcement learning approach for symbolic regression that reduces the instability of policy gradients by using numerically equivalent but symbolically distinct expressions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Deep Symbolic Regression via Reasoning Equivalent Expressions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2CQa1VgO52},\nnote={under review}\n}" }, "abstract": { "value": "Symbolic regression seeks to uncover physical knowledge from experimental data. Recently a line of work on deep reinforcement learning (DRL) formulated the search for optimal expressions as a sequential decision-making problem. However, training these models is challenging due to the inherent instability of the policy gradient estimator.\nWe observe that many numerically equivalent yet symbolically distinct expressions exist, such as $\\log(x_1^2 x_2^3)$ and $2\\log(x_1) + 3\\log(x_2)$. \nBuilding on this, we propose Deep Symbolic Regression via Reasoning Equivalent eXpressions (DSR-Rex). The high-level idea is to enhance policy gradient estimation by leveraging both expressions sampled from the DRL and their numerically identical counterparts generated via an expression reasoning module. 
\nOur DSR-Rex (1) embeds mathematical laws and equalities into the deep model, (2) reduces gradient estimator variance with theoretical justification and (3) encourages RL exploration of different symbolic forms in the search space of all expressions.\nIn our experiments, DSR-Rex is evaluated on several challenging scientific datasets, demonstrating superior performance in discovering equations with lower Normalized MSE scores. Additionally, DSR-Rex computes gradients with smaller empirical standard deviation, compared to the previous DSR method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "symbolic regression", "deep reinforcement learning", "symbolic reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6d7ba65b73dac30efbf1b0454b8adda02b05c37f.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d21977842d820cb9bb9b97bebe4647c76ce57fa0.zip" }, "title": { "value": "Enhancing Deep Symbolic Regression via Reasoning Equivalent Expressions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2CYZkawsmz
MDTREE: A Masked Dynamic Autoregressive Model for Phylogenetic Inference
main
Active
Phylogenetic Inference;Genome Language Model;Transformer;Graph Structure Generation;DNA;Large Language Models
applications to physical sciences (physics, chemistry, biology, etc.)
3;5;5;6;6;8
2;2;3;5;3;5
2;3;3;2;3;3
3;3;3;3;3;4
1;2;1;3;4;3
5.5
3.333333
2.666667
3.166667
2.333333
0.801784
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper improves on existing methods, notably ARTree, for deep learning for phylogenetic inference.\n- The proposed central problem, that of finding a proper taxon insertion order, is an important piece of any phylogenetic inference algorithm and deserves more attention in deep learning-based approaches.\n- The use of language models to extract biological priors is quite novel and general.\n- Experiments are relatively extensive, by the standards of deep learning-based approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a new deep learning-based method that incorporates a language model to extract biological priors for finding a node insertion ordering. It improves on state-of-the-art methods using autoregressive models and provides comprehensive experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper's main contribution is methodological, but compared to the most closely related method, ARTree, there are only marginal improvements across datasets. 
This is also in light of the fact that the proposed methodology is substantially more computationally intensive, both in terms of runtime and carbon footprint. With so much more computation, it is not unfair to expect a more pronounced difference. Perhaps it is advisable to find conditions where dynamic node ordering strongly affects the tree reconstruction methods. If such conditions are too hard to find, perhaps the problem is not as severe as stated in the paper.\n- The paper's key insight (compared to the literature) is a method to learn an insertion ordering of the taxa. However, it is not clear that the proposed methodology for finding such an ordering is clearly advantageous compared to other orderings. The baselines compared against use a lexicographical ordering, which is just arbitrary. What happens when a different ordering is used? \n- Related to the topic of choosing the right taxa ordering: theoretically, given just one correct planar ordering of the taxa (draw the true tree onto the plane and number the leaves from left to right), there is a trivial greedy algorithm to find the correct tree structure and branch lengths from tree distances approximated from DNA sequences. As a result, finding the correct order is one of the hardest subproblems of tree inference. \n- There is another line of work that uses the Prim ordering of the distance matrix between taxa as the order in which taxa are added to the tree, implemented with maximum likelihood heuristics (Zhang, Rao, Warnow 2019 Constrained incremental tree building: new absolute fast converging phylogeny estimation methods with improved scalability and accuracy; Le et al. 2021 Using Constrained-INC for Large-Scale Gene Tree and Species Tree Estimation). These are not deep learning-based methods, so they are not directly comparable, but at least a discussion of the existing orders that have been considered is warranted. It would also be interesting to see how the Prim ordering works in these experiments."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1.\tHow does node count measure runtime efficiency in Figure 1, and why is a lower node count preferred?\n2.\tHow do we get the initial graph structure that is used as input to DON?\n3.\tIs DON a completely separate and preceding step from the dynamic AR tree construction, or is the order determination step rolled out iteratively after each node insertion step?\n4.\tSee other questions in Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper studies an important problem in bioinformatics concerning phylogenetic tree inference. The main highlight is the introduction of DON, which infers node orders and enables sampling multiple nodes for parallel computation; it makes use of the strong prior in pretrained GLMs and might be more flexible than the fixed-order AR method. \n\nThe method is novel and adds a significant improvement over the AR method. Additional techniques are introduced to further improve efficiency, generation consistency, and optimization. 
The perspective of studying the influence of node orders on phylogenetic accuracy is also novel.\n\nThe authors conduct extensive experiments across multiple tasks and datasets related to phylogenetic tree inference and consistently outperform the baselines, including ARTree, which is likely the previous SOTA. They also show strong metrics on computational efficiency and present a thorough ablation study showing the importance of DON." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method for phylogenetic tree inference which extends beyond autoregressive models and is claimed to improve both computational efficiency and accuracy. Specifically, it focuses on learning a better ordering strategy for adding a new node into the phylogenetic tree from GLM priors, as opposed to using fixed orders (lexicographical order) in autoregressive models. The authors frame the problem as masked dynamic autoregressive tree generation and introduce a Dynamic Ordering Network (DON), which is an absorbing diffusion model for learning the node-adding order from pre-trained genomic language models (GLMs) to better leverage the biological and evolutionary information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper studies a very specific task, the phylogenetic tree generation problem, within the bioinformatics domain. Although the task might be important in the domain, the introduced methodology seems to be highly specialized for this problem alone, which might limit its significance in the general graph generation area. 
\n\nThe biggest concern lies in the writing clarity, particularly the DON description. Multiple pieces of key information are missing and several notations are inconsistent across the main text. Firstly, the DON module seems to assume some graph structure already known among the sequences (e.g., Figure 2.A has a ring structure). How is this graph constructed? It cannot be the tree structure, as the phylogenetic tree has not been generated yet at this stage.\n\nThe presentation of DON in 3.1 could be greatly improved, with many key notations and parameters unexplained. The biggest gap is the lack of a proper definition of the forward and backward diffusion process, with a clear correspondence to time t. It starts with directly “updating node features $h_t$” without defining what t means. It is also not clear what positional encoding $PE_t(g_i)$ means; it is used with subscript $t$, but isn’t the position of node i fixed? How does PE vary with time t, and why does it vary with t? It is not clear whether the transition probability in (2) defines a forward corruption process from t=N to 0 or from t=0 to N. How can we make sure only a single node is selected to be absorbed at each time step? The notation of $h_t$ is also confusing: is it a single node embedding or an embedding matrix for all nodes? There is a mixed use of $h_t$ and $h_i$.\n\nThe node order generation process after the entire graph is absorbed is also not explained well. Equation (3) defines a conditional probability between node embeddings $q(h_t|h_0, h_{(<t)})$; how can this be used for order determination? Shouldn’t one predict the probability of unmasking a node in a diffusion setting? It seems the transition matrix Qt only allows jumping from a non-masking state to a mask. How can this be reused for computing a cumulative transition matrix in the opposite direction (i.e. 
from masked to unmasked)?\n\nFinally, it is not clear whether the DON is trained (e.g., with a certain score-matching loss, and if so, what is the training target given that the optimal order is not available ahead of time?), or whether it is just a hand-crafted discrete forward diffusion process that is completely determined by the hyperparameters $\\beta_{t,i}$. There is no description regarding how the network parameters of the relational graph convolutional network used for node feature computation are trained either. There is a large discrepancy between what is described in 3.1 and training loss (10) in 3.4, where $q_\\sigma(\\sigma_t|G_0,\\sigma_{(<t)})$ suddenly appears without definition. \n\nIn section 3.2 tree construction, a multi-head attention block with a query matrix Q is introduced as MHA($Q, h_i, h_i$); what is the goal of Q here? It is initialized to an identity matrix with size (N-3)*100, but is not mentioned again later. \n \nThere are several typos and inconsistencies in naming terminology. E.g., the DON is sometimes referred to as the Diffusion Ordering Network and sometimes the Dynamic Ordering Network." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Why is G_t a graph? 
I would have thought it is a DAG.\n- How do you define \\tau and B_\\tau mathematically?\n- The mapping F takes a single species sequence to a tree topology, but the text states that F depends on G, which is not reflected in the notation. In addition, why is each species sequence sent to a separate tree topology? There is only a single evolutionary timeline we want to analyse.\n- Line 186: beta increases monotonically as what value goes from 0 to 1?\n- What positional encoding function PE is being used?\n- Is the categorical distribution in Eq 3 the forward process of the diffusion process?\n- What is the MLL metric? Please either expand the acronym or give a citation.\n- Table 3: What was the hardware used?\n- I presume alpha in Equation 5 is highly sensitive to N? How come the authors choose to take the softmax of a softmax rather than directly adding the alpha term to L_i?\n- What is the importance of branch lengths?\n- You have 3 distinct components; could you clarify how the gradient flows? I presume that at the boundary between the modules the gradient is stopped due to the discrete decision boundary? If so, how does the ordering network, for instance, get any signal?\n- What modules are pretrained? Which are trained from scratch? How do you initialise the weights?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Several new ideas on the technical side.\n- (Marginally? hard to judge with the units and lack of error bars) better results.\n- Improvements in run-time." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new node ordering network to better utilise autoregressive models for generating phylogenetic trees." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "TLDR: I am willing to raise my score if the results are clarified and presented more convincingly (1. error bars 2. a clearer results table and figures 3. a better presentation of the baselines); the method is presented in a way that is much clearer (I think starting from the problem, then the objective/loss function, then the 3 main components, etc. might help); and the method is better placed in the related work with a better discussion of the limitations. From my limited understanding of the paper it seems the technical contribution is there (albeit a little hard for me to judge), but the presentation is still far from the quality necessary for acceptance.\n\nIt should be said that I cannot judge whether the baselines are chosen appropriately.\n\nWeaknesses:\n- There are no statistical error margins (e.g. standard deviation) for the results. This is okay if the computational cost is huge (e.g. often the case in LLMs); if so, please state this clearly, as well as the running cost in GPU hours, the GPUs used, etc.\n- Table 2 confuses me. For instance, for DS1 three numbers are bold, but not even the 3 highest ones (when MLL higher is better according to the caption); e.g. VBPI-GNN also has the third highest score -7108.41. Also, the differences between some of the results seem incredibly marginal. Furthermore, negative MLL might be clearer rather than having a minus sign in front of every number.\n- The method has many components, which on the one hand is impressive, in that the authors managed to build this system and make it work, but also comes with limitations that aren't adequately discussed in my opinion. 
There are a great number of hyperparameters, but far too little space is dedicated to ablating them or acknowledging the difficulty of choosing them.\n- The related work is quite brief given that the model borrows many techniques from the related work it compares against. A clearer delineation would be helpful to the reader.\n\nClarity:\n- The way indices i and t are re-used is confusing.\n- It is imperative to give the citation of each baseline method considered. In the paper they are merely named, but not cited. This allows for confusion if two methods share the same name, for instance, and is generally poor practice.\n- The table captions could be improved: what are the numbers in parentheses in Table 2? What does the grey background mean? Why are multiple numbers in bold per dataset?\n- The y-axis range in Figure 3 makes the results really hard to discern.\n\nMinor:\n- Line 164 \"As discussed, ...\" please add a link to where this is discussed; this helps non-linear reading, which is standard for papers.\n- Equation 1 LHS says h_i, the text says h_t" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Why is it better that \"closely related species should be placed earlier in the tree\" (175) versus, e.g. simply clustering together? Is this robust to all topologies? 
For instance, what happens if you have two very distantly related subtrees, each of which has many species that are closely related to one another?\n * Similarly, I am interested in the worst-case performance of the clustering algorithm. For instance, if you had a linear tree, would you still be able to parallelize your algorithm?\n* What should the reader take away from the Angiosperm tree in Figure 8? How does this compare to/improve on the trees generated by other models?\n* Bayesian phylogenetics methods will typically include a proof that their estimators are consistent and unbiased. Is it possible to do something similar in the case of this method? If not, the authors should justify why it is worth abandoning such guarantees in favor of their model.\n* Will the model be made publicly available? If so, is it easy to use and install? Does it work on a variety of machines and operating systems?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* **Strong benchmark performance** is the main strength of this paper. Across almost all dataset + task benchmarks, the authors claim state-of-the-art performance. This is shown in Tables 1--4 and Figures 3--4.\n* **Extensive secondary benchmarks** characterize MDTree's runtime reduction, parsimony, topological diversity, and bipartition frequencies compared to ARTree (and sometimes other models).\n* **Biological ordering** is a desirable addition to autoregressive models, ensuring that the phylogenetic tree construction task can aggregate as much information across nodes of the tree as possible. 
This addresses a key limitation of previous autoregressive methods.\n* **Parallelism**, enabled by the biological ordering, is a desirable property for a computational method and appears to improve processing speeds substantially (as shown in Figure 1).\n* **Use of embeddings** eliminates the restriction, common in other models, that all sequences be the same length; it is also likely to unlock improvements in MDTree \"for free\" as better genomic foundation models are trained." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present MDTree, a technique for inferring phylogenetic trees (topology + branch lengths) from a set of genomic sequences. To motivate their model, the authors reframe phylogenetic tree construction from the perspective of DART: dynamic autoregressive tree generation, which differs from its autoregressive predecessors by incorporating a node order learning step. To this end, MDTree uses a Diffusion Ordering Network (DON) that uses genomic language model embeddings to sort sequences. This enables better autoregressive generation and even makes it possible to add nodes in parallel. The authors benchmark MDTree on 8 classic phylogenetics datasets, comparing it to classical MCMC, structure generation, and autoregressive methods. In almost all benchmarks, they show state-of-the-art performance as well as improvements in secondary properties like computation speed, parsimony, and diversity." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **Relationship to ARTree** is somewhat unclear, and although comparisons favor MDTree, the ARTree score is oftentimes quite close.\n * The authors should make it explicit what distinguishes MDTree from ARTree.\n * The ablations should make it clear which ablated forms of MDTree (if any) are worse than base ARTree\n * Since the benchmark scores of MDTree and ARTree are often quite close together, I am less impressed by \"state of the art\" results. If the authors could convince me why this position is mistaken, and their method is a *significant* improvement over ARTree, I would be amenable to improving my score.\n* **Lack of motivation** for specific architectural choices. Most notably, the diffusion ordering network (DON) is justified in terms of the limitations of other autoregressive methods like ARTree; however, the specific choice of architecture is presented as arbitrary/self-evident. To this end, I have several questions:\n * What other options have the authors considered/tested? Why was the DON ultimately chosen?\n * How does the DON compare to a simple baseline that produces biologically meaningful orders without relying on deep learning? The authors may have a better sense of what a good baseline might be. However, I propose the following baseline as a reasonable starting point:\n 1. Compute pairwise edit distances between genomic sequences (e.g. Levenshtein distances)\n 2. Perform agglomerative clustering on the pairwise distances to get a crude tree\n 3. Use an inorder traversal of the resulting tree to sort the leaves. 
This is your input order.\n * You cite \"evidence of robustness across different taxa orders\" in ARTree (line 163), but here you simply say \"the influence of node orders on phylogenetic accuracy has not been thoroughly examined.\" The ablation-based evidence presented in Table 7 suggests that node order has a weak influence on model performance, but it would be more convincing to see a non ablation-based characterization (e.g. what is the variance in MLLs for random permutations of node order?)\n* **Unintuitive choice of representations** to seed the DON. It is not apparent that genomic LM representations are the best candidates here, as the LMs are not actually trained to estimate evolutionary distances. Moreover, vector space representations of genetic sequences will always incur distortion, as the geometry of phylogenetic trees is inherently non-Euclidean as a result of the four-point condition.\n* **The DART formulation** seems unnecessary. What is the advantage of reformulating phylogenetic tree construction (which we have a perfectly good description of already: learning a topology and a set of branch lengths), besides that it attempts to justify the use of a DON? If that is all, I would argue that \"proper node orders improve phylogenetic inference\" is a sufficient claim.\n * If other problems in phylogenetics are better viewed from the DART perspective, I would be interested in such an example. This would go a long way towards changing my mind on the value of this part of the paper.\n* **Presentation** is unrefined throughout:\n * Figures are often cramped, and combined into figures with no clear logic (e.g. Figure 1 includes a cartoon and a runtime comparison)\n * Model details are crammed in pages 4 and 5. It is unclear without substantial cross-referencing and careful reading how all of the pieces fit together. 
While I understand the need to fit within the page limit, I would be interested in seeing the full architecture described in the Appendix.\n * \"Mask rate modulated by a cosine function\" (200) seems to be an essential detail of the autoregressive tree, but the equation is not given anywhere\n* **Related work** does not discuss Bayesian methods since VBPI (except for VaiPhy). There have been many developments in this field since then.\n* **Missing experiments**: oftentimes, certain models are missing from evaluations. For instance, MrBayes and structure representation models are missing from Table 1; many models are missing from Table 4; comparisons in terms of runtime, diversity, bipartition, etc., are only run for 2-3 models at a time. It is possible that these results are infeasible to generate, but the authors should make this explicit." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does DON determine the node order based on genomic embeddings? How much does it impact the final inference if the species sequence order differs?\n2. How does the mask rate selection impact the parallel computation of node insertion and overall model running efficiency? \n3. There is no summarization of the sequence divergence and evolutionary relationship distances for the datasets used in this study. 
It is necessary to evaluate the impact of sequence divergence on the model performance. The authors could also consider adding experiments on simulated datasets to better control the sequence divergence. \n4. What is the purpose of generating highly diverse tree topologies in biological research? What type of practical application needs such diverse tree topologies instead of a highly confident and accurate phylogenetic tree? \n5. Consider adding bootstrap analysis for phylogenetic support estimation to better indicate how confident the inferred phylogenies are." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This is a quite impressive work that solves multiple pain points in phylogenetic inference. \nThe idea proposed by this paper is innovative and effective, improving phylogenetic tree inference in both efficiency and accuracy. \nThe paper is well organized with a clear writing style. It is easy to follow the author’s idea. \nThe experiment design is comprehensive. The authors considered multiple aspects of phylogenetic inference, such as running time, tree quality, model robustness, and empirical study. The results are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new framework for phylogenetic inference, MDTree. Traditional methods like MCMC and previous deep learning methods are limited by their high computational cost, low efficiency, and low inference accuracy. The new framework uses multiple techniques to effectively resolve these limitations, including a diffusion ordering network (DON) that generates node orders according to evolutionary relationships, and an autoregressive construction module with a dynamic masking mechanism that generates tree structures in parallel. 
The model uses a dual-pass traversal to estimate the tree branch lengths. \nThis study includes an extensive evaluation which indicates that MDTree performs robustly on datasets with varying numbers of taxa and sequence lengths. Its computational cost and running time outperform those of state-of-the-art methods. This study also includes a comprehensive ablation study on model components and hyperparameters to demonstrate the contribution and robustness of the modules." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are certain weaknesses in this study. The complex architecture and multi-layered optimization requirements may limit its practical application. It is worth considering packaging the framework into a user-friendly package or online service. This would not only help people who are interested in this study, but also increase the impact of this impressive work. \nSome details about the method and the evaluation metrics are omitted from the paper, such as how DON determines the node order based on genomic embeddings. What is the impact of sequence divergence and species evolutionary relationship distance on the node order and inferred phylogenies? Why is generating highly diverse tree topologies necessary, especially in biological analysis?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My suggestion is to address the weaknesses above. \n1. describe the baseline method settings\n2. make the code available to reproduce the results\n3. compare all methods on each metric or explain why a certain method is not included\n4. review the related work and discuss existing methods and gaps more clearly\n5. proofread the paper" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors proposed a novel method and conducted a comprehensive evaluation by comparing MDTree with several baseline methods across various datasets and metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces MDTree, a novel approach to phylogenetic tree inference. MDTree addresses the issues of model complexity and computational efficiency. By leveraging a Diffusion Ordering Network, MDTree dynamically learns the optimal node order, enabling more accurate and efficient tree construction. This approach incorporates graph neural networks (GNNs) and language modeling techniques to capture complex evolutionary relationships. Additionally, a dynamic masking mechanism allows for parallel node processing, further accelerating the inference process. The authors benchmark the performance in several aspects to show the effectiveness of MDTree." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper's experimental evaluation is hindered in several respects. 
Firstly, the parameter settings for baseline methods are not well-documented, potentially affecting the reliability of the reported performance. Secondly, the absence of publicly available code limits reproducibility and hinders independent verification of the results. Additionally, the methods compared in each table are inconsistent, lacking clear explanations for these choices. For example, while MrBayes is included in Table 2, it is absent from Table 1, raising questions about the rationale behind these decisions.\n\nWhile the paper introduces a novel approach to phylogenetic tree inference, the literature review in the introduction appears to conflate different concepts. For instance, the discussion of Solis-Lemus & Ané, 2016 and Zhang et al., 2018, which focus on network inference from gene trees/multiple sequence alignments under the multispecies network coalescent model, seems to be mixed with the concept of gene tree inference from sequence data, the primary focus of the proposed MDTree method. A clearer distinction between these approaches would enhance the paper's clarity and contextual understanding.\n\nBesides the major concerns, below are some minor concerns.\n\nFigure 1: the left and right panels are swapped. The run-time unit is missing.\n\nThe name of the proposed method is not consistent across tables, e.g., Table 1: MDTree, Table 2: Ours." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mdtree,\ntitle={{MDTREE}: A Masked Dynamic Autoregressive Model for Phylogenetic Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2CYZkawsmz},\nnote={under review}\n}" }, "abstract": { "value": "Phylogenetic tree inference, crucial for understanding species evolution, presents challenges in jointly optimizing continuous branch lengths and discrete tree topologies. 
Traditional Markov Chain Monte Carlo methods, though widely adopted, suffer from slow convergence and high computational costs. Deep learning methods have introduced more scalable solutions but still face limitations. Bayesian generative models struggle with computational complexity, autoregressive models are constrained by predefined species orders, and generative flow networks still fail to fully leverage evolutionary signals from genomic sequences. In this paper, we introduce MDTree, a novel framework that redefines phylogenetic tree generation from the perspective of dynamically learning node orders based on biological priors embedded in genomic sequences. By leveraging a Diffusion Ordering Network to learn evolutionarily meaningful node orders, MDTree autoregressively positions nodes to construct biologically coherent trees. To further push its limits, we propose a dynamic masking mechanism that accelerates tree generation through parallel node processing. Extensive experiments show that MDTree outperforms existing methods on standard phylogenetic benchmarks, offering biologically interpretable and computationally efficient solutions for tree generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Phylogenetic Inference", "Genome Language Model", "Transformer", "Graph Structure Generation", "DNA", "Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b44e9c9a6d482336ccdca08b6f4426b933cbae47.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MDTREE: A Masked Dynamic Autoregressive Model for Phylogenetic Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2CflgSMLoK
Data-Efficient Training by Evolved Sampling
main
Active
learning efficiency;evolved sampling;data selection;loss dynamics
other topics in machine learning (i.e., none of the above)
3;5;5;6
3;4;4;4
2;3;3;3
2;2;2;2
3;3;3;4
4.75
3.75
2.75
2
3.25
0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- ES shows a reduction in training time without loss in performance, which is promising for computationally expensive tasks.\n- The use of loss evolution for sampling is an interesting approach that addresses the shortcomings of previous static and simple dynamic sampling methods.\n- The results on datasets with noisy labels are interesting.\n- Evaluation is sufficiently complete." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes \"Evolved Sampling\" (ES), a dynamic sampling method aimed at improving data efficiency during training. The method selects informative samples based on the loss values during training using a decoupled Exponential Moving Average (EMA) scheme. This reduces the number of samples needed for backpropagation, saving up to 40% in wall-clock time while maintaining model performance. The method was tested on a thorough evaluation across many different models (ResNet, ViT, ALBERT) and datasets (CIFAR, ImageNet, GLUE)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited novelty: the paper largely builds on existing sampling concepts with incremental improvements.\n- The description of the method can be simplified considerably.\n\n- While the method helps reduce the number of backpropagation steps performed during training, it still requires a forward pass of all samples through the network, which is still computationally expensive. Indeed, while the results are positive, the measured gains are not particularly game-changing.\n\n- Minor: I am not sure \"evolved\" is the right term here; \"evolved\" and \"ES\" are strongly reminiscent of evolutionary optimization and \"Evolution Strategies\", which can introduce confusion.\n\n- It would be interesting to read more about the increased robustness to label noise; I might have expected the proposed method to perform worse, since samples with wrong labels would yield higher losses (unless/until the network memorizes the whole training set)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors provide more insight into the sensitivity of the hyperparameters $(\\beta_1, \\beta_2)$ across different datasets and architectures?\n\n2. 
ES appears computationally feasible for single-machine training, but would its performance gains hold up in distributed training settings?\n\n3. ES with Pruning (ESWP) combines batch and set-level selection, but it is not entirely clear how this combination impacts overall performance in practice.\n\n4. How can ES be used for self-supervised training?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novelty - The paper introduces decoupled exponential moving averages, which leverage first-order loss differences for more stable and robust sampling, effectively combining ideas from loss and gradient-based sampling with robust optimization principles.\n\n2. Quality - The paper provides theoretical proofs and experiments across models and datasets, demonstrating consistent gains in efficiency and robustness, especially under noisy labels.\n\n3. Writing - The paper is clearly structured, with well-organized sections and visual aids that clarify ES’s advantages over traditional methods, though some theoretical sections may be dense for general readers.\n\n4. Relevance - ES offers practical relevance for reducing computational costs without accuracy loss, making it impactful for both research and industry applications in large-scale ML." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a method called Evolved Sampling (ES) for efficient data selection in training machine learning models. The core contribution is a dynamic sampling framework that identifies informative data samples based on the evolution of loss values throughout the training process. By adjusting the selection of data at the batch level according to changes in loss values, ES significantly reduces the required training time while maintaining model accuracy." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Significance - Much of the computation cost of foundation models occurs during pre-training, which is mostly self-supervised (auto-regressive, contrastive learning, auto-encoders). All the experiments in the paper are for labeled datasets, which represent fine-tuning use cases where the computation cost is not a major concern. Thus, the significance of the method is not clearly demonstrated.\n\n2. Scalability - The paper claims that ES has only modest overheads, but lacks an in-depth analysis of computational and memory costs associated with the decoupled EMA calculations, especially in large-scale tasks or datasets.\n\n3. Assumptions - Some assumptions in theoretical analysis may not hold in practice, e.g., smoothness of loss functions, especially for complex architectures and non-convex losses. A discussion of how the method performs when assumptions deviate from theory, or empirical analysis on non-smooth tasks, would help clarify the applicability.\n\n4. Hyperparameter Sensitivity - Introducing 2 hyperparameters could be a major concern for the proposed method. The current analysis (Figure 5) is too limited, e.g., what's the impact of hyperparameters on efficiency? Besides, it does seem that hyperparameters introduce a large variance in performance. For fair comparisons, the cost of searching hyperparameters should also be considered in the overall task (e.g., on a smaller dataset to test hyperparameters and then apply to a large dataset.)\n\n5. Lack of Baselines for Noise - In the experiments on label noise, ES performs well, but the comparison is limited mainly to non-specialized sampling methods. \n\nnit - ES in this literature often refers to 'Evolution Strategy', so would be nice to have a different abbreviation for the proposed method." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I would like to get a clarification regarding Eq. 3.8. We have access to the current loss of an example to decide whether or not we want to sample it for that epoch. I interpret this as doing the forward pass on an example that we later deselect to be part of the backward pass calculation. This means that we still maintain the gradient of that example until we deselect it. The main cost saved then is the amount of bwd passes. In Algorithm 1, the necessity for forward passes seems to be mitigated in Line 284 at least during the pruning by taking the historically weighed score s instead of the weight function. This seemingly implies that to select examples, only historic losses are considered. But this poses yet another question: How do we adjust an example’s loss if the example is no longer selected? Because then we yet again will need a fwd pass and we could have calculated the full weight. This seems to be what is done in 289; i.e. only the loss over the batch examples is calculated. The only thing to mitigate the issue of disregarding bad losses (almost) completely is in Remark 1 and discounting the existing values. Either way, this introduces non-trivial and dead-lock-ish dynamics I would like to see investigated." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "### Originality:\n\nThe main proposition lies in the recursive definition of an Exponentially Moving Average over the losses of individual examples to deselect them from the training process to gain speedups and improved; i.e. stable, learning dynamics. The single-level EMA itself is a well-known approach that is applied to this setting with a recursive definition. The other techniques, i.e. annealing and pruning, are mere adaptations from prior work and are only a minor contribution to the originality. The bridge between batch and set-level data selection, which their method allows them to do is a nice feature, but not the main contribution. The theoretic analysis is interesting overall. But insights like decoupled EMA is in fact a convolution over hyperparameters’ powers of historical losses – so their results are not really surprising.\n\n### Quality: \n\nQuite a few experimental issues are present, which I will detail in the weaknesses section.\n\n### Clarity:\n\nOverall the paper is clearly and concisely written. With the main exception of when exactly we are collecting the loss values of pruned examples; which might bias the calculation of their weight.\n\n### Significance: \n\nThe efficiency of modern machine learning algorithms and neural networks is a great issue, as it results in huge energy demand. Reducing the footprint is a critical point. One angle of attack pursued in this paper is being selective about the order and the subset of consumed examples. This is indeed an important and interesting avenue." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel framework called Evolved Sampling (ES) (and with Pruning ES-WP) aimed at enhancing data efficiency in machine learning. 
The authors propose a dynamic sampling method that selects informative data samples based on the evolution of losses during training. This approach aims to reduce backpropagation time while maintaining model performance across various architectures (ResNet, ViT, ALBERT) and datasets (CIFAR, ImageNet, GLUE). Key contributions include: (i) Dynamic Sampling: ES utilizes historical and current loss differences to inform data selection, allowing for batch-level sampling without the need for pre-trained models. (ii) Efficiency Gains: The method achieves up to 40% reduction in wall-clock time during training and shows improved accuracy (approximately 20%) in scenarios with noisy labels; and (iii) Theoretical Justifications: The authors provide theoretical insights into how their method alleviates loss oscillations and can be viewed through the lens of distributionally robust optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Besides the weak overall originality, my main criticism is connected to the empirical evaluation:\n\nThe necessity for a burn-in period, where standard training must occur to initialize the loss adequately before applying the Exponential Moving Average (EMA) scheme, points to a limitation in the approach. This dependency on a specific loss initialization suggests that the method might not be entirely robust across various starting conditions. It would benefit the study to explore a more systematic ablation of this burn-in period as a hyperparameter. Additionally, understanding whether variations in the burn-in length affect performance could provide insight into the model's dependency on initialization stability and might even reveal opportunities to shorten or eliminate this requirement.\n\nAnother area where clarity is needed is the reporting of statistical measures. 
The number of seeds used for evaluation and averaging remains unspecified, and no standard deviations are provided. This omission raises questions about whether noise rather than true performance gains might influence observed differences in performance between the proposed method and baseline competitors. Including standard deviations would allow readers to assess the consistency of the results, providing a clearer understanding of the variability in performance.\n\nThe use of wall-clock time as a measure of speedup also presents challenges. Since wall-clock time is influenced by multiple factors, including the specific point of reference and the extent to which reference performance is met or exceeded, this metric is not straightforward. No details are provided on the variability of wall-clock measurements, which could make these results more challenging to interpret. An additional, complementary metric—such as the number of examples seen (similar to token counts in LLM training)—could yield a more direct and comparable measurement of processing efficiency, especially since the baseline approach involves higher computational requirements.\n\nRegarding robustness to label noise, Figure 3a indicates that while the method outperforms the baseline, the speedup advantage is lost under noisy conditions. This finding implies that the method may benefit from integrating the baseline up to its peak performance before switching to the proposed scheme. Such a hybrid approach could potentially leverage the best of both methods, maintaining efficiency without sacrificing performance under challenging conditions.\n\nIn Figure 3b, the gradients under comparison lack clarity. It is uncertain whether the gradients displayed encompass all examples (both corrupted and uncorrupted), necessitating additional forward passes and potentially affecting wall-clock measurements, or if the results only include corrupted examples selected by the method. 
The latter case would introduce a selection bias, affecting the integrity of the reported results. A more informative and balanced approach would be to calculate the proportion of non-informative examples selected per epoch, providing a relative measure of their influence on learning. This would give a clearer picture of how these less useful samples affect training efficiency and could allow for more balanced comparisons.\n\nIn Table 5, the ground-truth results are presented without a corresponding baseline for corruption-free performance. Including such a baseline would clarify the upper bound achievable in the absence of noise, providing a benchmark against which the \"superior\" performance in noisy conditions could be assessed. \n\n\nFurther minor Issues:\n\n* Ablations: \n * choices of \\beta. The presented heatmap tables are way too broad. I suggest using some Sobol or Latin Hypercube design and then reporting the heat surfaces. This way, we get a far more fine-grained perspective on the hyperparameters’ behavior.\n * Pruning is not ablated\n* The notation 0^+ and 1^- should probably be introduced or replaced by intervals (0, 1) instead of [0, 1]\n* The notation is at times slightly overburdened (e.g. the additional vector notation in 320), instead of just writing the actual values in there directly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can the key idea of the paper: optimization of the data space, be more cohesively or clearly presented? Currently, it's still difficult to understand the key idea of the paper without significant theoretical and literature knowledge." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper builds upon good theoretical foundations.\n\nThe paper well cites related work and the literature that leads to this contribution.\n\nThe paper creates an efficient heuristic based approach to solve a practical problem which rests on the previous theoretical contributions.\n\nThe paper well considers ablation studies and robustness studies.\n\nThe paper's theoretical arguments are well constructed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper functions as a well-thought-out \"momentum optimizer\" in the data space. Instead of considering the presentation of data as fixed as in SGD, we take a more expansive view and think of the data space as another component of the model to optimize.\n\nThe work is somewhat novel in the large model training space." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This should be better justified: This can be inefficient since different samples may have varied importance. 
Can you look at the influence functions or coresets literature?\n\nThis statement needs to be better motivated and explained: why is evolved sampling \"natural\"?\nIn general machine learning tasks, the typical behaviors of loss curves often appear decent trends overall, but can oscillate meanwhile due to certain noises. This introduces the sensitivity or instability issue of the sampling scheme (3.6). A natural smoothing operation is to use the exponential moving average (EMA) of losses\n\nThe proof presentations are somewhat lacking. It's difficult for me to quickly match up concepts from the optimization literature to some of the theoretical arguments made, for example, the EMA to the minimax problem.\n\nIt may be worthwhile to explain this better with regard to the control theory literature; specifically, control theory also deals with oscillations and rectifies them in similar manners:\n\nDecoupled EMA. To sufficiently leverage the loss dynamics in a more robust sense, we propose to calculate the sampling probability as\np_i(t) ∝ w_i(t) = β_1 s_i(t − 1) + (1 − β_1) ℓ_i(θ(t)),\ns_i(t) = β_2 s_i(t − 1) + (1 − β_2) ℓ_i(θ(t)), s_i(0) = 1/n (3.8)\nwith β_1, β_2 ∈ [0, 1] as two hyper-parameters. Here, the intermediate series {s_i(t)}_{t∈N}, updated in the EMA scheme, is also referred as the score (for the i-th sample). The scheme (3.8) is the so-called decoupled EMA, which reduces to (3.7) when β_1 = β_2 = β. In Figure 1, it is shown by the red curve and appears an “interpolation” between the original loss and single EMA: When losses oscillate, the decoupled EMA reacts moderately by not only capturing detailed dynamics of losses, but also remaining necessary robustness, exhibiting the flexibility to trade-off (by tuning two betas).\nIntuitively, by setting (β_1, β_2) → (0^+, 1^-), we are able to exploit the long-term historical information along the training (via β_2), while focusing on the importance of current losses (via β_1) and thus can get the best of both world. 
This simple and elegant design turns out to be surprisingly beneficial in\npractice, which is further verified in numerous experiments in Section 4.\n\n\nThis should really be better explained. Again, this paper is moving into the \"total optimization landscape\" where both data and model parameters are considered components of the system to be optimized. It's not immediately clear whether this is a consequence of the problem the authors were solving, or the key insight that led to the approach.\n\n(ii) ES to solve a DRO problem. From another perspective, ES can be also reformulated as a\nsolution to the minimax problem..." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dataefficient,\ntitle={Data-Efficient Training by Evolved Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2CflgSMLoK},\nnote={under review}\n}" }, "abstract": { "value": "Data selection is designed to accelerate learning with preserved performance. To achieve this, a fundamental thought is to identify informative data samples with significant contributions to the training. In this work, we propose **Evolved Sampling** (**ES**), a simple yet effective framework for *dynamic* sampling performed along the training process. This method conducts *batch* level data selection based on *differences* of historical and current losses, significantly reducing the back propagation time with modest additional overheads while maintaining the model performance. Due to its conciseness, ES is readily extensible to incorporate *set* level data selection for further training accelerations. As a plug-and-play framework, ES consistently achieves lossless training accelerations across various models (ResNet, ViT, ALBERT), datasets (CIFAR, ImageNet, GLUE), and optimizers (SGD, Adam), saving up to 40\\% wall-clock time. 
Particularly, the improvement is more significant under the *noisy supervision* setting. When there are severe corruptions in labels, ES can obtain accuracy improvements of approximately 20\\% relative to the standard batched sampling. Our results motivate further investigations on the data efficiency aspect of modern large-scale machine learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "learning efficiency", "evolved sampling", "data selection", "loss dynamics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f3ea8b6b5d183aae1b23b2a8fe03c1b0c19a72c4.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Data-Efficient Training by Evolved Sampling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Cg4YrsCMA
Data-Centric Human Preference Optimization with Rationales
main
Active
dpo;preference learning;alignment
alignment, fairness, safety, privacy, and societal considerations
3;5;5;8
4;3;3;5
2;2;2;3
2;2;2;4
2;3;2;3
5.25
3.75
2.25
2.5
2.5
0.54886
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It would be great if the total cost (rationale annotation cost vs. fine-tuning performance) breakeven could be revealed in dollar terms, and the operating guidance could be discussed for project owners with a limited annotation budget.\n\nOne way could be to provide a detailed cost-benefit analysis, including estimated costs for generating rationales (e.g., API costs if using a language model) versus the potential savings from reduced annotation needs. This would give project owners more concrete information to assess the method's practicality within their budget constraints." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well written. The notations are clear.\n\n2. It provides up-to-date literature on RLHF techniques. It underscores the potential of rationale-based data augmentation in preference learning, paving ways for more effective language model alignment and encouraging further exploration of unpaired preference learning scenarios.\n\n3. 
Among many lines of work addressing the economic utility of dataset design and construction in RLHF, mechanism design has recently been explored to enhance the overall economic utility of dataset construction.\nThe merits of introducing mechanism design are well supported by game theory studies, both theoretically and practically:\n\nZhang, G., & Duan, J. (2024). VickreyFeedback: Cost-efficient Data Construction for Reinforcement Learning from Human Feedback. https://arxiv.org/abs/2409.18417\n\nMatsushima, H., & Noda, S. (2023). Mechanism design with general ex-ante investments. Journal of Mathematical Economics, 106, 102831.\n\n4. The experiments on Orca and UltraFeedback are convincing, with a reasonable theoretical analysis using mutual information as a tool and an in-depth ablation discussion in Appendix B.2." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a data-centric approach to RLHF by enriching preference datasets with machine-generated rationales. These rationales offer explanations for choices between preferred and non-preferred responses, addressing ambiguity and enhancing the effectiveness of preference learning. The proposed framework integrates rationales into the training process, can reduce annotation costs by 3x, and leads to better fine-tuned model performance. Extensive experiments demonstrate that rationale-enriched learning outperforms traditional methods, with benefits across various preference optimization algorithms. \n\nThis work underscores the potential of rationale-based data augmentation in preference learning, paving the way for more effective language model alignment and encouraging further exploration of unpaired preference learning scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper underlines the impact of including rationales in the RLHF fine-tuning process. 
In other words, the proposed method generally leverages auxiliary data to enhance model performance. \n\nHowever, generating high-quality rationales alongside existing datasets might increase the annotation cost in dollar terms. Therefore, a breakeven analysis and operating guidance in dollar terms would have been more useful to project owners with a limited annotation budget." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Instead of plotting data points w.r.t. performance metrics, it would be worthwhile to plot the total number of text tokens used for training w.r.t. the performance metrics. For example, if the rationale itself is much longer than the original texts being compared, it can contain a lot more information, which might explain the improvement in performance. Additionally, it is also worthwhile to report the average training time for both procedures.\n\n2. For the vs DPO and vs SFT section, can you please provide the exact formula you used to compute the win rates? Are there any tie-breaking rules?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This study is well motivated and adds to the literature on incorporating richer feedback (the addition of rationales in this case) for more efficient RLHF alignment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates if incorporating rationales along with binary preference data can help improve preference alignment. To this end, the authors propose rationale-DPO, an extension to the popular alignment method DPO. They compare the two algorithms on different datasets (Orca, UltraFeedback) and for different models (Mistral-7B-v0.1, Zephyr-7B-Beta, etc.). The authors also propose a simplified information-theoretic analysis to better understand rationale-based preference modeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weakness:\n1. While the problem is well motivated, the methodology of maximizing the likelihood of generating the given preference and the rationale is intuitive and simple, and not significantly novel.\n2. Difficulty collecting data: procuring rationales can be more expensive compared to getting just binary feedback. In addition, in certain cases, such as when comparing artwork, it might not be possible for humans to explain their choice. While using LLMs to generate rationales is an efficient way of scaling the method, there is a risk of getting a misaligned response if that model is misaligned (for ex. not harmless), and it may also lead to an echo chamber, as no new perspective beyond what the rationale-generating LLM believes is true will be in the dataset. How do you envision addressing these challenges?\n3. 
In Figure 2, it seems that the DPO win rate lags behind RDPO by only ~5-8% for the same number of data points; however, RDPO requires a lot more text for a single data point." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposes a new approach to integrate model-generated rationales in preference tuning, avoiding the need for additional human labels.\n2. Experimental results show that optimizing rationale likelihood alongside preference loss boosts model performance, reducing annotation needs and training data volume." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new method for incorporating machine-generated rationales into preference fine-tuning, enhancing language models’ performance without extra human annotation. The authors demonstrate that maximizing rationale likelihood alongside preference loss improves model efficacy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method can be seen as a combination of a preference loss such as DPO and the rationale log-likelihood. 
The paper lacks further exploration of how the two components contribute to improved performance. A few questions are:\n - a. In the ablation study on $\\gamma$, it seems the scale of gamma (from 1.0 to 10.0) does not matter at all. Did the authors try smaller $\\gamma$ or extremely large $\\gamma$?\n - b. How does tuning solely on rationale likelihood without DPO loss affect performance? Will the performance increase? \n - c. Justification is needed for a variable $\\gamma$ given the theoretical suggestion of $\\gamma=1$.\n\n2. Experimentation lacks rigor and thoroughness:\n - a. Reporting win-rate against DPO alone does not fully capture the rationale’s benefit. It is hard to evaluate the absolute improvement brought by the rationale loss. It would be better to report win-rate against a fixed opponent such as GPT-4 on AlpacaEval 2.0. This can ensure that the baseline DPO model is properly trained to a satisfactory performance.\n - b. Another related question is that there is no evidence that the DPO model in this paper is fully optimized. One may question if the dataset is weak or if the hyperparameters are adequately explored. For example, Llama3-8b-instruct + Ultrafeedback annotated by PairRM (see SimPO’s GitHub page for their dataset and model performance) can achieve a 40% LC win-rate, and the LC win-rate reported in the appendix is below 25%. I understand that SimPO did not release their training configuration, but the point here is that one cannot effectively conclude that the rationale loss significantly improves the performance.\n - c. The length bias is a key issue in preference fine-tuning. In the main text, it is reported that RDPO can produce much shorter responses and maintain a higher win-rate against DPO. This is quite surprising and deserves more analysis or explanation from the authors. On the other hand, in section B.4, the length on the AlpacaEval 2.0 dataset remains close to DPO or the original model." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness points above.\n\nSome additional questions I had when reading the paper, that I believe should be clarified:\n- How are draws measured in Fig 2?\n- I don’t understand this sentence: “the drawing rate for the RDPO model is stable and low across different training data sizes, which shows that RDPO winning rate is higher not due to flipping the draw points but the losing points.” Can authors clarify?\n- Fig2 caption typo: Winrare" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The area of generative reward modeling is important and gaining traction.\n- Promising experimental results across two datasets, performing comparable or better than DPO." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a new direct preference optimization method that leverages preference rationales (natural language explaining the reasons why one response is better than another). The proposed method adds a supervised loss term to the DPO objective, jointly training/anchoring the model to generate a valid preference rationale. 
Each preference rationale is generated with an LLM-as-Judge, augmenting a conventional binary preference dataset.\n\nThe method can be seen as a form of hybrid distillation from both preference data (DPO) and LLM-as-Judge rationales." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited novelty and poor positioning with respect to the growing literature on synthetic preference generation and generative reward modeling (see missing references below, to be discussed in the paper). In addition, the authors focus entirely on direct preference optimization as an alignment method, but reward modeling + reinforcement learning remain a major paradigm for LM alignment. How does this work translate to this setting and compare to the following baselines?\n\nReferences\nRLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment. Yang et al., 2024.\nWest-of-N: Synthetic Preference Generation for Improved Reward Modeling. Pace et al., 2024.\nJudging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Zheng et al., 2024.\nSelf-Taught Evaluators. Wang et al., 2024.\n\n2. I found the theoretical analysis and motivation for the method unclear.\n - Equation 2 (L230) -> why is the joint probability decomposed in this way? Why doesn’t the preference label also depend on the rationale? Surely there isn’t a single ground-truth preference considering what is discussed in the intro (multiple valid preference labels based on different raters’ values)? In fact, does Section 5 not use the opposite formulation (“the preference inferred from the rationale”)?\n - Information-theoretic results may be interesting but are completely relegated to the appendix, so they cannot be counted as a contribution of the paper. 
The authors state “Our analysis demonstrates a closed-form relationship between rationale informativeness and its alignment with true preferences”, without including any explanation for this claim. What does this mean and what is the form of this relationship?\n\n3. Finally, the experimental setup is too weak to demonstrate the added value of the proposed method.\n - Is the performance improvement statistically significant? Fig 2 suggests that DPO > RDPO with 1K Ultrafeedback data, but we obtain the opposite result in Fig 3. If the result is due to statistical uncertainty, this should be measured and shown on plots (RDPO outperforms DPO by a similar margin in Fig 2, which could therefore not be statistically significant).\n - Preference dataset sizes are typically >>11K (see top-performing RMs on RewardBench, for example). Why did the authors focus their analysis on such a small, non-representative dataset size? Also, why is there no improvement in performance with DPO beyond 1K preferences?\n - Related: L353, why pick the DPO model trained with 12K Ultrafeedback preferences as baseline, if its SFT performance is lower than that of models trained on less data?\n - Why not evaluate model performance on established benchmarks such as RewardBench/AlpacaEval?\n - How does RDPO with poor-quality rationales (e.g. permuted / opposite) perform against standard DPO? I imagine much worse, since we are training on biased information. How can practitioners ensure that their rationales’ quality is sufficiently high to afford gains and not harm performance?\n - Why is RDPO performing similarly to DPO when trained on Llama-3-8B in Figure 5?"
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "Human Preference Optimization with Rationales" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024datacentric,\ntitle={Data-Centric Human Preference Optimization with Rationales},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Cg4YrsCMA},\nnote={under review}\n}" }, "abstract": { "value": "Reinforcement learning from human feedback plays a crucial role in aligning\nlanguage models towards human preferences, traditionally represented through\ncomparisons between pairs or sets of responses within a given context. While\nmany studies have enhanced algorithmic techniques to optimize learning from such\ndata, this work shifts focus to improving preference learning through a data-centric\napproach. Specifically, we propose enriching existing preference datasets with\nmachine-generated rationales that explain the reasons behind choices. We develop\na simple and principled framework to augment current preference learning methods\nwith rationale information. Our comprehensive analysis highlights how rationales\nenhance learning efficiency. Extensive experiments reveal that rationale-enriched\npreference learning offers multiple advantages: it improves annotation efficiency,\naccelerates convergence to higher-performing models, and reduces verbosity bias\nand hallucination. Furthermore, this framework is versatile enough to integrate\nwith various preference optimization algorithms. Overall, our findings highlight\nthe potential of re-imagining data design for preference learning, demonstrating\nthat even freely available machine-generated rationales can significantly boost\nperformance across multiple dimensions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "dpo", "preference learning", "alignment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/97321c40c8af9c1ef713912359bcc840e8b81d47.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Data-Centric Human Preference Optimization with Rationales" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Chkk5Ye2s
Be More Diverse than the Most Diverse: Online Selection of Diverse Mixtures of Generative Models
main
Active
multi-armed bandits;evaluation of generative models;kernel-based evaluation scores
reinforcement learning
5;6;6;6
3;4;3;3
3;3;3;2
3;3;2;3
3;3;4;3
5.75
3.25
2.75
2.75
3.25
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In practice, the FID metric is widely used in the evaluation of generative models. Can this paper cover this metric, and why or why not?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Overall, this paper is well-written and easy to follow.\n2. The proposed method (Mixture-UCB) is somewhat novel, although it is inspired by classical UCB in the multi-armed bandit setting.\n3. Theoretical results about the regret bound are provided for the proposed Mixture-UCB-CAB. The proof seems correct, although I have not checked it line by line.\n4. Empirical results illustrate the effectiveness of the proposed method in finding the optimal mixture of text-based and image-based generative models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to solve the online selection task over a group of well-trained generative models. It explores the selection of a mixture of multiple generative models and formulates a quadratic optimization problem to optimize kernel-based evaluation scores, including the kernel inception distance (KID) and Renyi kernel entropy (RKE). Specifically, it proposes an online learning approach called Mixture Upper Confidence Bound (Mixture-UCB). 
Theoretically, regret analysis is provided for one method (Mixture-UCB-CAB). Experimental results illustrate the effectiveness of the proposed method for text-based and image-based generative models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. I am afraid that the online selection of well-trained generative models might have few applications: since inference with (large) generative models is already costly, why do we need online selection rather than batch selection? Discussions about practical applications can be added.\n2. Experimental results show that Mixture-UCB-OGD might be better than Mixture-UCB-CAB. However, theoretical guarantees for Mixture-UCB-OGD are missing. I understand this might be more challenging, but more detailed discussion could be added to clarify why." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Below are a few questions and/or comments.\n\n1. The problem appears novel, so I believe it makes sense to better motivate it. For example, in which context are we interested in picking a model to generate a sample at each round, and why is it of interest to use \"the fewest possible sample queries\"? How does the proposed method perform in an offline setting, with respect to performance and/or scalability?\n2. 
When summarizing the contribution of this paper, could the authors also provide (forward) pointers to the precise results? For example, \"proposing an online learning framework in Section ??\". I personally believe that this may help interested readers quickly grasp the main contribution of the paper.\n3. Is the working assumption of a linearly mixed model somewhat restrictive? Is there something else in the literature, or is such a linear combination proposed for the first time by the authors in this paper? In fact, on the top row of Figure 3, there is a linearly mixed \"dog\" that appears a bit bizarre: is this due to some limitation of this linear mixture? \n4. I personally find Theorem 1 a bit surprising: To me, it combines a kernel matrix \"estimation\" problem with an online selection problem, and solving the former in general requires a lot of samples to obtain tight spectral norm control on the estimated kernel matrix. I believe that the authors avoid this issue by assuming/focusing on the case of bounded kernel/loss functions. Could the authors comment more on this? For example, does this bounded kernel/loss function setting limit the practical interest of the proposed methods? Also, could the authors comment on the observed sample size $n_i$ required for the proposed OGD method to make sense? We do not see this in Theorem 2, and I believe this has an impact on the computational complexity.\n5. A tiny side remark: Figure 3 appears in the main text but is commented on in the appendix."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper considers online selection of generative mixture models, which, to the best of my knowledge, is a novel problem of interest.\n* By making interesting connections to kernel-based scores and multi-armed bandits, the authors propose efficient methods to solve the above problem, with some theoretical guarantees. \n* Experiments on realistic data are provided, showing the practical applicability of the proposed approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors focus on the online selection of generative models, and in particular, the optimal linear mixture among a set of such models.\nThe problem appears novel, and the authors make interesting connections to the maximization of some kernel-based scores and multi-armed bandits. \nBased on this, the authors propose Algorithms 1 and 2 to solve this online mixture selection efficiently, with performance guarantees given in Theorems 1 and 2, respectively (although I have some concerns about the settings and theoretical results; see below).\nThese methods can be used for the widely used kernel inception distance (KID) and Renyi kernel entropy (RKE), and are tested on realistic image and text data in Section 6." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* It would be great to discuss the limitations of the proposed approach; see below for my detailed comments/questions.\n* Some settings and theoretical results need clarification; see below for my detailed comments/questions." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* Can the authors find some other works that also aim to find good mixtures of generative models, and compare their method to these works?\n\n* Can the authors provide the quality scores Density (Naeem et al., 2020). and Precision (Kynkaanniemi et al., 2019) in the experiments that they conducted?\n\n* Small question regarding Lines 257&259: is $\\hat{L}(\\mathbf{a};\\mathbf{x}^{(t)})-(\\mathbf{\\epsilon}^{(t)})^{\\rm T}\\mathbf{a})$ a lower or upper bound of $L(\\mathbf{a})$?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper is well written and easy to follow.\n\n* The theoretical framework underlying the proposed algorithms is well grounded.\n\n* Extensive experiments were carried out to demonstrate the performance of the proposed algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The main goal of this work is to maximize the diversity of generated samples by selecting not a single but a mixture of generative models. 
First formulating a population loss of quadratic form that can translate into evaluation scores including kernel inception distance (KID) and Renyi kernel entropy (RKE), this article proposes two online algorithms based on continuum-armed bandits and gradient descent to find the optimal mixture by minimizing an upper confidence bound of the quadratic population loss. Experiments show that the proposed algorithms are efficient at approaching the optimal mixture." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* According to the literature review of this article, there seems to be little interest in finding a good mixture of different generative models. Indeed, if the goal is to approach the target distribution, it makes more sense to select the single best generative model than to use a mixture of different generative models, which are usually trained in an independent manner and are therefore unlikely to complement each other.\n\n* It is true that when the objective is to find the single best generative model, the online approach can help prevent sampling from suboptimal models. However, as using a mixture of generative models requires sampling from all member models, the online approach seems to be less useful in this setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It's interesting to see the authors formulated the generative model selection problem as an online selection problem. The authors also developed two algorithms for this new setting and provide theoretical guarantees for one of them. Experimental results demonstrate the efficacy of the proposed algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper study online selection for generative models, in order to generate diverse samples. The authors formulated the problem as a mixture multi armed bandit problem and developed two algorithms for that: Mixture-UCB-CAB and Mixture-UCB-OGD. The authors developed theoretical guarantees for the Mixture-UCB-CAB algorithm. The authors conduct many experiments to show the efficacy of their developed methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Since this is a new problem, can authors provide more motivations for online selection of generative models, e.g., how important is the ability to generate diverse samples? And how important is to save samples in the selection process.\n2. The authors provide a convergence guarantee for Mixture-UCB-CAB in Thm 2. For comparison, what is the rate of convergence for the offline approach that randomly generate $T$ samples and then optimize over $\\alpha$?\n3. Does Thm 1 holds for all $\\alpha$? Also, the guarantee in Thm 2 doesn't suffer the curse of dimensionality even if the algorithm is selection $\\alpha \\in R^m$; can authors explain why does that happen?\n4. 
Compared to a standard bandit problem, where one gets an intermediate regret term at each round, it seems that the studied problem gets $O(t)$ (averaged) terms (the first Eq in Section 5), and all these terms are related to the previous selections $x_1, \\cdots, x_{t-1}$. Can authors elaborate on how they deal with these terms in the analysis? What are some technical contributions?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024be,\ntitle={Be More Diverse than the Most Diverse: Online Selection of Diverse Mixtures of Generative Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Chkk5Ye2s},\nnote={under review}\n}" }, "abstract": { "value": "The availability of multiple training algorithms and architectures for generative models requires a selection mechanism to form a single model over a group of well-trained generation models. The selection task is commonly addressed by identifying the model that maximizes an evaluation score based on the diversity and quality of the generated data. However, such a best-model identification approach overlooks the possibility that a mixture of available models can outperform each individual model. In this work, we explore the selection of a mixture of multiple generative models and formulate a quadratic optimization problem to find an optimal mixture model achieving the maximum of kernel-based evaluation scores including kernel inception distance (KID) and Renyi kernel entropy (RKE). To identify the optimal mixture of the models using the fewest possible sample queries, we propose an online learning approach called *Mixture Upper Confidence Bound (Mixture-UCB)*. 
Specifically, our proposed online learning method can be extended to every convex quadratic function of the mixture weights, for which we prove a concentration bound to enable the application of the UCB approach. We prove a regret bound for the proposed Mixture-UCB algorithm and perform several numerical experiments to show the success of the proposed Mixture-UCB method in finding the optimal mixture of text-based and image-based generative models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multi-armed bandits", "evaluation of generative models", "kernel-based evaluation scores" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2e5ac98ea7ebfabb4f0992b5c56b509717d1fe14.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/38e38b8b295e3069d30ca9af1d73c0dfad47a9d1.zip" }, "title": { "value": "Be More Diverse than the Most Diverse: Online Selection of Diverse Mixtures of Generative Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2D0uXQbntW
InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding
main
Active
video understanding;benchmark;long video benchmark;long video understanding
datasets and benchmarks
3;3;5;6;8
4;5;4;4;5
2;3;2;3;4
1;2;2;2;4
2;3;3;2;4
5
4.4
2.8
2.2
2.8
0.215166
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Add references and discussions of related work.\n2. It would be better to evaluate more long-video models (e.g., Qwen2VL) and different input frame rates (1, 8, 32, 128, and more).\n3. Since most question-answer pairs are generated by GPT-4o, could this lead to inflated evaluation results for GPT-4o? Analysis is needed regarding dataset quality, hallucination rates, and potential information leakage issues." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The questions are comprehensive and well-structured, covering multiple dimensions and employing diverse construction strategies for different types of questions.\n2. The evaluation methods are reasonable, adopting different assessment metrics for multiple-choice and open-ended questions." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces InfiniBench, a video understanding benchmark dataset featuring the longest video duration (average 52.59 minutes per video) and the largest number of question-answer pairs (108.2K) to evaluate 9 different video understanding tasks.\n\nThe authors conducted comprehensive evaluations of existing large multimodal models (including commercial models like GPT-4V, Gemini 1.5 Flash, and open-source models). Experiments show that even leading AI models still face challenges in long video understanding, with the best models GPT-4V and Gemini 1.5 Flash achieving average accuracy rates of only 49.16% and 42.72% respectively." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks discussion of related work. For example, benchmarks proposed in Video-MME, LVBench, and Long VideoBench published in June 2024 are very similar to InfiniBench.\n\n2. Most of the question-answer pairs are generated by GPT-4o. Although multiple information sources were used as input, it's difficult to guarantee the quality of the dataset.\n\n3. Part of the data comes from IMDB content, which likely appeared multiple times in the training corpus of LLMs used by video models, potentially leading to dataset leakage issues." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How is GPT-4V's scoring aligned with human evaluation?\n2. Why weren't the latest models tested, and why wasn't there comparison and discussion of the latest benchmarks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. InfiniBench provides a comprehensive evaluation of large multimodal models' capabilities in long video understanding through including the longest video duration and a large number of question-answer pairs, as well as designing diverse question types (multiple-choice and open-ended questions) covering nine different skills, thus thoroughly examining models' performance across multiple dimensions of long video understanding.\n\n2. By evaluating various models including both commercial and open-source models, InfiniBench reveals the challenges and limitations of existing models in long video understanding, especially in tasks requiring deep contextual understanding and critical thinking. This in-depth assessment helps identify model deficiencies and provides clear directions for future research and model improvements.\n\n3. InfiniBench's design not only tests models' technical capabilities but also drives models toward more human-like understanding and reasoning abilities. Through proposing human-centric questions, such as movie spoiler questions, it promotes model performance improvement in long video understanding tasks, which is significant for achieving more advanced AI applications and advancing the field of artificial intelligence." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces InfiniBench, an innovative and comprehensive benchmark focused on evaluating large multimodal models' performance in understanding very long videos. InfiniBench is notable for its ultra-long video duration (averaging 52.59 minutes per video) and massive question-answer pairs (108.2K), covering nine different skills including multiple-choice and open-ended questions. These questions are designed to be both diverse and human-centric, with videos primarily sourced from movies and TV shows. Experimental results show that even leading AI models like GPT-4V and Gemini 1.5 Flash face significant challenges in long video understanding, achieving average accuracies of only 49.16% and 42.72%, with mean scores of 3.22 and 2.71 (out of 5) respectively. This indicates that while these models perform relatively well on local skills, they still have limitations in skills requiring global reasoning and deep contextual understanding, such as scene transitions and movie spoiler questions. Open-source models generally perform below random chance on multiple-choice questions, highlighting long-sequence global reasoning as a major challenge for existing models. Additionally, models relying on both video and text information perform poorly without caption input, emphasizing the importance of processing both visual and textual information for long video understanding. The introduction of InfiniBench aims to fill the gap in long video understanding benchmarks, drive the development of open-source large language models, and motivate multimodal large models toward more human-like long video understanding and reasoning capabilities, despite current limitations such as video source restrictions and script dependency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The benchmark only uses movies and TV shows for testing, which is too limited. It should include more types of videos that show different parts of real life, like nature documentaries or home videos. The problem is that movies and TV shows follow certain storytelling patterns, so AI models might just learn these patterns instead of truly understanding the videos. They should add more casual videos like vlogs and livestreams to make the testing more realistic.\n\n2. The benchmark needs written scripts to create its questions and answers. This is a big problem because most real-world videos don't come with scripts. Without scripts or captions, the benchmark can't test how well AI models understand regular videos that people actually watch and share online.\n\n3. InfiniBench's testing does not cover current mainstream open-source models such as Qwen2VL, LLaVA-Onevision, and InternVL2. This makes it difficult to obtain a more comprehensive and in-depth comparison between open-source and closed-source models.\n\n4. In Table 1, the benchmark comparison is insufficient, especially regarding some recent video benchmarks such as Video-MME and LongVideoBench. Additionally, the authors' definition of \"very long\" is problematic - MLVU and MovieChat have only a 3-minute gap, yet MLVU is defined as very long. This is not reasonable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I have a concern about potential copyright infringement in this work. The proposed dataset is based on copyrighted content (video frames and subtitles of movies and TV shows) that authors have downloaded and used for experiments. The paper also includes figures of frames from TV shows. 
It is unclear whether the authors obtained permission from copyright owners for their use of the data. Authors do not mention whether they intend to release the dataset publicly, but if they do, this would raise further concerns." }, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Given the concerns listed above, I have doubts that this paper is suitable for publication at ICLR. I hope that authors can provide evidence to address my concerns as well as answers to the following questions.\n\n* Could authors provide evidence of transcript quality? How accurate and complete are they? How much focus do they have on vision? Could authors provide examples? \n* Why are multiple-choice questions evaluated by asking the model to generate an answer and then using GPT to match this answer to the options? Authors state in the appendix that the reason is that models often do not follow the prescribed answer format, but from my experience at least the larger VLMs are good at following instructions about the answer format. \n* I am worried that using GPT for option matching introduces additional bias. I believe this could be measured by evaluating GPT or Gemini again by giving it the answer options in the prompt and asking it to respond with only the answer letter. Results could then be compared against the GPT-matched results. \n * Also to the above point, did authors verify that event ordering type questions get matched correctly with GPT? These answers only differ in their ordering of options, so I am wondering whether GPT matches them correctly. \n* The benchmark was constructed using GPT, and GPT is the best performing model across all tasks. 
It would be interesting to quantify if there is bias towards GPT, e.g. by generating part of the data with Gemini and checking if relative model performance is consistent with the original benchmark. \n* How are copyright concerns handled? Did authors obtain permission from the copyright owners to use the video material for this purpose and to reproduce this content in a publication? If the dataset will be publicly released, how are copyright concerns handled? \n * l. 198: “To address this limitation, we transformed the TVQA dataset from a collection of short clips into a long video dataset by gathering and sequencing the clips corresponding to each episode thereby reconstructing the full episode frames.“ How was this done and what data source was used? \n* Appendix l. 12: “The remaining two skills, i.e., local visual questions and summarizing, do not need human verification, as the first one is adopted from the TVQA dataset, and the latter is scrapped from human responses on the web.” I do not fully agree with this statement since existing benchmarks and the humans writing the summaries that were pulled from the web could still contain errors. Do authors have evidence of the quality of TVQA annotations and summaries obtained from the web? \n* How does the number of video frames provided affect the model accuracy? \n* Appendix B is quite important to understand the evaluation results presented, so I think it would be better suited to be in the main text. \n* Appendix B mentions that the benchmark videos have no audio, so video and subtitles are provided to the model separately. Does this mean that alignment between frames and subtitles is missing? Did authors measure the effect of this? \n* Could authors explain how spoiler questions are generated and provide the prompt used? \n* How does the “I don’t know” option affect results? How accurately does GPT match model answers to this option? \n* Fig. 5 (left) is redundant with Tab. 3, so one of them should be removed. 
\n* l. 363: The explanation of local vision and text questions is not clear. It is not explained what these questions are nor how they were generated. \n* It would be good to have random accuracy in Tab. 5 for direct comparability. Then, Tab. 4 could be omitted. \n* l. 482: “As shown in the table 5, MiniGPT4-video and LLaVA-NeXT-Interleave match lower than the random performance” What random performance is being compared to here? It would help to add this to the table as suggested above. \n* l. 482, l. 505: How can a model’s performance be lower than random? \n* l. 488: “One reason may be that eliminating the noisy information and focus on only the related information helps more in answering the questions“ How does the Goldfish model eliminate noisy information? \n* For the human verification, how were human responses on open-ended questions evaluated?\n\nMinor points\n\n* Tab. 1: I would not agree with the “human” checkmark for InfiniBench since questions were generated fully automatically. \n* Tab. 2 is never referenced. \n* Appendix B: It would be helpful to express this in tabular form so readers can see at a glance how many frames and what modalities were used in each model. \n* Tab. 5.: I would suggest to organize this into one big table with one column per task type. Also would be nice to visualize as a radar chart. \n* It would be helpful to annotate question types in Sec 3.2.2 and Fig. 1 with whether they are MCQ or OE. \n* It would be helpful to see a listing of modalities (vision, summary, transcript) used to generate each question. \n* Please use \\\\citep for citations to place citations in parentheses. \n* In tables, please right-justify numerical columns and use a consistent number of digits after the decimal point. \n* Fig. 4: The font size in these charts is very small in print. I suggest increasing it. Also I would suggest to change the pie chart into a bar chart for easier readability. \n* Fig. 5: Same concern as above about the font size. \n* l. 
373: Here, the reference to Fig. 4 is repeated, but Fig. 5 is wrongly referenced. Suggest correcting this sentence to refer to Fig. 3\\. \n* l. 406: Broken reference. \n* l. 413: The reference should point to Sec. B in the supplementary material." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The presented benchmark has an impressive scale with 108.2k questions on 1,219 videos that average 52.59 minutes in length. \n* There are 9 different question types that test long video understanding models across a variety of skills. \n* The paper presents results of 8 long video models and draws interesting conclusions on their performance. \n* There is a large gap between human performance and model performance, suggesting the benchmark has ample room for improvement. \n* The paper has a good in-depth discussion of related work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes InfiniBench, a novel benchmark for long video understanding based on movies and TV shows. The benchmark has 108.2k question-answer pairs on 1,219 videos that average 52.59 minutes in length. The benchmark tests 9 different reasoning abilities including visual, long-context and local reasoning. This makes InfiniBench the largest-scale long video understanding benchmark to date. InfiniBench was constructed by combining and augmenting from two existing video benchmarks, TVQA and MovieNet. Most question types were generated by prompting GPT-4 with the transcript of the video while a custom pipeline was used to generate questions on changes in character appearance. The paper presents benchmark results of 8 long video understanding models, including 6 open source ones and 2 commercial ones, and discusses insights into their performance across various tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The question-answer pairs in the benchmark were generated fully automatically without any human intervention. This raises questions about soundness and of the questions and potential bias. A human evaluation is performed on a subset of the data, but good human performance is no proof that questions are well-formed and free of hallucinations. \n* Most questions are generated from transcripts that authors obtained online, but it is unclear what information these transcripts contain, whether they are complete and error-free. It is also unclear how much visual information the transcripts contain and therefore it is unclear to what degree this is a multimodal benchmark. \n* The use of movies and TV shows raises questions about generalizability. Most MLLMs likely know the plots of popular movies and shows because their summaries or transcripts were part of their training data. So, they may be able to answer the questions in the dataset without any context, which is not the case for most videos from the web. The effect of this is not examined. \n* It is unclear how much the benchmark relies on multimodal reasoning. Questions about movies and TV shows could often be answerable from subtitles alone, which are provided as context in the evaluation. It would be interesting to see an ablation that uses (1) No context, only the question itself (2) Only the question and subtitles (3) the question, subtitles and video frames. \n* The copyright implications of using movies and TV shows and possibly releasing the dataset are not discussed and raise ethical concerns. \n* Since the dataset has \\~100 questions per video, it is likely that there are (near) duplicate questions. However there is no analysis of this and no mention of a filtering stage to remove duplicates. 
\n* There are several issues with the presentation such as redundant figures, tables that are not referenced, and wrong references. The limitations section also exceeds the 10-page limit." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1 Important Topic. Long-form video understanding is a challenging but important problem. Hence, how to develop benchmark to evaluate this problem is critical.\n\n2 Experiments. The experimental results are sufficient to support the claim of benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors propose an InfiniBench for very long video understanding. To contain local/global events and understand visual/contextual content, they define a long video understanding covering nine skills through four critical aspects in movies and tv shows." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 Similar work has been proposed in the literature. For example, [MoVQA: A Benchmark of Versatile Question-Answering for Long-Form Movie Understanding, arXiv:2312.04817]. 
Please clarify the difference. \n \n2 The writing and paper organization are not good. Please refine them for easy reading." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Can the authors comment on the limited variety of TV shows? What about sports events like NBA, NFL, Tennis, etc.? \n\nOn GPT-4o evaluation, only 250 frames are selected. Are the 250 frames selected uniformly? Have you tried to reduce the frame size and squeeze more frames into GPT-4o?\n\nWill all the videos be released to the public? Are there any legal issues?\n\nAre there text scripts (screenplays) associated with all the videos (movies and TV shows)?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The videos are very long, with an average length of 52 minutes.\n\nThe number of (question, answer) pairs is large (108k).\n\nSome of the questions are unique, such as spoiler questions, global appearance, and scene transitions. \n\nCompared to the existing benchmarks, this benchmark contains much longer videos and contains some new interesting types of questions. It'll be very useful to the researchers who work on long video understanding."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a benchmark, called InfiniBench, for the evaluation of long video understanding. The dataset consists of 1219 videos. The average length of the videos is 52.59 minutes. There are 108.2K (video, question) pairs. The questions are divided into 9 categories. Some categories require the ability to make associations across a longtime span. Some categories require in-depth understanding and reasoning capabilities. It is a very interesting new benchmark for long video understanding." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The variety of TV show sources is limited since there are only 6 different TV shows." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024infinibench,\ntitle={InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2D0uXQbntW},\nnote={under review}\n}" }, "abstract": { "value": "Understanding long videos, ranging from tens of minutes to several hours, presents unique challenges in video comprehension. Despite the increasing importance of long-form video content, existing benchmarks primarily focus on shorter clips. 
To address this gap, we introduce InfiniBench, a comprehensive benchmark for very long video understanding, which presents: 1) the longest video duration, averaging 52.59 minutes per video; 2) the largest number of question-answer pairs, 108.2K; 3) diversity in questions that examine nine different skills and include both multiple-choice questions and open-ended questions; 4) human-centric design, as the video sources come from movies and daily TV shows, with specific human-level question designs such as Movie Spoiler Questions that require critical thinking and comprehensive understanding. Using InfiniBench, we comprehensively evaluate existing Large Multi-Modality Models (LMMs) on each skill, including commercial models such as GPT-4o and Gemini 1.5 Flash as well as open-source models. The evaluation shows significant challenges in our benchmark. Our findings reveal that even leading AI models like GPT-4o and Gemini 1.5 Flash face challenges in achieving high performance in long video understanding, with average accuracies of just 49.16% and 42.72%, and average scores of 3.22 and 2.71 out of 5, respectively. We hope this benchmark will stimulate the LMMs community towards long video and human-level understanding. Our benchmark can be accessed at (https://infinibench.github.io/Infinibench-website/) and will be made publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "video understanding", "benchmark", "long video benchmark", "long video understanding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d969256772e8bb967c084d933943fb219d1359f7.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e4286990af860ccb3f16a8fce15b1da8635ef0b5.zip" }, "title": { "value": "InfiniBench: A Comprehensive Benchmark for Large Multimodal Models in Very Long Video Understanding" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2DD4AXOAZ8
Inference-Friendly Models With MixAttention
main
Active
language models;inference;transformers;architecture
foundation or frontier models, including LLMs
1;1;3;3
5;4;4;5
3;2;2;2
1;1;2;2
2;1;2;3
2
4.5
2.25
1.5
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Refer to the weaknesses." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea is simple and clear, and the experimental setup is also quite clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce MixAttention, an architecture that employs sliding window attention to store only recent tokens while sharing KV caches across layers. They train and evaluate four different variants and report the corresponding results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper lacks innovation; both the recent window and multi-layer attention are established techniques. The paper simply combines these two methods without any improvements.\n\n2. The experimental results are presented solely as bar charts. I believe it would be beneficial to include a table with some precise values.\n\n3. This paper resembles a technical report more than an innovative and well-developed research paper, and thus does not meet the high standards of ICLR."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weakness part" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The combination of sparsifying the tokens of the sequence and sharing the KV cache across layers seems to be a promising method to reduce the inference cost. This paper conducts some interesting experiments, from pre-training to evaluation, to give us some insights regarding the impact of different choices of the setups of such a combination.\n2. The experiment setup is reasonably designed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to optimize the inference efficiency of LLMs by reducing the amount of KV cache. The core intuition of this paper is to combine two existing approaches, i.e., sliding window attention and layer-wise sharing of KV cache, to further reduce the memory cost of inference. Although this kind of combination has already been proposed by some blogs and papers, this paper aims to explore the effectiveness of this kind of method from an empirical perspective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty is limited in two ways.
Firstly, it is a straightforward combination of two existing techniques without many adjustments. Secondly, this combination has already been explicitly described in the blog of character.ai, as cited by the authors.\n2. I can see that the value of this paper is to provide some empirical guidelines for this combination method, but still, the new information brought by this paper is limited. For example, “…having the standard KV cache computed in the deeper layers is more important for long context abilities than the standard KV cache of the first few layers.” has been declared by some existing studies. In general, the experimental conclusions of this paper are some high-level phenomena, rather than a practical methodology.\n3. The experiments are all based on a 5B MoE model, which makes the generalisability of the conclusions less convincing. \n4. There are quite a few new hyper-parameters getting involved, e.g., for an N-layer model, how to decide which layers are standard attention and which layers are sliding window? How many layers for a KV-sharing group? These decisions are pre-defined in this paper, but what’s really interesting is how to make these decisions wisely given a new model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- It would be interesting to see trends between performance and degree of cache-sharing for both standard attention and sliding window attention, as this would give us a better understanding of the rate at which the performance worsens.\n- More explanation for why certain choices were made for the experiments such as the eval benchmark of choice, selection of cache-sharing variants.\n- More discussion and analysis of the results that leads to deeper insights.\n- More discussion about the differences between this and the other cache-sharing paper [1].\n\n[1] William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, and Jonathan Ragan Kelly. Reducing transformer key-value cache size with cross-layer attention. arXiv preprint arXiv:2405.12981, 2024." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Cache sharing across layers has not been extensively studied and ablated over, and so this paper provides additional sample points that show the relationship between cache sharing approach and performance. \n- The authors tested their results on RULER which is a long-context benchmark and more conventional evals such as MMLU and HellaSwag through the Gauntlet evals framework which unveils differences in performance between different KV-cache sharing approaches.\n- Some of these KV-cache sharing variants perform as well as standard attention while being significantly cheaper in compute and memory." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper ablates over a particular modification to the transformer architecture where kv-caches are shared across layers and a portion of layers use sliding window attention, for the purpose of reducing compute and memory while retaining performance. \nTheir main findings show that sharing the KV-cache from the first layer, throughout the entire network hurts performance on RULER (at 32k ctx), and so the KV-cache for a non-sliding window attention layer should be computed at least once in deeper layers, while also controlling for the level of kv-cache sharing on the sliding window attention layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of insight or discussion as to why certain cache-sharing approaches perform better or worse.\n- The paper lacks novelty, as it mostly relies on architectural configurations proposed by a blog by CharacterAI [1], and as a consequence, it lacks explanation as to why these configurations were selected in the first place.\n- In general, the main critique is that the paper presents only surface level analysis of the observations and does not contribute much to a deeper understanding of why certain cache-sharing approaches perform better than others.\n\n[1] Character.AI. Optimizing AI Inference at Character.AI — research.character.ai. https://research.character.ai/optimizing-inference/, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. There is no Pareto improvement shown. How does the proposed approach compare to a smaller standard MoE model with similar KV-cache size? It would be ideal to see a Pareto-improvement curve with KV-cache memory on the X-axis and model accuracy on Y-axis." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is easy to follow and unlike most approaches that use custom device-level code to make inference efficient, the approach doesn't require any custom kernels. This makes the approach easier to adapt to slight changes in the model architecture or running inference on hardware from other vendors." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed an approach called MixAttention which is interleaving standard attention with sliding window attention. Their MixAttention approach also shares KV-cache across the layers. All these optimizations lead to reduce memory usage for the model during inference without significantly deteriorating the model accuracy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is no novelty in the approach. The paper just evaluates the approach proposed in the [blog](https://research.character.ai/optimizing-inference/) by character.AI with slight modifications. Also, there is nothing new written in the paper different from the blog.\n2. The authors have not put in enough effort for the paper. There is no optimization done in SGLang to optimize the inference for sliding window attention baseline.\n3. 
The paper is poorly written and there are some typos. For instance, line 199 uses the word 'sequence' twice in succession.\n4. The paper also says to refer to the appendix for a few experiments; however, there is no appendix in the paper.\n5. I don't believe that any amount of experiments can bring the paper to an acceptable standard, since there is no novelty." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "KV cache size is a major factor in determining the inference throughput and memory footprint of LLMs. We show that KV cache sharing between layers and adding sliding window layers can speed up inference while maintaining model quality." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024inferencefriendly,\ntitle={Inference-Friendly Models With MixAttention},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2DD4AXOAZ8},\nnote={under review}\n}" }, "abstract": { "value": "The size of the key-value (KV) cache plays a critical role in determining both the maximum context length and the number of concurrent requests supported during inference in modern language models. The KV cache size grows proportionally with the number of attention heads and the tokens processed, leading to increased memory consumption and slower inference for long inputs. In this work, we explore the use of MixAttention, a model architecture modification closely related to a blog published by Character.AI. MixAttention combines sliding window attention, where only a small subset of recent tokens is stored in the KV cache, with KV cache sharing across layers. Our experiments demonstrate that MixAttention significantly reduces memory usage and improves inference speed without sacrificing model performance in both short and long-context tasks.
We also explore various configurations of this architecture, identifying those that maintain quality across evaluation metrics while optimizing resource efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "language models", "inference", "transformers", "architecture" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/93ebbaa6ce0c07754ccdf1a5f99b08c05f486650.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/95f1e50ebff704730aa78f914e5a8653830a95db.pdf" }, "title": { "value": "Inference-Friendly Models With MixAttention" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2E2q9t1MFp
Impact of Data Distribution on Fairness Guarantees in Equitable Deep Learning
main
Active
Fairness in Machine Learning;Equitable Deep Learning;Fairness Error Bound
alignment, fairness, safety, privacy, and societal considerations
3;5;6
3;2;4
2;3;3
2;3;2
2;4;3
4.666667
3
2.666667
2.333333
3
0.327327
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. I actually don't see that the main results (like Thm 7) have the sample size as a factor in the bound? Is this correct? Shouldn't the bound improve as a factor of n?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Overall, the evaluation seems reasonable within the specific domain. The authors present 4 real-world datasets for different detection tasks.\n\n2. The analytical results seem quite strong. Specifically, Thm 7 and Cor 2 could be quite useful results for analyzing fairness under Gaussian assumptions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a theoretical framework for analyzing fairness in medical domains across diverse demographic groups. The authors present several strong analytical results, under some statistical assumptions. The authors evaluate on 4 datasets with two deep learning models, with fairness over racial groups." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Overall, the evaluation in this work is quite weak. The primary result, Fig 1, is still a mystery to me. I don't have intuition for what the feature distribution ought to look like.
So it seems the authors mostly present AUC over 4 detection tasks. \n\nUnless I missed it, this work doesn't actually present any bias mitigation strategy, except some discussion about sufficiently large sampling. \n\n2. There seem to be some assumptions of normality in this work that might not hold. \n\n3. The overall scope of this work is somewhat limited. I didn't quite get the specifics that make these bounds hold under *medical domains* specifically (vs. domain independent). Why this is a domain paper is still a mystery to me, as none of the problem setting particularizes it to the medical domain.\n\nSmall: While the high-level analytical results are fairly intuitive, I did get lost in the theorem specifics. This could be an issue with me, or the notation, which I often found impenetrable without further highlighting or description in text, e.g. Thm 1, 6, Corr. 1, 2. \n\nOverall, in my (unconfident) estimation, this could be a reasonable domain paper with strong analytical results. However, without a mitigation strategy, with some difficulty interpreting the theorems, and without understanding the domain specificity (thus narrowing the paper), I'm not over the accept threshold on this." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "> We prove that under certain conditions, the local optima of the fairness problem can outperform those of the supervised learning problem, highlighting the importance of considering fairness criteria in model development.\n\nCan you please elaborate on what this means?\n\nPlease see the weakness section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper derives a range of theoretical results involving fairness error bounds, algorithmic complexity, generalization bounds, convergence rates, and group-specific risk bounds.\n2. The paper also conducts extensive experiments on a variety of medical datasets to confirm the theoretical findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents theoretical results regarding the fairness losses of machine learning models. These theoretical results are then validated on different medical datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Firstly, the authors frame the problem as specifically for AI-based medical diagnosis systems. This is also reflected by the mention of specifically the medical setting in both the abstract and the introduction. However, the setting being considered is much more general, and therefore should not be framed as being specific to the medical setting.\n2. The motivation behind the paper is not very clear. The authors derive a number of theoretical results; however, these results are not well motivated.
For example, it is not clear how these results can be useful in practice, or how they can help improve fair model training. \n3. There is no discussion of the implications of theorem 1. Why is it useful and what insights does it provide?\n4. The disease prevalence $r_i$ is not defined formally before assumption 1. \n5. In Theorem 2, what is the loss that the optimal function f^* minimises?\n6. I am not convinced that the result in Theorem 2 is correct. Firstly, there is no assumption on how close $\\hat{f}$ is to $f^*$. So in theory, $\\hat{f}$ could be very ‘far away’ from $f^*$ if it is not trained correctly. In this case, even as the number of data $n$ increases, the fairness errors of model $\\hat{f}$ could be very different from that of $f^*$. In specific, in the proof of this result, how do you get from line 790 to line 791 (i.e. from the second inequality to the third)?\n7. $\\epsilon$-optimality is not defined\n8. > The theorem suggests that to achieve a smaller fairness risk, one should have a larger sample size, a smaller VC dimension, and a smaller number of demographic groups (lines 263-265)\n\nThis is not necessarily true. This just means that the upper bound is small in this case, but does not necessarily mean that these parameters lead to a smaller fairness risk\n\n9. fairness risk in line 273 $R(f)$ is not defined explicitly.\n10. There is no discussion on how realistic the assumptions made are, and how robust the theoretical and empirical results are to these assumptions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "-- Could you highlight the main steps or nuances in the mathematical analysis that arise due to the difference of losses, in comparison to standard generalization bounds on loss functions? \n\n-- In the experiments section, are you comparing an upper bound with an upper bound?\n\n-- What would be the main takeaway of the experiments section?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-- Interesting mathematical analysis using techniques from learning theory such as the Hoeffding bound, VC dimension, and the symmetrization lemma. \n-- The paper is generally well-written and the ideas are quite nicely presented.\n-- They have also included experimental results to show how their upper bound can be computed in several scenarios. The main strength is that the theoretical bound seems to be computable from the experiments, so the research has both depth and applicability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents an interesting learning-theory-inspired analysis of fair machine learning, focussing on fairness measures that aim to equalize the loss across all groups. Specifically, the fairness measure is in a similar spirit to equalized odds/equal opportunity and is defined as the differences in expected loss across various demographic groups. Next, they derive interesting complexity bounds on this loss difference using statistical learning techniques like Hoeffding's inequality, VC dimension, and the symmetrization lemma.
They have also included experimental results on several datasets alongside their theoretical contributions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-- Though they mention AI-based medical diagnosis here and there including the abstract, I don't think the paper has anything unique to medical diagnosis here. I think the emphasis on medical diagnosis is a bit of a distraction and can be discussed only in experiments if needed. \n\n -- While the derivation of generalization bounds for a loss function in itself is not new, their main nuance (as per my understanding) lies in bounding the difference of the losses. They also make Lipschitz assumptions. I can increase my rating if the novelty of the analysis is distilled out.\n\n-- The problem statement is closer to accuracy-fairness tradeoffs. While the paper has referenced several early papers in this area of accuracy-fairness tradeoffs, a lot of other prior works in the last 2-3 years that are quite closely related to this work have not been discussed. \n[1] Menon, A. K. and Williamson, R. C. The cost of fairness in binary classification. In Proceedings of the Conference on\nFairness, Accountability and Transparency, 2018.\n[2] Zhao, H. and Gordon, G. J. Inherent tradeoffs in learning fair representation. arXiv preprint arXiv:1906.08386, 2019.\n[3] Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush Varshney. Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing. International Conference on Machine Learning 2020.\n[4] Garg, S., Kim, M. P., and Reingold, O. Tracking and improving information in the service of fairness. In Proceedings\nof the ACM Conference on Economics and Computation, pp. 809–824, 2019.\n\nFor instance, [1] also considers similar fairness metrics. 
[3] also looks into tradeoffs using equalized-odds-like measures and difference in errors across groups.\n\n-- I would also be curious if this type of analysis has been explored in the context of fairness in federated learning attempting to characterize the worst gap in loss across multiple clients.\n\n-- Another possible limitation: It might be difficult to extend this to demographic parity?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This work theoretically analyzes the impact of disease prevalence and data distribution on fairness in medical AI, deriving bounds and providing empirical validation on diverse datasets to understand and mitigate unfairness." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024impact,\ntitle={Impact of Data Distribution on Fairness Guarantees in Equitable Deep Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2E2q9t1MFp},\nnote={under review}\n}" }, "abstract": { "value": "Fairness in machine learning is paramount to human society because machine learning systems increasingly influence various aspects of our daily lives, particularly in consequence-critical tasks such as medical diagnosis. Deep learning models for medical diagnosis often exhibit biased performance across diverse demographic groups. Theoretical analyses to understand unfairness in AI-based medical diagnosis systems are still lacking. This work presents a comprehensive theoretical analysis of the impact of disease prevalence and data distributions on the fairness guarantees of deep learning models for medical diagnosis. We formalize the fairness problem, introduce assumptions, and derive fairness error bounds, algorithmic complexity, generalization bounds, convergence rates, and group-specific risk bounds. 
Our analysis reveals that fairness guarantees are significantly influenced by the differences in disease prevalence rates and data distributions across demographic groups. We prove that considering fairness criteria can lead to better performance than standard supervised learning. Empirical results on diverse datasets, including FairVision, CheXpert, HAM10000 and FairFace, corroborate our theoretical findings, demonstrating the impact of disease prevalence and feature distribution disparities on the equitable performance of deep learning models for tasks such as glaucoma, diabetic retinopathy, age-related macular degeneration, and pleural effusion detection. The code for analysis is publicly available via \\url{https://github.com/anonymous2research/fairness_guarantees}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Fairness in Machine Learning", "Equitable Deep Learning", "Fairness Error Bound" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/389543dc17341c784fad19fea65a7ec69a394df7.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Impact of Data Distribution on Fairness Guarantees in Equitable Deep Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2E6OK8cSoB
Semantic-Aware Diffusion Model for Sequential Recommendation
main
Active
Diffusion Model;Sequential Recommendation
generative models
3;3;5
5;4;5
3;2;2
2;2;1
3;3;2
3.666667
4.666667
2.333333
1.666667
2.666667
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper mentions using \"clean user sequences\" to generate semantic embeddings, but actual user interaction data usually contains noise (such as accidental clicks). Will the interaction sequences that have not been denoised affect the quality of semantic embeddings?\n\nThe description of Semantic Fusion Layer in the method section is relatively brief. Can you give a clearer explanation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Clear motivation: the article identifies the noise problem of the diffusion model in the recommendation task, and proposes to use semantic information as a conditional input to reduce the impact of noise. This motivation is reasonable and meets the actual needs of the recommendation system.\n\nThe experimental design is relatively sufficient: the paper conducts comprehensive comparative experiments with mainstream recommendation methods on multiple real data sets, demonstrating the advantages of SDREC in recommendation quality. The experimental design is relatively reasonable and verifies the effectiveness of the model." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a semantic-aware diffusion model SDREC for sequential recommendation tasks. SDREC enhances the model's use of item semantic information by introducing the Semantic Fusion Layer, making the recommendation generation process more accurate. This layer fuses the semantic features in the embedding table to help the model better understand the user's interest dynamics when making recommendations, thereby improving the quality of recommendations. In addition, SDREC uses a contrastive learning framework to improve the model's adaptability to different sequence patterns. Experimental results show that on multiple real datasets, SDREC outperforms a variety of existing methods in recommendation accuracy and computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Lack of clear formulas and detailed descriptions: A key component of the article is the Semantic Fusion Layer, but the specific implementation details of this module lack clear formula support and detailed descriptions of its design points. This makes it difficult for readers to fully understand the actual role of this module in the model and its contribution to the recommendation effect.\n\nNoise in user interaction sequences: The paper mentions that \"the encoder receives clean user sequences and explicitly captures the semantic relationship between items through contrastive learning.\" I understand that the author's definition of clean here refers to the original interaction sequence (no noise is introduced). But my question is that the original user interaction sequence is often not clean and may contain noisy data such as misclicks and unexpected behaviors. 
Existing research points out that user behavior data usually contains noise, and unprocessed click data may cause the recommendation model to deviate from the user's true preferences [r1]. Therefore, whether user sequences that have not been processed with noise can generate high-quality semantic embeddings and whether such semantic embeddings are conducive to diffusion models require in-depth analysis.\n\n[r1] Hongyu Lu, Min Zhang, and Shaoping Ma. 2018. Between Clicks and Satisfaction: Study on Multi-Phase User Preferences and Satisfaction for Online News Reading. In Proceedings of the International SIGIR Conference on Research and Development in Information Retrieval. ACM, 435–444" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Study the problem of unawareness of global semantics in diffusion recommenders.\n\n\n2. Design an encoder-decoder architecture to address the issue and demonstrate the performance on three datasets across multiple baselines." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper falls into the sequential recommendation, where a novel diffusion recommender that considers global awareness of item semantics is introduced. The proposed encode-decoder architecture is well-designed to learn from global semantics. However, the motivation is not reasonable, and the proposed technique is not practical in real-world scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation is not reasonable. The sequential recommendation aims to predict the next item. The recommender generally outputs the probability distribution on the item set in this setting. In other words, items with high probabilities should be ranked first by design. According to the user's past behavior in Figure 2, category 6 is the user's major interest. As a result, the concentration of distribution among the top 10 predictions is reasonable.\n\n\n2. The proposed solution is not practical. A real-world recommender usually handles millions of items. The computational complexity of the proposed solution is related to the number of items, which makes it challenging to scale up and handle new items. Thus, the inference time comparison in Table 4 will have a different conclusion when using a large-scale dataset." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. **Can You Clarify What \"Semantics\" Are Used and How They Are Encoded?**\n - The term \"item semantics\" is central to the model, but the specifics are not clearly defined. Could you provide examples (e.g., item attributes, categories) and explain how these semantics are encoded and integrated into the model?\n\n2. **What Is the Theoretical Justification for the Semantic Fusion Layer?**\n - The Semantic Fusion Layer is a novel component, but its role and theoretical basis in the diffusion process are not fully explained. Could the authors elaborate on why this specific mechanism enhances the recommendation performance compared to other methods?\n\n3. **Is SDREC Scalable to Large-Scale Real-World Applications?**\n - The paper shows efficiency on moderate-sized datasets, but how does SDREC scale to millions of users and items in real-world settings? Have the authors conducted any scalability tests or optimizations to demonstrate its readiness for large-scale deployment?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "#### 1. **Originality**\nThe paper presents a novel approach to sequential recommendation by introducing the **SDREC** model, which leverages a **Semantic Fusion Layer** to effectively incorporate item semantics into the diffusion process. 
This contribution stands out for several reasons:\n - It addresses a critical limitation in current diffusion-based recommendation methods, which often fail to utilize semantic information effectively.\n - The combination of a **contrastive learning framework** with a generative diffusion process is a creative and unique approach that differentiates this work from existing models.\n\nOverall, the originality arises from the integration of **semantic-aware mechanisms** in diffusion-based recommendation, which fills a significant gap in the current literature.\n\n#### 2. **Quality**\nThe paper demonstrates decent quality in terms of both methodological rigor and empirical validation:\n - The authors provide a **clear and detailed description** of the SDREC model, including theoretical motivations and the design choices behind its components (e.g., the Semantic Fusion Layer). \n - Extensive experiments are conducted on multiple **real-world datasets** (e.g., Amazon Beauty, Amazon Toys, and Movielens), showing consistent and significant improvements over state-of-the-art baselines.\n - The implementation details, including the training strategies and parameter settings, are well-documented, ensuring reproducibility and transparency.\n\n#### 3. **Clarity**\nThe paper is well-structured and clear in its presentation:\n - The introduction effectively outlines the problem and motivates the need for the proposed model. 
\n - The model design and methodology are described systematically, with helpful visual aids (e.g., figures and diagrams) that clarify complex concepts, such as the diffusion process and the role of the Semantic Fusion Layer.\n - The results and analysis are presented in an organized manner, making it easy for the reader to understand the comparative performance and the benefits of SDREC over baseline models.\n\nThe clarity of explanation, coupled with structured figures, enables readers to follow the technical details without ambiguity, contributing to the paper's accessibility.\n\n\n#### Summary of Strengths\nIn summary, the paper excels across multiple dimensions:\n - **Originality**: Innovative integration of semantics in diffusion processes.\n - **Quality**: Methodologically rigorous with comprehensive empirical validation.\n - **Clarity**: Clear presentation supported by visual aids and structured explanations.\n\nThe combination of these strengths makes this paper a valuable addition to the literature on sequential recommendation systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes SDREC, a semantic-aware diffusion model for sequential recommendation tasks, which aims to predict the next item a user is likely to interact with based on their historical interaction sequence. The authors highlight the limitations of existing diffusion-based recommendation models, which often fail to incorporate item semantics effectively, leading to suboptimal recommendations. To address this, SDREC introduces a Semantic Fusion Layer, an innovative component designed to enhance the diffusion process by integrating item semantic information through an attention mechanism. This approach, combined with contrastive and generative losses, ensures that item semantics are fully utilized, improving the model’s accuracy in predicting user preferences. 
\n\nThe experimental results show that SDREC outperforms state-of-the-art models, achieving over a 10% relative improvement in performance while maintaining computational efficiency, making it suitable for real-time applications. The paper demonstrates SDREC’s superiority through experiments on multiple datasets, underscoring the importance of integrating item semantics in diffusion-based sequential recommendation systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "#### 1. **Unclear Motivation and Explanation of Semantic Utilization**\nWhile the paper introduces SDREC as a model that integrates item semantics through the **Semantic Fusion Layer**, the motivation behind why and how semantics are critical in the diffusion process remains insufficiently explained. Although the authors mention that traditional models do not effectively leverage semantics, the paper does not provide a clear, detailed rationale for why this limitation specifically impairs recommendation accuracy. Additionally, the semantic information utilized (e.g., item categories, attributes) is not well-defined, leaving readers uncertain about what constitutes the \"semantics\" and how exactly it is encoded or represented.\n\n**Recommendation for Improvement**: \n - **Motivation**: The paper would benefit from a stronger motivation section that explicitly explains why semantics are crucial in sequential recommendation tasks and why their integration into the diffusion process is expected to enhance performance. 
The authors could provide theoretical justifications or empirical evidence showing the gap in current models and how the proposed method aims to bridge this.\n - **Clarification of Semantics**: The authors should clearly define what they mean by \"item semantics.\" Providing specific examples (e.g., movie genres, product categories, textual descriptions) and explaining how these elements are encoded and utilized within the model would make the approach more transparent. Additionally, it would be helpful to include an illustration or case study demonstrating how semantic information influences the diffusion process and leads to better recommendations.\n\nBy improving the clarity of motivation and the description of semantic use, the paper could strengthen its theoretical foundation and make its contributions more accessible and convincing to readers.\n\n#### 2. **Scalability Concerns with Large-Scale Datasets**\nAlthough SDREC demonstrates efficiency on moderate-sized datasets (e.g., Amazon and Movielens), the paper does not provide evidence of its scalability on **larger, real-time recommendation systems** that involve millions of users and items. Given that the diffusion process involves multiple steps and attention mechanisms, it is important to understand whether SDREC can scale without compromising latency and computational resources in a production environment.\n\n**Recommendation for Improvement**: Including a scalability analysis or experiments on larger datasets (e.g., a full-scale Amazon dataset or Netflix prize data) could strengthen the paper’s claims about the model’s efficiency and its readiness for real-world deployment. \n\n\n\n\n#### Summary of Weaknesses\nIn summary, while SDREC shows promise, the following areas need improvement:\n - Improving the clarity of motivation and the description of semantic use.\n - Evaluating the model's scalability with larger datasets." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We incorporate item semantics into the diffusion model through a Semantic Fusion Layer to enhance its performance for sequential recommendation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024semanticaware,\ntitle={Semantic-Aware Diffusion Model for Sequential Recommendation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2E6OK8cSoB},\nnote={under review}\n}" }, "abstract": { "value": "Sequential recommendation aims to predict the next click for a particular user based on their historical interacted item sequences. Recently, diffusion-based methods have achieved the state-of-the-art performance in sequential recommendation. However, they fail to effectively utilize the rich semantic information embedded in items during the diffusion process to accurately guide the generation, leading to sub-optimal results. To address this limitation, we designed SDRec, a **S**emantic-aware **D**iffusion model for sequential **Rec**ommendation. Our model introduces a novel architecture, the Semantic Fusion Layer, which leverages the embedding table from the encoder to incorporate item semantics into the diffusion process through an attention mechanism. Together with the well-designed contrastive and generative losses, SDRec effectively utilizes the item semantics in diffusion model, unleashing the potential of sequential recommendation. Our experiments show that SDRec has over 10% relative gain with superior efficiency compared with existing methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Model", "Sequential Recommendation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/588ae3b8d4859db67568719dca3160aa71c12cbd.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Semantic-Aware Diffusion Model for Sequential Recommendation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ET561DyPe
Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement
main
Active
Few-Class;lightweight;small neural network;benchmark;scaling law;image similarity;convolutional neural network;CNN;transformer
datasets and benchmarks
5;5;6;6
4;3;3;3
2;3;3;3
2;2;3;2
2;2;3;3
5.5
3.25
2.75
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* I may have missed something, but I cannot understand exactly how the proposed difficulty metric is intended to be used?\n* The proposed difficulty metric seems expensive to compute, with pairwise similarity scores required for large subsets of the data. How does this compare to the cost of conducting a single model training run?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Tab 1 / Fig 2b shows the results when training different base architectures on different classification datasets from scratch. Interestingly, not all results are correlated with the trend on ImageNet-1K, indicating the optimal architecture choice depends on the dataset.\n- The code is open sourced and well documented. It seems that it would be simple for a researcher to reproduce the authors claim with limited effort (though I have not run the code myself).\n- It is interesting that sub-models consistently outperform full models on ImageNet. The fact that full models have seen more training datapoints in total may have compensated for fewer classes, which makes the result not totally intuitive." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of choosing image classifiers for tasks with only a small number of categories (\"Few-Class\"). To do so, they introduce a new benchmark, termed \"Few Class Arena\" (FCA) on which they train and evaluate a range of models on subsets of various full datasets (e.g wit so-called sub-models trained on between 2 and all 1k ImageNet categories). The FCA benchmark is open-sourced with code available on GitHub. The paper provides detailed discussion of how the open-source package can be used for model selection in the few-class setting.\n\nOverall, the authors show that models trained on specific sub-classes (sub-models) are better than models trained on the full dataset and evaluated on the same sub-classes, across model sizes. They further show that there is no single best model architecture for a given dataset, and that training models on different datasets result in different rankings of architecture. The authors also propose a \"dataset difficulty\" metric which can be computed without training a model, and correlates well with the few-class performance of a model on a dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main issue with this paper is in overall utility. The high level goal of the paper is to provide a tool with which practitioners can select a model (dominantly through the lens of model *architecture*) for a few-class classification task. The tool basically allows authors to train a model (with most results presented from scratch) on subsets of a given dataset. However, this does not align with the practical problem to me, where practitioners might take a model pretrained on a large amount of *data* (e.g DINOv2 or CLIP) is finetuned for a given task (note that lightweight variants of these models are also open-source). 
\n\nGiven that this paper is predominantly an empirical examination which proposes a practical open-source library, I feel that the lack of experiments with pretrained models prevents acceptance. \n\nOther issues:\n\n- The citation format makes the main text quite difficult to parse\n- L52: Main text does not seem to describe Figure 1 accurately?\n- There is no discussion of few-shot literature, which is at least tangentially related to this problem" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. For FC-Full, when N_{CL} decreases, how to make sure that model predicts only few classes? Are the logits of those few classes are selected to get the prediction and discard logits of all other classes?\n\n2. If a user has a custom dataset with few classes and want to find a model that works better on this custom dataset, it would be helpful to have an explanation on how this benchmark can assist the user in this case." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "•\tAddressed an important problem by proposing the benchmarking tool.\n\n•\tThe tool is designed to be user friendly and allows to run wide range of experiments by setting few hyper-parameters. 
The tool allows benchmarking of custom models and datasets.\n\n•\tProvided a behavioral understanding between models trained on a large number of classes vs. a smaller number of classes in the few-class regime.\n\n•\tThe proposed similarity metric is shown to be linearly correlated with the model performance on a small number of classes. Such a proxy helps save the computational cost and time of conducting various experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a benchmark tool called Few-Class Arena to benchmark models on different datasets with a smaller number of classes (e.g. < 10) and proposes a similarity metric called SimSS to measure dataset difficulty. They show that ResNet family models trained on the full ImageNet 1K classes show reduced performance when tested on only a few ImageNet classes (< 10 classes). On the other hand, the same models, when trained on a smaller number of ImageNet classes from scratch, show higher performance on these classes when compared to models trained on all classes of ImageNet 1k. They show that the proposed SimSS metric can serve as a proxy to estimate the upper-bound accuracy of model performance on few-class datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite focusing on an interesting problem setting, the analysis shown in the paper has limited scope. The authors show experiments on models evaluated or trained on a smaller number of classes; however, there are no details on how these few classes were selected or how semantically close they are to each other. Would the analysis differ if those few classes were chosen differently?\n\nThe aspect of transfer learning has not been discussed. 
It is common practice to finetune ImageNet-pretrained models like ResNet50 or ViT, or recent foundation models like CLIP and DINOv2, on different downstream tasks, which includes adapting or finetuning them on few classes. The analysis presented in the paper is missing this exploration. Is it better to train the models from scratch on the few classes, or does finetuning these models work better for few classes? Does the SimSS score also align for the finetuned models?\n\nTo compute SimSS, a score called Nearest Inter-Class Similarity requires a nearest class (C_hat) to the target class (C); it is not clear how this C_hat is acquired.\n\nOverall, I appreciate the motive and tool for benchmarking the few-class regime; however, the analysis presented in the paper is incomplete, and I suggest the authors extend their analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What original research do you expect will use this benchmark, and what do you hope it will achieve or unlock?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The benchmark is well executed and will be useful for “few-class” adaptation research. There are many models evaluated with many datasets and the analysis is thorough.
In particular, the authors study in depth the evolution of the performance of models trained on a large set of classes compared to specialized models, as a function of the number of classes, show its importance, and therefore propose a metric for evaluating this adaptation.\n\n- The similarity benchmark is a nice addition. It correlates well with the performance while being easy to evaluate at a modest cost.\n\n- The presentation and writing are very clear. The figures are very informative." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new benchmark for the “few-class” problem, which is a classification problem with very few classes. Most of the scientific literature focuses on datasets with many classes, while practitioners often encounter the few-class scenario. The benchmark consists of several selected datasets and several settings, such as training on a large set of classes and evaluating on a smaller set, and popular vision models are evaluated and compared. Finally, an analysis of what happens in few-class regimes is proposed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation behind few-class evaluations is not fully convincing. In practice, one will take a large model and fine-tune it (without the classification layer) on a target set of classes, hence obtaining a specialized model. Evaluating the capabilities of a full model on few classes is only interesting when there are too many subsets to consider? When does that happen in practice, and could you not just use small adaptation layers on top of a frozen backbone for each of the subsets?\n\n- One thing that is missing from the paper is a recommendation for practitioners on which vision model to use for someone interested in the few-class problem. Basically, discussing in more detail the results from Table 1 and providing a comparison between models in Sections 4.2 and 4.3.
One interesting question is: are models that perform really well in the many-class setup the same ones that also perform well in the few-class setup?\n\n- Some of the findings in the paper are fully expected. The fact that a model specifically trained on the target subset of classes performs better than a larger model trained on a superset is not very surprising or novel." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. (Related to W2) The finding that the scaling law w.r.t. model size is violated for submodels is interesting. I would be curious if this only applies to supervised training or also to self-supervised pre-training combined with minimal fine-tuning or linear probing. Did you conduct any experiments in this direction, or do you have an intuition on that?\n2. You mention that you conducted experiments on different architectures such as ResNets and ViTs. However, the results are presented in an aggregated way. Did you find any significant differences between model architectures? Do the findings of Fig. 1 apply equally to both architectures?\n3. I agree with your description and ad-hoc interpretation of Fig. 5. However, I am missing a discussion on why we see the low correlation for DCN-Full. Do you have any interpretation of this?\n\n\n**Minor comments:**\n\n- The correct use of author-year citations would improve readability.
I.e.: Author (2024) for in-text citations and (Author, 2024) otherwise." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The necessity of a few-class benchmark is well-motivated by a strong finding that models pre-trained on many-class datasets perform worse than expected on few-class datasets, which is an issue not addressed in the literature thus far.\n2. The paper presents some interesting insights that contradict expected model behavior based on intuition.\n3. The authors provide a ready-to-use codebase integrated with other frameworks and libraries, ensuring usability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a benchmark designed to evaluate and select efficient image classification models in scenarios with a limited number of classes. This setting, common in real-world applications (e.g., 2-10 classes), contrasts with widely used benchmarks like ImageNet and COCO, which involve hundreds or thousands of classes. The paper presents FCA as a tool to help researchers and practitioners efficiently select models for few-class tasks. The paper coins the term ``few-class regime'' and presents some interesting non-intuitive insights regarding the performance of models that are pre-trained with many-class datasets and then applied in few-class settings.\nIn addition, they introduce a dataset difficulty metric by inverting image similarity measured via CLIP and DINOv2 features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The scope of the study only covers image classification, while it is in principle also applicable to dense prediction tasks where object classes are present (e.g., object detection, semantic segmentation).
It would be insightful if the same findings hold for these tasks. The authors could add a discussion or experiments (if available) for other vision tasks.\n2. The experiments cover classification tasks based on supervised (pre-)training. However, there is an increasing trend that classification models are fine-tuned based on self-supervised pre-trained models. This paradigm is not covered in this study, and therefore, the findings are limited to the more traditional fully supervised paradigm. The authors could discuss these aspects or add experimentation with self-supervised pre-trained models (if available)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a Few-Class neural network benchmark for model selection with deep analyses." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024fewclass,\ntitle={Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ET561DyPe},\nnote={under review}\n}" }, "abstract": { "value": "We propose Few-Class Arena (FCA), as a unified benchmark with focus on testing efficient image classification models for few classes. A wide variety of benchmark datasets with many classes (80-1000) have been created to assist Computer Vision architectural evolution. An increasing number of vision models are evaluated with these many-class datasets. However, real-world applications often involve substantially fewer classes of interest (2-10). This gap between many and few classes makes it difficult to predict performance of the few-class applications using models trained on the available many-class datasets. To date, little has been offered to evaluate models in this Few-Class Regime.
We conduct a systematic evaluation of the ResNet family trained on ImageNet subsets from 2 to 1000 classes, and test a wide spectrum of Convolutional Neural Networks and Transformer architectures over ten datasets by using our newly proposed FCA tool. Furthermore, to aid an up-front assessment of dataset difficulty and a more efficient selection of models, we incorporate a difficulty measure as a function of class similarity. FCA offers a new tool for efficient machine learning in the Few-Class Regime, with goals ranging from a new efficient class similarity proposal, to lightweight model architecture design, to a new scaling law. FCA is user-friendly and can be easily extended to new models and datasets, facilitating future research work. Our benchmark is available at https://github.com/fewclassarena/fca." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Few-Class", "lightweight", "small neural network", "benchmark", "scaling law", "image similarity", "convolutional neural network", "CNN", "transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/faee30214550acbd447b439fb62fed02f1d19dca.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9799eb1de73a892cf597ec8162e99398f7c1eafd.zip" }, "title": { "value": "Few-Class Arena: A Benchmark for Efficient Selection of Vision Models and Dataset Difficulty Measurement" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ErS9Bkc3O
Towards unlocking the mystery of adversarial fragility of neural networks
main
Active
deep learning;adversarial attack;adversarial robustness
learning theory
3;5;5;5
3;2;2;4
2;2;2;3
2;2;2;2
2;1;3;3
4.5
2.75
2.25
2
2.25
-0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Doesn't the perturbation e chosen is a one gradient step attack with simple targeted loss? \n2. Does x + \\epsilon x1, and x+ \\epsilon x2 necessarily classified differently? it seem plausible that the classification is different only for very big \\epsilon. have you tested it?\n3. What does the experiments shows?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Writing: The paper is well-written and effectively communicates its final goal from the outset. The intuitions behind the proofs are presented in a highly accessible manner.\n\nThoroughly Detailed Experiments: The experiments are described with great clarity and organization, providing all the necessary details for readers to fully understand the methodology and findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a theoretical analysis of adversarial robustness in several specific contexts. It examines the dataset and adversarial perturbations across different settings, starting with a random linear network and progressing to trained multi-layer non-linear networks and arbitrary multi-class datasets. 
The authors conduct experiments with 12-dimensional synthetic data and linear or two-layer networks to support their theoretical findings regarding the sizes of adversarial perturbations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Lack of Related Research: The paper overlooks existing theoretical work on random networks, such as \"Adversarial Examples in Multi-Layer Random ReLU Networks\" by Bartlett et al. \n\nConcepts: With the exception of Theorem 7, the settings discussed are largely unrelated to each other or to real-world scenarios. Theorem 7 relies heavily on linearity (although it is claimed to apply to highly non-linear networks), stating only that changes in output due to input perturbations can be captured through projections on the relevant gradients.\n\nOverstating Generality: The paper makes broad claims about phenomena related to dimensionality that are primarily observed in random networks, a point that is only briefly mentioned in the introduction and not sufficiently discussed throughout the rest of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "ref weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I appreciate the clear presentation and detailed theoretical derivation of this manuscript." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript provides a theoretical investigation into the robustness of DNNs in classification tasks. Through rigorous matrix-theoretic analysis, they establish that the minimum adversarial perturbation—the smallest input modification required to change a network's classification decision—exhibits an intrinsic relationship with input dimensionality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am not an expert in this theoretical area, thus I cannot check all proof details and judge the theoretical contribution.\nFrom my perspective, the conclusion of this work---adversarial robustness can degrade as the input dimension d increases---is not rigorous. \n\n* What if the additional dimension of $\bf x$ is correlated with other dimensions? I.e., the new dimension does not bring any new information, would it degrade the robustness? \n* On the other hand, if the new dimension brings new information, the new $\bf x \in R^{d+1}$ and the prior $\bf x \in R^{d}$ are drawn from different data distributions. How to compare the robustness of DNNs over different data distributions?\n* How to compare the norm for variables with different dimensions?
I.e., let $\\bf \\delta_1\\in R^d$ and $\\bf \\delta_2\\in R^{d+1}$, can we directly compare $||\\delta_1||_2$ and $||\\delta_2||_2$? They are in different dimensions, for example, can we say volume > area > length?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethical concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See suggestions for improving the paper." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "A better understanding of adversarial attacks and robustness of neural networks remains an important topic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a theoretical analysis of why neural networks classifiers are susceptible to adversarial perturbation, ie, adversarial fragility: specifically why small, targeted perturbations can dramatically change their classification outputs. The authors challenge existing theories, which attribute this fragility to factors like smoothness of decision functions or curvature of decision boundaries, arguing these approaches only partially address the problem. 
The authors present a matrix-theoretic analysis of this problem and explore how neural networks' robustness declines as the input dimension increases, theorizing that their adversarial robustness is inherently limited to approximately $1/\sqrt{d}$ of optimal robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "To the best of my knowledge, the paper's conclusion that adversarial robustness can only be $1/\sqrt{d}$ is already known [1, 2] and has been shown in more general settings.\n\n- The theoretical analysis is weak, as all the theorems make important and unrealistic assumptions, i.e. normal distribution of the data, constraints on the weight matrices.\n- $\ell_2$ is the only distance considered, several other papers have proposed theoretical analysis with respect to the $\ell_p$ norm (see [1, 2]).\n- Theorem 1 spans 2 full pages to show a probabilistic bound over a linear network with several assumptions, but it's unclear why the authors went to all this work, since the distance to the decision boundary for a linear network can be computed in closed form. \n- The paper proposes a total of 7 theorems, each of which is accompanied by a proof. \n- The paper does not propose any related work\n- The paper does not provide usable results \n- The experimental section only proposes toy experiments \n\nSuggestions for improving the paper:\n- Instead of presenting a list of theorems, the authors should motivate their analysis and explain why it's interesting. How can these results help the community? Even if the theoretical analysis has assumptions, how can it be useful for real-world applications?\n- Authors should propose a related work section and compare their analysis with other work. How is their analysis better or novel than the competing work?\n- Authors should propose real-world experiments; adversarial robustness is now a mature research topic, and large-scale (e.g.
ImageNet) experiments should be performed. \n\n[1] Yang et al. Randomised smoothing of all shapes and sizes. ICML 2020 \n[2] Kumar et al. Curse of Dimensionality on Randomised Smoothing for Certifiable Robustness. ICML 2020" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1.\tBased on the assumptions in Eq. (1) and Eq. (5), under what conditions can $w_i$ satisfy these assumptions? Does this imply that new constraints have been added to $w_i$? The authors should discuss the practical feasibility of these conditions and how they relate to real-world DNNs.\n\n2.\tIn Theorems 1 and 4, what does \"with high probability\" specifically refer to? Please provide a rigorous definition in the main text or appendix." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "$\\bullet$ Exploring the smallest magnitude of perturbations that can change model output is intriguing. The paper also provided detailed derivations and proofs.\n\n$\\bullet$ The comparison between the adversarial robustness of minimum-distance classifiers and DNNs is also noteworthy." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studied the smallest magnitude of perturbations that could alter the model output, particularly in linear cases under several assumptions. The authors demonstrated that the adversarial robustness of models degraded as the input dimension $d$ increased. Besides, they analytically showed that the adversarial robustness of linear networks could be $1/\\sqrt{d}$ of that of the minimum-distance classifier." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Several theorems were based on assumptions that were too strong, and provided little assurance that the analysis of \"adversarial robustness of neural networks\" can be generalized to any two-layer DNN, including: \n\n$\\bullet$ Theorem 1 analyzed the robustness of a two-layer linear network under the following assumptions: (a) each dimension of the training samples follows a standard Gaussian distribution, (b) the activation layer is identity matrix $I$, (c) the linear matrix $H$ is an orthogonal matrix, and (d) for the i-th sample, each i is a distinct label, with the model outputting a score of 1 for category i, and a score of 0 for all other categories, as stated in Eq (1). \n\n$\\bullet$ Theorem 4 used similar assumptions.\n\nTo improve it, the authors could include detailed discussions of how Theorems 1, 4, 6 and 7 might extend to, or provide insights into, more general neural network architectures used in practice. \n\n2.\tWriting: The authors could revise the manuscript to highlight the physical significance of each theorem, the limitations, and the insights for the DNN robustness community. 
For example, the authors could discuss potential implications of Theorem 7 for adversarial attacks on real-world DNNs.\n\n$\\bullet$ It is suggested to focus on the potential significance and application scenarios of each theorem and lemma, and the general ideas of the proofs in the main text, while moving the detailed proofs (e.g. Lines 151-244) to the appendix. For example, for Theorem 7, what is the actual context in which \"the classifier wrongly think the input is $x+\\epsilon x_2$ instead of $x+\\epsilon x_1$\"? \n\n$\\bullet$ Since each theorem has different assumptions, it would be beneficial to make a table that clearly lists the assumptions of each theorem, indicating which theorems represent purely ideal cases and which can be generalized to typical DNNs.\n\n3.\tThe abstract and introduction contained overclaims. Most of the theorems presented in the main text were derived under strong assumptions, and some conclusions had its restrictions, e.g., in linear networks. For example, in the abstract, the authors demonstrated \"neural network’s adversarial robustness can degrade … only $1/\\sqrt{d}$ of the best possible adversarial robustness.\" What is \"best possible adversarial robustness\"? Do these results apply to any DNN, or only to linear networks, or only to two-layer linear networks? Are there some strict assumptions implicit in this conclusion? The authors should revise the abstract and introduction to clarify the conditions under which the conclusions are applicable.\n\nBesides, the authors could conduct validation experiments on typical DNNs (e.g. validating Theorem 7 on the convolutional neural networks such as the LeNet or ResNet-18). And a thorough discussion of the limitations is recommended." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards unlocking the mystery of adversarial fragility of neural networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ErS9Bkc3O},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we study the adversarial robustness of deep neural networks for classification tasks. The adversarial robustness of a classification algorithm is defined as the smallest magnitude of possible additive perturbations that can change the output of the classification algorithm. We provide a matrix-theoretic explanation of the adversarial fragility of deep neural network. In particular, our theoretical results show that neural network's adversarial robustness can degrade as the input dimension $d$ increases. Analytically we show that neural networks' adversarial robustness can be only $1/\\sqrt{d}$ of the best possible adversarial robustness. Our matrix-theoretic explanation is consistent with an earlier information-theoretic feature-compression-based explanation for the adversarial robustness of neural networks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "deep learning", "adversarial attack", "adversarial robustness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b461a246c523c9f6b290eb25edc8df6b822ee35f.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards unlocking the mystery of adversarial fragility of neural networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Ey2hkFicp
Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning
main
Active
Large Language Model;Reasoning;Biology;Biological System;Pathway;Agent
applications to physical sciences (physics, chemistry, biology, etc.)
5;6;6;6
5;3;1;3
4;3;4;3
2;3;3;3
3;3;4;2
5.75
3
3.5
2.75
3
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In Figure 4, how are the lines fitted? For the right figure (open-ended questions), the gap between CoT and PathSeeker is very small. What is the standard deviation?\n\n2. In Table 2, Table 3, Table 6, and Figure 5, please add what metrics and units are used. Also add the evaluation method in the Experiment section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Clear identification of research gap: I think it is an interesting question whether LLMs can reason about biological pathways, and how well they do it. The authors have identified the limitations here clearly.\n\n2. Innovative benchmark: BioMaze is a valuable contribution to the field, providing a systematic evaluation framework for assessing LLM performance across various dimensions of biological pathway reasoning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses a gap in the ability of LLMs to reason about biological pathways, especially under complex perturbations, interventions, and varying conditions.
To address this gap, the authors first introduce a new benchmark, BioMaze, that contains 1.3k high-quality questions for biological pathways reasoning.\n\nNext, the paper evaluates LLMs on BioMaze with existing reasoning methods and finds that they struggle with perturbations. Then the authors propose a new reasoning approach, PathSeeker, that reasons through subgraph-based navigation within pathway graph. PathSeeker achieves better performance in biological reasoning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Data presentation is not very clear. For example, when the paper evaluates the performance of different models and reasoning methods, it simply writes \"performance\" without defining the metrics. Therefore, it is not clear whether a higher number means a better performance. In Table 2 and 3, the authors underline the lowest results, which is confusing.\n\n2. Baseline choice is not clear. The paper uses CoT as a baseline in 5.3.1 Task Analysis. I think a better baseline may be a method with pathway graph augmentation since PathSeeker also uses pathway graph augmentation.\n\n3. Analysis is not thorough enough. If the authors want to claim that PathSeeker reduces the performance gap between natural and intervened/perturbed groups, then they should provide more evidence and analysis on them." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "\"We then apply multiple data filters and validation steps to ensure the correctness, quality, and relevance\nto biological pathways. The correctness of each question is validated by checking whether LLMs\ncan answer it accurately using the original paper content, allowing us to exclude question-label pairs\nwith errors. Question quality is ensured through several filters, removing questions that are poorly\ndefined, unpredictable (e.g., asking for specific measurement values), query more than one fact, are\ntrivial with answers revealed in the question’s context, or are unrelated to biological pathways. After\nall the filters, BioMaze contains 1.3k high-quality questions for biological pathways reasoning\"\nCan you give me more confidence that these questions all have a single right answer that can be answered from the context? To what degree are the manually verified? Filters are great, but where does the buck stop? \n\nA pathway is essentially a knowledgebase. It would be good to connect this work to recent approaches that use knowledgebase graph structure in RAG, such as GraphRAG. Indeed, generally speaking, contextualization within prior work could be stronger.\n\nBiggest question: Why did you not run eval on cutting-edge larger models or larger open-source models like your LLaMa-3.1-405B, or fine-tuned SLMs. Bit sus. Willing to upgrade review if this concern is addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* The benchmark is a solid contribution. 
The authors did good work in breaking down the benchmark by various categories.\n* PATHSEEKER has promise, though I wish it were better motivated and contextualized within related work in systems biology, graph reasoning tasks with LLMs, and graph-based RAG techniques.\n* The breakdown of failure modes for LLM reasoning over pathways, particularly in terms of causality, and showing how the graph augmentation helps is useful. Breaking down the reasons for failure with human validation is also a useful contribution and I wish I saw more of that." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a benchmark to evaluate LLMs' reasoning abilities about biological pathways, including perturbed pathways. The benchmark is diverse and covers different biological domains and scenarios.\n\nThe authors' evaluations show that while LLMs understand natural mechanisms well, they struggle with intervention scenarios.\n\nThe authors propose PATHSEEKER, an LLM agent that navigates pathway graphs using subgraph-based exploration. This approach improves reasoning accuracy, including accuracy for intervention scenarios.\n\nKey contributions:\n\n* BioMaze Benchmark\n* Evaluation of LLMs on benchmark\n* PATHSEEKER Agent, analysis of its performance on benchmark, its failure modes, and ablation study" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* With PATHSEEKER, I think there is a lack of motivation for exploring the pathways via subgraphs other than \"Inspired by how humans browse web networks\". I don't disagree with this approach per se, but I don't think the authors motivate doing it this way as opposed to, say for example, including the whole graph or a big chunk of it in the prompt template. Indeed, as an experiment I pasted an XML file of a MAPK KEGG map into GPT-4's context window and it fits.
And if something doesn't fit, context windows will get bigger. I think the authors should motivate the local approach, for example, by citing work that demonstrates failure modes for graph-based reasoning with LLMs, and citing work that shows how local approaches do better. \n* I find it concerning that the authors did not include results for a cutting-edge model like GPT-4, Claude, or PaLM 2, and limited tests to GPT-3.5 and Llama-3 8b, neither of which were fine-tuned for performance in this domain. The gap between GPT-3.5 and GPT-4, as an example, on general medical QA performance is quite large. This makes me worry the benchmark might already be saturating on more advanced models. Budget could have been an issue, but they could have fine-tuned GPT-3.5 (perhaps on hold-out data from their benchmark), or they could have used their instance of LLaMa-3.1-405B to answer questions as well as evaluate them. Similarly, they could have used other fine-tuned open-source models to evaluate." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Maybe the authors can try to explain why particular types of errors occur in the categorization.\n2. Maybe use other models to validate the ground truth answers."
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The study is very comprehensive. I like the rigorous experimental design that systematically evaluates different aspects of pathway reasoning.\n2. It contributed to the field of BIOLOGICAL PATHWAY REASONING by making benchmarks and the problem formulation combining biological pathway reasoning with LLM capabilities.\n3. I also found myself enjoy reading the paper and like the well-structured presentation progressing logically from problem motivation to solution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces BioMaze, a large-scale benchmark for evaluating large language models' ability to reason about biological pathways. The authors also introduced PATHSEEKER, a new approach to enhance LLMs' performance on these tasks.\nThey found that while LLMs can understand basic biological mechanisms but LLMs struggle when asked to reason about perturbations or interventions in biological systems. \nThrough their experiments, they observed that LLMs perform worse on perturbed systems compared to normal conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The author presents error categorization but it doesn't provide detailed analysis of when and why particular types of errors occur. If the authors can provide more analysis of the occurance, it would be nice.\n2. The validation of ground truth answers relies heavily on LLMs themselves (LLaMA 3.1-405B and GPT-4). This circular dependency could reinforce existing model biases." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Pathway graph limitations: This paper highlights that faulty reasoning persists even with pathway augmentation, especially with perturbations. Could the authors provide more insight into potential sources of error in the pathway graph data? Is it the case that some specific cases or graph structures are more challenging for the LLM to navigate, and some are easier for LLMs to handle?\n2. Handling multi-step reasoning decline: Given that CoT reasoning shows decreased accuracy with increased steps, have the authors considered alternative strategies or mechanisms, such as hierarchical reasoning, to mitigate this drop in performance, or are those questions just naturally challenging? \n3. Error analysis: The error analysis indicates that omissions remain an issue with PATHSEEKER. What approaches might the authors consider to address these issues, especially when key pathway branches are missed? Could further database expansion, enhanced subgraph search criteria, or developing a different graph search algorithm improve the performance?\n4. Using RAG: would authors consider incorporating RAG into this framework given the graph structure of biological pathways? Specifically, RAG could allow the model to retrieve specific or relevant information from related literature or pathway databases. 
This retrieval would provide the LLM with dynamic access to more detailed and more recent biological knowledge, instead of the graph structure constructed from a fixed database (KEGG), as currently used in the paper. \n5. Evaluator setting in this paper: this paper proposes using the Llama 3.1 405B model as the evaluator of LLMs' outputs. As this is costly to run multiple times, would the authors consider any alternative evaluation approaches, such as applying rule-based methods or using alternative LLMs, to strengthen the statistical validity of the benchmarking results?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. BioMaze benchmark for biological pathway reasoning: The authors present BioMaze, a benchmark dataset designed to evaluate LLMs’ reasoning abilities within a biological context. BioMaze focuses on assessing how well LLMs comprehend and reason about complex biological pathway phenomena, including cause-effect relationships in natural and perturbed conditions. Curated from the literature, this dataset includes high-quality questions and answers generated with Llama 3.1 405B and GPT-4. Covering multiple biology subfields, BioMaze undergoes extensive filtering and validation to ensure relevance, accuracy, and diversity of pathway scenarios.\n2. Pathway graph augmentation via PATHSEEKER agent model: Given that biological pathways are naturally structured as networks, the authors incorporate pathway graph data to improve LLM reasoning. They introduce PATHSEEKER, a novel graph-augmented agent that navigates pathway subgraphs to enrich LLM understanding and support reasoning in complex pathway contexts. This approach allows LLMs to access and utilize structural information essential for nuanced pathway reasoning, particularly in scenarios involving biological interventions.\n3.
Comprehensive evaluation and analysis: The paper conducts a thorough evaluation across multiple LLM models and experimental settings, systematically analyzing LLM performance with and without pathway graph augmentation. Additionally, the ablation study of PATHSEEKER explores its effectiveness by examining API usage, step distribution, and performance impact. These analyses further strengthen the value of pathway augmentation, validating the importance of PATHSEEKER in enhancing LLMs’ reasoning capabilities in biological pathway contexts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study explores the under-examined ability of LLMs to reason about biological pathways, particularly focusing on how system perturbations affect downstream biological processes. The authors introduce the BioMaze dataset, a benchmark designed to assess LLMs’ reasoning on how various interventions, like mutations, infections, or treatments, impact downstream targets through complex pathway mechanisms across different biological contexts. With this dataset, the authors then test LLMs with reasoning techniques such as Chain-of-Thought (CoT) and graph-augmented methods, and they find that while LLMs can understand basic biological mechanisms, they struggle with predicting effects after perturbations. To enhance the reasoning ability of LLMs, the authors also developed PathSeeker. In this novel approach, the LLM agent navigates pathway subgraphs to improve performance in pathway reasoning, particularly in scenarios with biological perturbations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited evaluation method for open-ended questions: outputs from different LLMs are evaluated by another LLM, specifically using the Llama 3.1 405B model, which is considerably powerful but would be costly to replicate the results. 
It would be more helpful if the authors could consider some alternatives, such as rule-based keyword matching, ROUGE scores, or embedding-based summarization methods, to compare how similar or dissimilar answers from LLMs are to the ground truth answers. Another alternative could be to construct different evaluation methods based on the failure modes discovered later from the error analysis study. \n2. see questions" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024benchmarking,\ntitle={Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Ey2hkFicp},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have demonstrated remarkable performance across various domains of biology, but their ability to reason about biological pathways remains underexplored. This includes reasoning about how perturbations in biological systems lead to various downstream effects through complex intermediate processes. Such reasoning is crucial for explaining and predicting biological phenomena, as well as for formulating hypotheses and designing experiments.\n\nIn this study, we investigate whether LLMs can effectively understand and reason about biological pathways by introducing BioMaze, a comprehensive benchmark focusing on reasoning about the effects and mechanisms of natural and synthetic interventions—such as mutations, infections, or treatments—on various downstream targets under different conditions through complex intermediate pathway processes.
BioMaze spans multiple biological domains and is categorized along three reasoning dimensions, capturing various aspects of pathway reasoning.\n\nWe evaluate LLMs using the BioMaze benchmark with reasoning methods like Chain-of-Thought (CoT) and pathway graph-augmented approaches. Results show that while LLMs can understand mechanisms in natural organisms, they struggle with predicting phenomena after perturbations, highlighting their limitations in reasoning about biological pathways. To address these challenges, we propose PathSeeker, a novel LLM agent that interactively reasons through subgraph-based navigation within the pathway graph. This approach enhances LLMs' reasoning in biological pathways by leveraging pathway graph augmentation, particularly in cases involving perturbations, potentially bridging the gap between LLMs' current capabilities and the complexities of biological systems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "Reasoning", "Biology", "Biological System", "Pathway", "Agent" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/716718cdf9285f8e116742e52bfac964bc2253b7.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Benchmarking and Enhancing Large Language Models for Biological Pathway Reasoning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Ez4dhU3NG
SPLR: A Spiking Neural Network for Long-Range Temporal Dependency Learning
main
Active
spiking neural networks;long range dependencies;event data modelling;hippo matrix;state space models
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5;5;6
4;4;3;3;5
2;1;3;3;3
2;2;2;3;2
1;1;3;3;3
4.4
3.8
2.4
2.2
2.2
0.089087
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- How does the dendritic attention layer differ from a current-based (CUBA) leaky integrate-and-fire (LIF) neuron model?\n- How are spikes produced between the SPLR convolution layers?\n- How are convolutions applied within the proposed model?\n- In equation (1), is the variable $u(t)$ a binary vector representing input spikes?\n- How does the inclusion of a decay matrix in the HiPPO framework enhance memory retention?\n- Could you clarify the setup for the Sequential CIFAR-10 and CIFAR-100 tasks? How are frames sequenced? Similarly, could you elaborate on the experimental setup for the other datasets?\n- For clarification, could you specify what spikes $i$ and $j$ refer to in line 187?\n- Is the manuscript proposing a new type of spiking neuron, or an entire network architecture?\n- Since the manuscript emphasizes improving SNNs' capacity to handle long-term dependencies, could you elaborate on why simple LIF models face challenges with this?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The manuscript presents an innovative approach by augmenting spiking dynamics with state-space model dynamics, potentially enabling spiking models to tackle more challenging tasks that require capturing long-range temporal dependencies. This direction could be of significant interest in broadening the applications of spiking models in complex temporal tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces SPLR, a spiking neural network model designed to capture long-range temporal relationships by integrating state-space dynamics with spiking neuron models and augmenting the HiPPO framework to handle spike-driven inputs. The proposed model reportedly achieves high performance comparable to other models on event-based datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the proposed method is intriguing and addresses the relevant challenge of enabling spiking models to capture long-term dependencies, the manuscript has several critical weaknesses. \n\nFirstly, the presentation lacks clarity, making it difficult to fully grasp how the method works, interpret the experimental results, or potentially reproduce the findings. Essential details expected in a research paper, such as a discussion of related works, are missing (e.g., [1]). 
Additionally, fundamental concepts necessary to understand the work are not well-introduced; although state-space models (SSMs) have gained popularity recently, they are not widely understood in machine learning, so a brief overview would be beneficial.\n\nThe manuscript also omits essential citations, including the original HiPPO framework, which is central to this work, and does not offer a proper explanation of how it functions. The equations are unclear; while convolutions are frequently mentioned, no equations illustrate how or where convolutions are applied. Variable definitions are sometimes confusing or incomplete; for instance, on line 187, $\\Delta t$ is described as the time difference between spikes $i$ and $j$, but it is unclear what $i$ and $j$ refer to in the context of the matrix $F_{ij}$, as it operates over the hidden state rather than directly on spikes.\n\nRegarding the experiments, the manuscript lacks details about the setup, hindering the interpretability of the results. For example, it’s unclear what “Sequential CIFAR-10” entails, such as the sequence length or frame generation process. Similarly, for the DVS Gesture dataset, it's ambiguous whether the processing was done for independent events or if events were accumulated into event frames.\n\n[1] Stan, MI., Rhodes, O. Learning long sequences in spiking neural networks. Sci Rep 14, 21957 (2024). https://doi.org/10.1038/s41598-024-71678-8" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors propose SPLR which effectively captures long-range temporal dependencies, addressing limitations in traditional SNNs and enhancing temporal modeling capabilities for complex event-driven tasks.\n2. The experiments show good results. SPLR achieves both computational efficiency and scalability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces SPLR (Spiking Network for Learning Long-Range Relations), designed to efficiently capture long-range temporal dependencies while maintaining the hallmark efficiency of spike-driven architectures. SPLR integrates a state-space convolutional layer and a Spike-Aware HiPPO (SA-HiPPO) layer, addressing the limitations of conventional SNNs in complex temporal modeling. The SPLR convolutional layer leverages state-space dynamics to enhance feature extraction, capturing spatial and temporal complexities in event-driven data while preserving the efficiency of sparse spike-driven processing. The SA-HiPPO layer adapts the HiPPO framework to spike-based formats, enabling efficient long-term memory retention. Through dendrite-based spatiotemporal pooling and FFT-based convolution techniques, SPLR demonstrates scalability when processing high-resolution event streams and outperforms traditional methods across various event-driven tasks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The primary weakness of this paper lies in its writing, which significantly hinders clarity and understanding. A reorganization is recommended to improve readability and logical flow.\n\n1. Writing and Structure. For instance, Section 2 presents the SPLR components sequentially but lacks an overview that connects each part to the overall model structure, making it difficult for readers to understand how the parts interact. Section 3 is dense with theoretical content and proofs but does not clearly convey the main ideas, making it hard to follow the section’s intended focus.\n\n2. Lack of Citations. The paper frequently omits citations in crucial areas. For example, although modifications to the HiPPO framework are proposed, no supporting references are provided. Furthermore, DH-LIF is introduced without citation, and the reference for this component is missing from the bibliography, weakening the academic rigor of the paper.\n\n3. Confusions and Errors. There are several errors and confusions throughout the paper, such as the incorrect abbreviation of the Spiking State-Space Model as SPLR in line 855. Such errors further impact the readability and precision of the work." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. This paper only compares FLOPs vs. 
accuracy between the proposed SPLR and other models. Does SPLR have an advantage over other methods in terms of inference latency?\n2. The ablation studies only examine the effects of removing the dendrite attention layer and replacing SA-HiPPO with LIF. What if we replace NPLR decomposition and FFT convolution with standard convolution?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written and technically solid. Each module in SPLR is introduced in detail and highlighted in different colors. This paper presents a detailed theoretical analysis, including the long-range dependency capability and stability of SPLR.\n2. The proposed SPLR model achieves competitive accuracy with less computational overhead than other state-of-the-art models on the Celex-HAR dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a Spiking Network for Learning Long-Range Relations (SPLR). The proposed SPLR model comprises the dendrite attention layer, the Spike-Aware HiPPO (SA-HiPPO) layer, and the SPLR convolution layer. These modules enhance the long-range temporal dependency learning capability of SPLR. Experimental results demonstrate that SPLR outperforms prior methods in tasks requiring both fine-grained temporal dynamics and the retention of long-range dependencies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed SPLR model incorporates several non-spike operations, including the NPLR decomposition and FFT convolution. It makes SPLR a hybrid architecture instead of a pure spiking neural network. The hybrid nature may compromise its hardware compatibility and make it difficult to deploy on neuromorphic hardware." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This work explores the combination of Spiking Neural Networks (SNN) and State Space Models (SSM), which is an interesting direction. Using state-space methods to improve SNNs' ability to model long-term dependencies holds promise.\n2. The experimental comparisons in this work are extensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a spike SSM method named Spiking Network for Learning Long-Range Relations (SPLR). The proposed SPLR convolutional layer leverages state-space dynamics to enhance feature extraction while retaining the efficiency of sparse, event-based\nprocessing, and incorporates a Spike-Aware HiPPO (SA-HiPPO) matrix that allows SPLR to effectively maintain long-range memory by adapting the HiPPO framework for discrete, spike-driven inputs. 
The authors tested their method on several datasets, such as Celex HAR, DVS128 Gesture, and Sequential CIFAR-10/100." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This work requires comprehensive improvements, with the main weaknesses outlined as follows.\n\n1. Writing: The writing in this work requires careful and comprehensive improvement, covering the overall organization of the paper, paragraph structure, and numerous details that need refinement. (1) The authors placed the related work section in the supplementary materials and omitted citations to many key works, which can confuse readers. For instance, the authors did not cite relevant papers when HIPPO was first mentioned (in fact, there are almost no citations in paragraphs 3 and 4 of the introduction). (2) The methodology section is not clearly explained. What type of spiking neurons does this work use? How does SPLR integrate with spiking neurons? The authors repeat content introduced in the main text within the supplementary materials. Lines 147-150 reiterate the significance of SPLR, but readers are likely more interested in the methodological details and the rationale behind the proposed approach’s significance. Unfortunately, these critical details are missing. (3) In the theoretical discussion, the authors present several theorems but do not clarify why these are necessary. While sparse properties and FLOPS reduction are mentioned, the evaluation details are not provided. 
Additionally, what does HIPPO stand for? Shouldn’t the authors explain these abbreviations first? (6) Overstatements: The paper is filled with terms like \"spike-driven,\" \"asynchronous,\" and \"real-time.\" As I understand it, \"spike-driven\" implies a purely additive network[2], yet the pink section in Figure 1 seems unable to achieve this. Regarding \"asynchronous,\" the authors’ explanation in lines 92-95 is too brief, making it difficult to discern what kind of preprocessing the network applies to the data.\n\n2. Motivation. The authors repeatedly state that the proposed SPLE can address the challenge of modeling both short- and long-term dependencies in SNNs. However, they fail to analyze why SNNs have limitations in this area and why their proposed method can solve this issue. For instance, this is mentioned in lines 58-67, 147-149, and 1846-1850 without providing the necessary analysis.\n\n3. Innovation.The originality of this work is limited. The dendrite modeling directly uses DH-LIF, and the SSM modeling is simply an SNN combined with HIPPO. I did not observe any standout contributions in this approach.\n\n4. Experiments. The datasets chosen by the authors, DVS128 Gesture and Sequential CIFAR-10/100, do not effectively test the model's ability to handle long-range dependencies. The authors could consider more challenging datasets, such as LRA. Additionally, the fact that no one in the SNN field has addressed the Celex HAR dataset does not imply that SNNs cannot handle it. The authors have not even provided a complete description of the size and scope of the Celex HAR dataset. If the authors aim to compare with other SNNs on challenging DVS datasets, they might try HAR-DVS[1]. Furthermore, the authors overlook comparisons with many recent SOTA methods on the Gesture dataset, where SNN performance has already surpassed 99.5%[2].\n\n---\n[1] Hardvs: Revisiting human activity recognition with dynamic vision sensors. 
In AAAI 2024.\n[2] Spike-driven transformer. In NeurIPS 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The integration of SA-HiPPO and SPLR convolution enhances the model's ability to model long-range dependencies.\n2. The SPLR model designed in this paper introduces a dendrite-based pooling layer, which further improves the performance using the DH-LIF neuron model.\n3. The theoretical and experimental results in this paper confirm the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper integrates state-space models (SSMs) and spiking neural networks (SNNs), and proposes the Spiking Network for Learning Long-Range Relations (SPLR) to enhance the ability of SNNs to capture long-range dependencies. Theoretical proofs and experimental results support the performance advantages of SPLR." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The core innovation of this paper is the SPLR convolution layer to enhance the long-range temporal modeling capability of SNNs. 
However, while this improves the performance of the SNN, the integration of the SSM and the SNN requires significant computational overhead, which counteracts the power consumption advantage of the SNN. In addition, as the authors mention SPLR is difficult to implement in hardware, making the significance of this work seem small.\n\n2. Can the authors clarify the key differences between the DH-LIF model in this paper and the one presented in [1]? How does the DH-LIF used in the Dendrite Attention Layer relate to attention? Is the author's claim of dendrite-based spatio-temporal pooling just spatial pooling after the output of DH-LIF? Where is there temporal pooling?\n\n3. I suggest that the authors compare their method with other optimized spiking neural networks capable of long-range temporal modeling, such as [2,3,4].\n\n\n[1] Zheng H, Zheng Z, Hu R, et al. Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics. Nature Communications, 2024.\n\n[2] Wang L, Yu Z. Autaptic synaptic circuit enhances spatio-temporal predictive learning of spiking neural networks. ICML, 2024.\n\n[3] Zhang S, Yang Q, Ma C, et al. Tc-lif: A two-compartment spiking neuron model for long-term sequential modelling. AAAI, 2024.\n\n[4] Shen S, Zhao D, Shen G, et al. TIM: An efficient temporal interaction module for spiking transformer. IJCAI, 2024." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce SPLR, a novel spiking neural network architecture that effectively captures long-range temporal dependencies through Spike-Aware HiPPO and dendrite attention." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024splr,\ntitle={{SPLR}: A Spiking Neural Network for Long-Range Temporal Dependency Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Ez4dhU3NG},\nnote={under review}\n}" }, "abstract": { "value": "Spiking Neural Networks (SNNs) offer an efficient framework for processing event-driven data due to their sparse, spike-based communication, making them ideal for real-time tasks. However, their inability to capture long-range dependencies limits their effectiveness in complex temporal modeling. To address this challenge, we present a Spiking Network for Learning Long-Range Relations (SPLR). SPLR address the limitations of conventional spiking network in two ways. First, we introduce SPLR convolutional layer that leverages state-space dynamics to enhance feature extraction while retaining the efficiency of sparse, event-based processing. Second, we incorporate a Spike-Aware HiPPO (SA-HiPPO) matrix that allows it to effectively maintain long-range memory by adapting the HiPPO framework for discrete, spike-driven inputs. Together, the preceding novel spike-aware state space dynamics, enhance feature extraction while retaining the efficiency of sparse, event-based processing. Experimental results across various event-based datasets demonstrate that SPLR outperforms prior methods for processing event-driven data in tasks requiring both fine-grained temporal dynamics and the retention of long-range dependencies. This unified framework advances the state of event-based learning, providing a scalable and efficient solution for real-time applications such as event-based vision and sensor fusion in neuromorphic computing." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "spiking neural networks", "long range dependencies", "event data modelling", "hippo matrix", "state space models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3cf405d9d336b2650e21dba19b01ad7cf0dd35e7.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/556ef2959af4a2573b142d34dd3ac5a7f2864284.zip" }, "title": { "value": "SPLR: A Spiking Neural Network for Long-Range Temporal Dependency Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2F7MFqATdo
Intention Model: A Novel Explanation for In-context Learning
main
Active
In-context learning;Large language models
interpretability and explainable AI
3;5;6;6
4;3;3;4
2;3;3;3
2;2;2;2
1;1;2;2
5
3.5
2.75
2
1.5
-0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In Section 4.2, it is mentioned that the effect of high quality demonstrations is multiplicative on ICL error; however, other terms like next-token prediction accuracy and prediction smoothness have an additive effect. Intuitively, I don't follow why this would be the case. It seems to me the first factor in equation 13 (i.e., the factor with several additive terms) is merely assessing how well the model is at pretraining and modeling the distribution at hand, and the second factor (i.e., demonstration shift) assesses how good the demonstrations are at eliciting the desired capabilities. Is this the right interpretation? If not, can you better explain why the first term has additive influence of next-token error and prediction smoothness (which I would have expected to themselves have a multiplicative effect)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The prime contribution from a theoretical standpoint in this paper is introduction of the user intent as a first-class citizen in theory. 
This helps accommodate phenomenology around experiments where alteration of the context leads to the same outputs---if the user intent remains the same, the model is likely to produce the same output. That said, I have some apprehensions about its relation to past work and general presentation / experimentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The current paper attempts development of a unified theoretical model of in-context learning that can help reconcile the incoherent empirical results seen in prior literature (e.g., the effect of data to label map randomization). To achieve this, the authors explicitly model the notion of \"intention\", i.e., the task the user wants the model to perform on a given datapoint, and assess what conditions lead to the inferred task from the model matching the intended task. This leads to a three-part decomposition of ICL error: (i) error of next-token prediction (essentially the autoregressive training loss); (ii) smoothness of predictions (how drastically they change if context is altered); and (iii) \"quality\" of demonstrations. All terms have intuitively expected effects and hence reconcile past empirical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Relation to past work.** A crucial missing reference seems to be Lin and Lee (\"Dual operating model of ICL\", ICML 2024). Therein, authors define a prior over tasks the model can solve and assess the effect of context size scaling to reconcile prior empirical results. It would help if authors can delineate how their contributions differ from that of Lin and Lee.\n\n- **Experiments.** I understand the paper is theoretically driven, but there are several papers on the theory of ICL at this point and it is unclear which theoretical framework is in fact correct. 
I hence encourage authors to take a prediction-centric perspective: what predictive claim does your theory offer, and can you demonstrate that said claim checks out experimentally? I am happy with an entirely synthetic experiment. The currently existing experiments suggest induction heads may be the mechanism for intention inference, but that claim is mostly speculative and is not well corroborated by the current induction head knockout experiments (by knocking out induction heads, you might be removing general ICL ability, and it is unclear if where the model is failing is inference of the correct intent). \n\n- **General presentation.** I found the writing quite convoluted in several parts of the paper. For example, the introduction has several typos and grammatical errors, and at times has unnecessarily complicated phrasing (e.g., \"Numerous outstanding works have revealed the enigmatic characteristics inherent to ICL\"). The citation commands are also incorrectly used---only \\citet was used, with no \\citep usage. If citations are to go in parentheses, then \\citep should be used." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "What is “n” in Table 1?\nHow does Table 1 “show that larger LLMs can capture the intention”? Isn’t the result just scaling?\nL1182 “group them into 2 to 5 categories” which ones? 
Can you provide more details or samples for the dataset preparation?\nF.2 do the induction heads identified here affect intent recognition in section F.1? I.e, if you “knock out” the heads then extract features, does the intent prediction performance degrade?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work introduces a latent variable “intent” based model for understanding ICL. The model is a reasonable plausible model for ICL in LLMs, outlining the weak assumption used in the theoretical analysis. Based on the intent model, several theoretical results are given, including conditions for when ICL can emerge in LLMs, under the intent model. The model also provides theoretical understanding to explain the phenomenon of demonstration selection for ICL, and adapting to random label changes (or other task shifts) using ICL. \n\nThe paper provides some experimental confirmation of their intent model and theoretical analysis. The experiments show the performance of LLMs under task shifts which appear to support the analysis. Moreover, experiments that use (2-layer) probes to classify intents and isolation inductions heads for intents are included, which provide some justification for their model.\n\nThe idea of changing the value of isolated induction heads for intent and observing its effect on the LLM is interesting. The results appear to confirm the importance of the identified heads." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a latent variable model to study in-context learning (ICL) of large language models (LLM). It contributes a theoretical framework based on hidden “intentions” which are sampled during LLM generation. A hidden Markov model is used to model the overall generation process of tokens from an intent. 
The paper proves a novel no free-lunch theorem for the intent model which describes the conditions for when ICL emerges in an LLM (small next-token prediction error and prediction noise). In addition, the paper relates the selection of demonstrations for ICL to matching the original intent, and provides theoretical insights for the process. Empirically, the paper reports experiments on the ability of LLMs to adapt to randomized labels in-context, linear probing for intents, and identifying induction heads for intent inference." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The amount of detailed mathematical analysis in Sections 3 and 4 is dense and obscures the key take away messages from the theory. For example, after detailing the many assumptions for the intent model and deriving the no-lunch theorem in 4.2, the conclusion of the theorem appears to be “LLMs with weak capability in prediction the next token and non-qualified (?) demonstrations fail to exhibit ICL capability.” This is very well-known from empirical evidence (since GPT4), so it is not very surprising that the intent model, under reasonable assumptions, arrived at this result. As a key result of this paper, its relevance to a broader ICLR audience is unclear. One suggestion to the authors is to reconsider whether to keep all of the technical details in the main paper, or describe the main takeaways and the theorem, but move the rest into the appendix.\n\nThe paper lacks direct empirical confirmation of its theoretical findings. In 5.2 it states that “it is challenging to calculate or estimate these values (values predicted as necessary for ICL in Theorem 1)” hence indirect experiments must be done. This is a significant weakness for the theory, as it essentially cannot be experimentally confirmed or falsified. Can the values be estimated in a toy setting? \n\nThe intent recognition experiment is not totally convincing. 
It sets up an intent prediction task and uses features extracted from different layers of LLMs, along with a 2-layer network to predict intent. Can this task be solved without using an intent model? Please consider including a baseline that plausibly does not implement an intent model. Details of the task setup are also missing. For example, what are some of the 50 intents? Are they instructions or tasks? How are train/test splits done?\n\nA lot of the content in the appendix is highly relevant to the paper. For example, Appendix D which discusses the theoretical and empirical challenges. Moreover, the experiments that actually try to confirm the plausibility of the intent model within real LLMs are in Appendix F. Please discuss these experiments in the main body of the paper, state their conclusions and how they support the theory.\n\nWriting of the paper needs significant editing and proofreading. \nJust a few examples:\nL076 “Introducing an external to modify” external what?\nL225 “error of next-token predicting”\nL375 “It shows that ICL” what is “it”? 
Where is the causal link that implies that the model is figuring out this new matrix T?\n\nIn all, I find it hard to justify this paper because the theory it presents does not make any verifiable new predictions that Xie et al did not already make, and in my opinion does not explain the previously unexplained phenomena like Min et al.\n\nI will increase the score if the paper is clearly rewritten. I know that this is a long paper and hard to put concisely in 10 pages, but in this sort of work, the paper would greatly benefit if some of the underspecified theory (which makes it hard to understand) is moved completely to the appendix, and some more readable results like the experiments are moved to the front. The distinction between the results presented in this paper and Xie et al are unclear which could have been greatly improved in the introduction section. These are just some of my personal suggestions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The theory is very similar to Xie et al's Bayesian Inference theory with some modifications like neighbourhood of an intention, etc.\n- The authors provide an interpretable way to connect LLM performance on next work prediction and the quality of demonstrations to the performance on ICL tasks (under their theory), which is nice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A theory about In-Context Learning similar to Xie et al's Bayesian Inference theory. Aims to explain some characteristics of ICL noted but not explained by prior works, for example perturbations in the label space. Aims to break down the error in predictions to interpretable quantities like LLMs' performance, quality of demonstrations." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Next token error is conditioned on \\theta in assumption 4. Even if the LLM infers intention, that would be solely determined by o_1:i-1, say \\theta_inferred. If the LLM is well trained, we can assume that it mostly infers the right intention and hence the condition with \\delta_4,1 can be satisfied. But in cases when it fails to infer the right intention, this error may be quite large. So the assumption is strong. Moreover, there is no way to get predictions from the LLM given the same context and some different intention \\theta_different, as the intention inference (if that is what happens in LLMs) is implicit and can not be disentangled. The LLM will always infer the same distribution over intentions given the same context, so I don’t understand assumption 4. \n\n- Like Xie et al, the no free lunch theorem in this paper does not explain task learning capabilities of LLMs, on completely novel tasks (\\theta not in the intention family) unrelated to pretraining text.\n\n- The whole external mapping thing does not make much sense to me. Users do not provide an external mapping when getting the model outputs; they directly present demonstrations with this transformation. If the LLM infers this mapping, it can only be implicit. Making it a part of the original intention family. It is hard to tell if a mapping like flipped labels is present in the intention family learnt by the model during pretraining. If the mapping is randomly generated, this becomes a contradiction as it is surely not present in the pretraining corpus. The authors say that they will explore this in future work, but it is an important point that makes Xie et al’s theory and this paper’s theory inconsistent with Min et al’s results where the model is able to infer the right intention with randomly generated labels. 
\n\n- Experiment section is too small and severely cut (deferred to the appendix). Which model? What ICL task? It is an important part of the paper and needs to be put in the main text. Also, the evidence is circumstantial. Intervening on model activations can imply so many things related to completely different theories. How can we claim that these results imply anything specifically about the intention model? This also highlights the difference between theory and practice, as the presented theory does not elicit easily verifiable causal experiments.\n\n- The paper is very hard to read and follow. Like\n - section 3.3, should define \\delta_1,1, 1,2, 4,1, etc. What do forbidden tokens mean, what are forbidden transitions?\n - citations are placed very poorly. Sometimes before the sentence, sometimes after, sometimes unrelated; without proper usage of citet/citep. \n - [nitpicky] “Advanced works”: what is advanced, and compared to what? 
“fortunately consistent”: while good to know that the authors felt relieved that the method worked, it may be inappropriate in a technical report. Some words feel too artificially placed like “enigmatic characteristics”.\n - “These intriguing phenomena highlight the difference between traditional learning and ICL Kossen et al. (2024). These seminal explorations provide a fruitful guide for understanding and explaining ICL.” These sentences don’t flow well. Which works?\n\n - “a relatively weak connection to empirical investigations, potentially due to strong assumptions” [ICML 2024 paper](https://arxiv.org/abs/2310.08540) illustrates this and may be appropriately cited.\n\n - Line 77: “Introducing an external to modify …”, external what?\n - Line 116: definition of Sn can be confusing to read.\n - o is used for both delimiter and document tokens. confusing.\n - Line 297: \\theta_g is now called inferred intention, previously it was ground truth. confusing.\n - Line 292: where does m come from? What does it mean? Unclear.\n - Table 1 is referred to as Table 2 in the text.\n - Many more ...\n\n Although I don't believe in reducing review scores for small clarity and writing issues, this paper seriously suffers from them and hampers the reader's ability to understand the concepts conveyed. I would recommend a clear rewrite with more help from experienced co-authors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The theoretical analysis breaks away from the typical assumptions about perfect model alignment. It feels like it’s providing a more grounded explanation, making it easier to connect theory with the real behaviors of LLMs.\n\n2. The writing is generally clear, and the mathematical notation is thoroughly defined, which makes it easier for readers to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a theoretical framework called the \"intention model\" to explain ICL behaviors. The authors present a \"no-free-lunch\" theorem for ICL, showing that its emergence depends on prediction error, prediction noise, the model's smoothness, and demonstration quality. Unlike previous approaches with strong assumptions, this work relaxes the assumptions on perfect model alignment and demonstration representation. The intention model helps bridge the gap between theoretical explanations and empirical observations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's theoretical framework largely follows the derivation approach from Xie et al. (2022), particularly leveraging Bayesian Inference. Although it extends the original work by adding an error term between the LLM and the real distribution, this extension doesn’t feel groundbreaking. The contribution seems more like an incremental step rather than a major theoretical innovation.\n\n2. 
The use of the term \"No Free Lunch\" for the presented theorem seems a bit off. The typical connotation of \"No Free Lunch\" is about the impossibility of a universal solution that works optimally in all scenarios. Here, the theorem implies that LLM performance depends on factors like prediction error, prediction noise, and demonstration quality. While there is indeed an implication of trade-offs, if the theorem isn’t emphasizing a broad, universal limitation but rather a specific condition for ICL, then this choice of terminology could easily confuse readers.\n\n3. The experimental section lacks clarity on how each of the theoretical components, particularly the terms in Equation (13), manifests in practice. It’s unclear how specific terms like \"error in predicting the next token,\" \"prediction smoothness,\" and \"distribution smoothness\" are reflected in the real experimental observations. This disconnect makes it difficult for readers to see how well the theory aligns with the empirical results, and it weakens the overall support for the claims made in the theoretical part." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose the intention model, a novel theoretical explanation for ICL." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024intention,\ntitle={Intention Model: A Novel Explanation for In-context Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2F7MFqATdo},\nnote={under review}\n}" }, "abstract": { "value": "In-context learning (ICL) has demonstrated remarkable success in enabling large language models (LLMs) to learn to do a downstream task by simply conditioning on a few input-output demonstrations. Distinct from traditional learning paradigms, ICL does not require model updates, thus attracting significant interest in understanding the mechanisms behind LLMs’ ICL capabilities. 
Advanced works aim to understand ICL through an empirical viewpoint to provide the multifaceted nature of ICL, while some works aim to explain how ICL can emerge theoretically. However, the current theoretical analysis exhibits a weak connection to empirical explorations due to strong assumptions, e.g., perfect LLMs and ideal demonstrations. This work proposes an intention model, providing a novel theoretical framework for explaining ICL. With mild assumptions, we present a ``no-free-lunch'' theorem for ICL: whether ICL emerges depends on the prediction error and prediction noise, which are determined by \\emph{\\textbf{i)}} LLMs' error of next-token prediction, \\emph{\\textbf{ii)}} LLMs' prediction smoothness, and \\emph{\\textbf{iii)}} the quality of demonstrations. Moreover, our intention model provides a novel explanation for the learning behavior of ICL under various input-output relations, e.g., learning with flipped labels. This is fortunately consistent with our experimental observations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "In-context learning", "Large language models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/365556ed034f94a9c3293c4d2ec3361856709588.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Intention Model: A Novel Explanation for In-context Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2FMdrDp3zI
Is Complex Query Answering Really Complex?
main
Active
complex query answering;knowledge graph;multi-hop reasoning
datasets and benchmarks
3;5;5;5
4;4;4;5
2;3;2;3
2;2;2;1
4;3;3;3
4.5
4.25
2.5
1.75
3.25
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please respond to my two concerns in the weakness part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper conducts an in-depth study of existing benchmarks and reveals biases regarding tree-shaped queries and union operators in several datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript presents a data-level study of knowledge graph complex query answering. The main argument is that the query-target pairs in existing datasets (q, t) can be somehow reduced to the easier ones (sub(q), t) if the required triple can be found in training KG. Therefore, the paper proposes to focus on the irreducible query answer pairs and empirically examine that the performance of all existing methods will drop significantly. The facts revealed above motivate a search approach highlighted by letting the edges in the train graph be memorized. The performance of the approach is compared against previous works on old and new benchmarks." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have two concerns about the content discussed and the angle studied in this paper.\n\nFirstly, the content seems to be very old. I am not sure whether this paper has been recycled over a sufficiently long time, so the author is not aware of the recent progress in this field. \n1. The dataset discussed only covers the query types in [1], which is outdated today. In 2024, several datasets covering far more complex queries are also proposed, including [2] for queries with logical negation, [3] for cyclic queries, and [4] for multi-answer variables. For the ``fair'' split of triples, the author is also unaware of existing studies on temporal CQA [5].\n2. The baselines discussed are also no later than the year 2023. ULTRAQ is almost the same as the GNN-QE.\n3. Given the above ignorance, the proposed CQD-hybrid method is fundamentally identical to the QTO [6] proposed in ICML'23. Both methods are search-based approaches that involve memorizing the train edges, which is proposed in this paper and also reflected in Equation 4 in [6], noticing that normalizing link predictor scores into [0,0.9] will not change the order of solutions. \n\nI prefer to recognize methodological identicality as unawareness rather than plagiarism. Therefore I didn't raise an ethical review flag.\n\n\nSecondly, saying that \"the existing benchmark is problematic\" is questionable and somehow self-contradictory with this paper's philosophy of choosing outdated simple queries.\n- On the one hand, scores on the previous benchmarks [1-5] are far from saturated because the average score is still less than 50 out of 100. Optimizing empirical models on previous benchmarks will also benefit the performance of the proposed \"hard\" benchmark.
Meanwhile, recognizing the importance of training edges, although motivating the CQD-hybrid in this paper, is not new to the community because it is practiced in QTO [6] and later followed by FIT [3]. It hardly says why these findings are essential.\n- On the other hand, the paper only focuses on the simpler query forms proposed in [1]. One might argue that such simple query forms cover a sufficiently large portion of real-world user cases, so the choice of such forms is reasonable. The same practical point of view can also apply to the easy-hard contrast produced by whether the reasoning triples of a query are observed or not. Although the previous benchmark consists of too many observed triples, as shown in this paper, it can also be reasonable by arguing that the train graph consists of a sufficiently large portion of knowledge that users are interested in.\n\n\n\nReferences:\n\n[1] Ren, H., Hu, W., & Leskovec, J. (2020). Query2box: Reasoning over knowledge graphs in vector space using box embeddings. arXiv preprint arXiv:2002.05969.\n\n[2] Ren, H., & Leskovec, J. (2020). Beta embeddings for multi-hop logical reasoning in knowledge graphs. Advances in Neural Information Processing Systems, 33, 19716-19726.\n\n[3] Yin, H., Wang, Z., & Song, Y. (2023). Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors. arXiv preprint arXiv:2304.07063.\n\n[4] Yin, H., Wang, Z., Fei, W., & Song, Y. (2023). ${\\rm EFO} _k $-CQA: Towards Knowledge Graph Complex Query Answering beyond Set Operation.\n\n[5] Lin, X., Xu, C., Zhou, G., Luo, H., Hu, T., Su, F., ... & Sun, M. (2024). TFLEX: temporal feature-logic embedding framework for complex reasoning over temporal knowledge graph. Advances in Neural Information Processing Systems, 36.\n\n[6] Bai, Y., Lv, X., Li, J., & Hou, L. (2023, July). Answering complex logical queries on knowledge graphs via query computation tree optimization. In International Conference on Machine Learning (pp. 1472-1491). PMLR." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The comparison of 2u-filter is dubious. As the definition of union query just requires one link to hold in the graph, I do not see the necessity to do such filtering as Figure A.1 as it more resembles 2i query type after filtering." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The key observation of this paper is interesting. The partial-inference pair is prevailing in existing datasets and the paper shows that full-inference pair is empirically much harder than partial-inference pair and thus the reasoning ability of SOTA CQA models may be less powerful than our imagination.\n\n2. This paper's case study and deep analysis are praiseworthy. For example, the paper studies the query type with union and additionally finds that if we filter out such pairs that can be accessed by just one link, the performance of 2u will increase significantly, similar to that of the 1p query type." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the complex query answering task on knowledge graph and questions whether the data in existing dataset is unqualified. 
To be specific, the author proposes to term those query-answer pairs whose reasoning process can leverage parts of the knowledge in the training graph as partial-inference pairs, and thus evaluates existing CQA models on full-inference pairs. This paper conducts extensive experiments to showcase this observation along with an analysis of certain query types like 2u." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Firstly, the discussion of the query types is constrained in this paper. Most dominantly, almost all research conducted on complex query answering in recent years includes negative queries, yet this paper avoids that completely. Perhaps it's a drawback of their model design originating from the initial CQD paper, or perhaps the reasoning process defined in this paper fails on a negative query. Either way, it's problematic as the scope of the query types it investigated is strictly contained.\n\n2. The claim that SOTA CQA models fail significantly on so-called full-inference pairs is questionable, as it doesn't include recent models that are built on symbolic search algorithms, like QTO[1] and FIT[2], which use neural link predictors combined with searching algorithms and seem to bypass the challenges posed by full-inference pairs. As the paper itself proposes a symbolic search method, the omission of other symbolic search methods as baselines is questionable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Do you vary your argument in train queries? I am wondering whether the phenomenon that existing CQA models fail is caused by the train datasets having too many partial-inference answers. Thus I am curious about the performance of symbolic search methods, since these methods do not use queries for training." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper finds an interesting weakness of existing CQA datasets and proposes a useful method and benchmark.\n2. This paper is well written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The hard answers studied in complex logical queries are those that cannot be retrieved due to gaps in the knowledge graph (KG). This paper reclassifies these hard answers into two categories: full-inference and partial-inference. The authors argue that partial inference can be reduced to simpler query types and that partial-inference answers occupy the majority of existing datasets such as BetaE. They discover that current models perform poorly on full-inference tasks and propose a new benchmark to highlight this issue. Additionally, they introduce a new model specifically designed to tackle partial-inference answers by explicitly retrieving existing links from the training knowledge graph." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The baselines lack symbolic methods like QTO and FIT, which are the mainstream of CQA methods. The CQD used is an old symbolic method.\n2.
BetaE has three KGs but only two KGs are presented in the paper.\n3. The argument of 'reduced to easier types' is odd because query types with fewer constraints will be easier to solve than the original query types; for example, the performance on 3i is better than on 2i. I suggest the authors use a more precise expression.\n4. I disagree with your argument that the proposed CQD-hybrid is the first hybrid solver. QTO and FIT use the information from the observed KG and a trained link predictor to construct the matrix, and can use the hybrid information of train edges and pre-trained embeddings.\n5. Because of Weakness 4, I am curious about the performance of the symbolic methods QTO and FIT, as they already have the hybrid information." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* The problem of information leakage in training graphs can be solved well by the inductive setting in naive knowledge graph reasoning (one-hop reasoning) tasks. Actually, there have been some attempts to establish inductive settings in CQA[1][2], where there will be no information leakage because the training and test graphs are different.
How do you think this paper differs from these works?\n* In my opinion, link leaks in the training graph only affect the GNN based and neural link predictor based methods, while the embedding-based methods do not take advantage of the information in the training graph (except for 1p queries). Why does this type of approach also degrade on the new benchmark?\n* As mentioned in weakness, what's the difference between CQD-Hybrid and QTO?\n\n\n[1] Inductive Logical Query Answering in Knowledge Graphs. In NeruIPS 2022.\n\n[2] Type-aware Embeddings for Multi-Hop Reasoning over Knowledge Graphs. In IJCAI 2022." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Motivation of the paper is novel, and the re-examination of existing benchmarks is valuable.\n* Experiments in this paper can support the conclusion well.\n* Writing of the paper is good, the structure is clear, the layout is good, and it is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, authors re-examine the existing problems of knowledge graph complex reasoning datasets. The authors propose that the current dataset cannot effectively measure the generalization ability of the reasoning model, that is, the complex queries in the dataset can be solved by the triples leaked in the training graph, and verifies their conjecture through extensive and sufficient experiments. Further, the authors propose a new set of benchmarks to more effectively measure the performance of complex reasoning models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Lack of discussion of related work, if the space is limited, this part can be placed in the appendix.\n* In Section 5.1, the author proposes a CQD-Hybrid solver. 
Actually, the practice described in the paper is very similar to QTO [1], and I think the difference should be discussed and the work cited.\n* As an effort to propose new benchmarks, the experiments for the new benchmark are somewhat limited. More baselines, some case analysis, etc., should be added.\n* Some typos, such as line 468: 50.000\n\n[1] Answering Complex Logical Queries on Knowledge Graphs via Query Computation Tree Optimization. In ICML 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024is,\ntitle={Is Complex Query Answering Really Complex?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2FMdrDp3zI},\nnote={under review}\n}" }, "abstract": { "value": "Complex query answering (CQA) on knowledge graphs (KGs) is gaining momentum as a challenging reasoning task. In this paper, we show that the current benchmarks for CQA are not really complex, and the way they are built distorts our perception of progress in this field. For example, we find that in these benchmarks most queries (up to 98% for some query types) can be reduced to simpler problems, e.g., link prediction, where only one link needs to be predicted. The performance of state-of-the-art CQA models drops significantly when such models are evaluated on queries that cannot be reduced to easier types. Thus, we propose a set of more challenging benchmarks, composed of queries that require models to reason over multiple hops and better reflect the construction of real-world KGs. In a systematic empirical investigation, the new benchmarks show that current methods leave much to be desired from current CQA methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "complex query answering", "knowledge graph", "multi-hop reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/01186d51b6656e1c5197229eddc62b1e044efec3.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Is Complex Query Answering Really Complex?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2G021ZqUEZ
From Commands to Prompts: LLM-based Semantic File System
main
Active
Large Language Model;Semantic File System
infrastructure, software libraries, hardware, systems, etc.
3;5;5;8
4;4;3;3
2;2;2;3
2;2;2;3
2;4;3;3
5.25
3.5
2.25
2.25
3
-0.70014
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "W1 - W5" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. Semantic file systems enhance file management by incorporating content context, enabling more intuitive and effective operations, which is an important direction.\n\nS2. LSFS simplifies interactions with file system, making file management more accessible and user-friendly.\n\nS3. Integrating LLMs in system-level tasks expands functionality, enabling intelligent, responsive, and user-focused file extraction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an LLM-based Semantic File System (LSFS), designed to improve file management through natural language prompts, rather than traditional command-based interactions. LSFS integrates large language models (LLMs) to facilitate semantic file operations like retrieval, summarization, and rollback. At its core, LSFS uses a vector database to create semantic indexes for files, enabling high-level file operations that consider the content and context of files. It also includes a comprehensive set of APIs that allow complex operations, such as CRUD, grouping, and semantic retrieval, to be executed through natural language prompts. 
Experimental results show that LSFS outperforms traditional systems in retrieval accuracy (with a 15% improvement) and speed (2.1x faster), proving especially effective for semantic file tasks that go beyond conventional keyword searches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. The motivation is not concretely convincing, especially the first challenge mentioned in the Introduction.\n\nIn the Intro section, the authors mentioned that \"For instance, if two files have similar content–such as different versions of the same document–traditional file systems lack the capability to organize or retrieve these files based on their content similarity.\" Why should files be organized by the similarity of their content? What are the benefits and what are the practical application scenarios? It would be better to at least add one short discussion or a few examples. \n\nW2. This paper does not point out what key problem they want to solve. Compared to a research paper, it seems more like a technical report.\n\nW3. The experimental setting is questionable. No baselines and no introduction of the datasets.\n\nW4. The experimental results need more explanation. The traditional file system needs to execute commands to extract files, which should be faster than calling one LLM, even though these LLMs are light-weight. The authors should also list the inference time of LLMs, which should also be counted as the interaction time between users and the file system. Then the authors can also list the time that users spend manually writing commands, which would be a good way to prove the point that LSFS can not only improve file retrieval accuracy and speed, but also reduce the interaction time between users and the file system.\n\nW5.
A safety insurance mechanism is pointed out as one contribution; however, there is no description of this mechanism and no experimental comparison between the performance of LSFS with and without the safety insurance mechanism." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Intro (line 57) \"The community still lacks a more general LSFS to serve as a common foundation that can be used by various agents on the application-level.\" Do you have some references backing up this lack? I mean, have people expressed somewhere they'd like/need a FS organized by an intelligent agent, i.e., an LLM here?\n2. Related Work (line 133) \"Besides, it integrates comprehensive semantic information across all aspects of the file system–from storage and file operations to practical applications\" This sentence is a bit vague, could you name some of the semantic information here, so to guide the reader?\n3. Could you add a positioning sentence in §2.3 to explain clearly where LSFS delineates from this research axis?\n4.
In this stage many operations from traditional FS aren't there, from what I understood, this is typically the case for right modification of files or group affiliation… These would be particularly helpful so to \"propagate\" these rights to any retrievers, typically preventing the private/exclusive file of someone else from appearing in _my_ search results, wouldn't it?\n5. In §4.2 (line 294), the authors mentioned that supervisor updates \"periodically\" what is the period between each check and therefore how expensive resource-wise is it? Did the authors check various values for this, searching for the sweet-spot between resource-consumption and freshness of the LSFS data? Also how does it scale in terms of file number and disk footprint?\n6. Overall, §4.4 seems to be more or less an NL2CSV tool, filling fields of a JSON, right? In such a case, this is something that the community has been exploring a lot these past two years, so maybe adding some pointers wouldn't hurt. This goes also for the §5.1 associated with RQ1.\n7. Are the authors considering releasing their test data for §5.1? Also, it would be good to have some examples in the body of the article.\n8. In §5.2 why no QWen or Gemma in the experimental run for Table 2, and no Gemma in Figure 6?\n9. In §5.2 still, what about a very large number of files?\n10. Ibid., same for the number of versions?
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ Well motivated, especially through the Introduction.\n+ Clearly written\n+ Very nice figures\n+ Hot topic nowadays, with many LLM-based applications reshaping the ways we interact with machines" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this article, the authors based their efforts on the hypothesis that Large language models (LLMs) have the potential to improve file management systems by enabling interactions through natural language rather than traditional manual commands. Following this idea, they proposed the LLM-based Semantic File System (LSFS) to address some of the current File System limitations (to the users), by allowing typically semantic file management through natural language prompts. LSFS, through its APIs for semantic file operations, achieves better retrieval accuracy and speed compared to traditional systems or the use of standalone LLMs respectively. It supports complex tasks like semantic file retrieval, rollback, and sharing with high success rates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "### General Remarks\n- Even though the Introduction is clear, I'd have liked a more concrete / detailed example, maybe having a finer-grained figure 1 would help.\n- The balance between §3 Vs. §4 is unexpected, I would have imagined a more detailed architecture section (§3), explaining the various design choices. Instead, the authors motivated their architecture. In addition to not being referenced, this motivation -to me- would have been better positioned directly in the Introduction.
Similarly, the overview of the architecture and the description of Figure 2 would also have benefited from the introduction of an example, especially since Fig. 2a doesn't contain precise information, but rather e.g. a list of blocks entitled API.\n- Overall, for §5, it would have been very interesting and convincing to see an experiment involving users' usage and performances, using a TFS and the presented LSFS. The authors could have reviewed the success rate and the time efficiency of users in both settings, together with collecting feedback from them, following traditional user studies.\n- Very disappointed to have the future directions (as Appendix H) not included at all in the main article, but instead pushed as optional reading material at the very end of the .pdf…\n- Finally, the overall article, from §3 onwards especially, reads a bit like a technical report and lacks -to me- a step-back to better highlight the novelties and the perspectives while guiding the reading with more examples / intuitions directly in the text.\n\n### Minor Comments:\n- Abstract: \"agent applications **TO** perform file operations\"\n- Introduction (line 95), \"a LLM-based\" should be **an**\n- Table 1 (line 235), typo on \"Hybrid retrieval\"; better to put everything lower-case as in the rest of the table\n- In §4.1, in Composite Syscall of LSFS, it would be better if the authors could make explicit the composition of atomic calls, i.e.
for each entry, adding a generic formula (or examples) of how the composite call practically chains the atomic ones.\n- In Figure 4, \"Please summary all paper from AAA University about LLM\": there's a typo in the second word: **summarize**.\n- Similarly, still in Figure 4, \"Please use file A update the content of file B\" misses the word **to** before 'update'.\n- In §5.2 (line 450), typo: \"vary the the number of rollback versions\"; remove one **the**.\n- In §5.3 (line 478), \"Therefore, we make two enhanced versions, named as TFS-grep and TFS-grep* to make the comparison\": it would be great to state their differences in a line instead of relying on the Appendix, so as to make the article (before page 10) self-contained." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Your system seems to have advantages for searching and retrieving files based on keyword or semantic search. This could be implemented on top of a conventional file system; why implement a new API for that?\n2. Is the accuracy of the LSFS parser of about 90% enough for meaningful work? That means 10% incorrect results. How did you collect the NLP commands for this evaluation?\n3. How exactly did you perform the evaluation summarized in Table 2?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- very interesting and original setup\n- includes interesting examples how to use the file system" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose, implement and describe an LLM-based semantic file system, where commands are replaced by prompts. They describe APIs and in several experiments compare how this filesystem is used and performs, compared to a conventional file system." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- weak evaluation, based on examples and not on more extensive use cases and human evaluation, evaluation setup not described in detail / cannot be reproduced\n- unclear for which users this could be helpful\n- unclear how robust the system is when translating NLP input into actual actions\n- unclear how the new API maps and extends conventional file handling APIs, and why setting up a new API set is superior to adding some APIs to a conventional file system" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Are there guardrails in place to restrict the LLM from scanning through personal data?\n2) How much cost are we saving with the new architecture?\n3) Are there any security concerns for using this architecture?\n4) How much of an operational overhead is this architecture compared to the traditional architecture?\n5) What are the other use cases of this architecture in real-life scenarios?\n6) This seems like an ongoing problem that needs to be resolved; are there any similar existing architectures? Have you looked at those papers?\n7) Is there an Andon Cord mechanism to stop the LLM from giving out hallucinations and wonky results to the user?\n8) While scanning through the files, is the data saved in memory? Does the data contain PII (personal information about the user)?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper touches on an existing problem in the day-to-day lives of developers and Mac OS users: remembering the file names and the directory where the files are present and need modification. There is no way to solve this problem at present. Even while using LLMs, sometimes developers have to hard-code the file path for retrieval. An LLM-based file retrieval system is new and useful for anyone who is fed up with traditional systems. They did good work describing the APIs to be used in the new framework and a commendable job in comparing the APIs to the traditional ones.
The quality of the paper was good, and the presentation with diagrams was very useful to get the context of the paper.\nThe architecture of the new framework was explained in detail, and they have done a good job in explaining how each component in the architecture is integrated with LLMs. Evaluations are carried out based on success, performance, and performance on non-semantic-based tasks like file sharing over sample data/files, and are pretty easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a problem in the current scenario of semantic file matching algorithms; currently, we use traditional semantic matching algorithms based on file name, size, and timestamps. This involves remembering syntax and filenames. This fails in scenarios where two files have similar text; here it's hard to distinguish files based on pure string matching. The paper combines LLMs with traditional file systems to do LLM-based semantic file management.\n\nLSFS extracts semantic features from the file content and generates corresponding embedding vectors. LSFS incorporates semantic information into its file operations. In Linux, if we need to change a file, i.e., replace a file with another, we need to remember the path, but with LSFS the users don't need to remember the file name and can talk to the LLM to make the changes for them. They have introduced an LLM-based Semantic File System (LSFS), an LSFS parser, and safety insurance mechanisms on top of the traditional file matching algorithms. The paper has done a great job at explaining the traditional way and the modifications done with NLP. They have elaborately explained the API changes they have made over the traditional architecture and given diagrams to explain the architecture.
Also, they have demonstrated how components of LSFS interact with each other to achieve different functionalities.\n\nEvaluations are carried out based on success, performance, and performance on non-semantic-based tasks like file sharing over sample data/files." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper could have used an example to walk through the implementation. Each component description could have been presented with design diagrams or a flowchart that is easy to understand; visual representation always helps! More evaluations would be needed to prove their architecture is better than the traditional ones based on performance, latency, operational burden, and cost. The paper didn't touch on any security concerns while using the LLMs. Are there guardrails in place to restrict the LLMs from scanning through personal data? One more thing the paper lacked was elaborating on the use cases where this architecture can be used." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a LLM-based semantic file system" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024from,\ntitle={From Commands to Prompts: {LLM}-based Semantic File System},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2G021ZqUEZ},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have demonstrated significant potential in the development of intelligent LLM-based agents. However, when users use these agent applications perform file operations, their interaction with the file system still remains the traditional paradigm: reliant on manual navigation through precise commands. This paradigm poses a bottleneck to the usability of these systems as users are required to navigate complex folder hierarchies and remember cryptic file names.
To address this limitation, we propose an LLM-based Semantic File System ( LSFS) for prompt-driven file management. Unlike conventional approaches, LSFS incorporates LLMs to enable users or agents to interact with files through natural language prompts, facilitating semantic file management. At the macro-level, we develop a comprehensive API set to achieve semantic file management functionalities, such as semantic file retrieval, file update summarization, and semantic file rollback). At the micro-level, we store files by constructing semantic indexes for them, design and implement syscalls of different semantic operations, e.g., CRUD (create, read, update, delete), group by, join. Our experiments show that LSFS can achieve at least 15% retrieval accuracy improvement with 2.1× higher retrieval speed in the semantic file retrieval task compared with the traditional file system. In the traditional keyword-based file retrieval task (i.e., retrieving by string-matching), LSFS also performs stably well, i.e., over 89% F1-score with improved usability, especially when the keyword conditions become more complex. Additionally, LSFS supports more advanced file management operations, i.e., semantic file rollback and file sharing and achieves 100% success rates in these tasks, further suggesting the capability of LSFS. The code is available at https://anonymous.4open.science/r/LSFS-8CCF/." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "Semantic File System" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1dbf1e3e06b676078d47623aa8624498426b1171.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "From Commands to Prompts: LLM-based Semantic File System" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2GEiBzs2Do
Simple and Fast CNN for Vision
main
Active
Convolutional Neural Network;Vision Backbone;Lightweight;Fast
applications to computer vision, audio, language, and other modalities
3;5;5;5
5;5;5;5
3;3;2;2
1;2;2;3
4;3;2;2
4.5
5
2.5
2
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**(Q1) Trade-offs Analysis and Discussions:** \nThe paper's analysis of various trade-offs deserves deeper exploration. The proposed SFCNN shows great superiority in speed. However, I have noticed that some architectures like MogaNet show better parameter efficiency while at lower speeds. Thus, a more detailed investigation of parameter efficiency versus computational speed would provide valuable insights for practitioners choosing between different model configurations. Moreover, there are several points that are tightly associated with this work that deserve further exploration: First, the memory-compute trade-off analysis could be expanded to include different hardware scenarios and deployment conditions. Second, the relationship between training efficiency and inference efficiency deserves more attention, since these can often have different optimal choices. Third, the model scaling properties deserve examination, particularly regarding the relationship between model depth and width at different computational budgets. \n\n**(Q2) Broader Architecture Considerations:**\nThe scope of this paper lies in ConvNets for vision tasks. However, more kinds of architectures have emerged in recent years. A thorough comparison with emerging architectures like Vision Mamba and RWKV models would provide valuable context for the field's evolution.
Besides, there are various efficient computation techniques proposed to boost the computational efficiency of these new architectures. The evaluation against attention-based alternatives could provide insights into the relative strengths and weaknesses of different vision backbone architectures. These expanded analyses and discussions would significantly strengthen the soundness and contribution of this paper and provide valuable guidance for future research in the community.\n\n---\n**Additional Comment:**\n\nI hope my review helps to further strengthen this paper and helps the authors, fellow reviewers, and Area Chairs understand the basis of my recommendation. I also look forward to the rebuttal feedback and further discussions, and would be glad to raise my rating if thoughtful responses and improvements are provided." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**(S1) A critical research question with real-world significance:**\nThe paper shows great industrial relevance by addressing a critical need in real-world deployment scenarios, in which computational efficiency is crucial, particularly in edge devices and mobile applications. The proposed SFCNN is notably cost-effective, providing a more resource-efficient alternative to existing approaches while maintaining or improving performance metrics. The presented thin-and-deep architecture appears to show great scalability, demonstrating computational efficiency across different model sizes, and making it highly adaptable to various resource constraints. 
From a practical impact perspective, this work has the potential to significantly reduce infrastructure costs for computer vision applications at scale, making it valuable for industrial applications rather than merely pushing the limit of accuracy metrics.\n\n**(S2) Thorough experiments and validation:** \nExtensive experiments are conducted on multiple mainstream computer vision tasks, such as ImageNet-1K classification, COCO detection, and ADE20K semantic segmentation. The consistency of performance across different scales is noteworthy. Ablation studies are also conducted, providing a detailed analysis of the contribution of each component to the overall performance. More importantly, the authors present a clear demonstration of the impact of model depth vs. width, supported by the evaluation of different activation functions and receptive field analysis. Hardware performance evaluation is particularly thorough, encompassing cross-platform testing on GPU, TensorRT, and iPhone, with detailed latency and throughput measurements under various scenarios. All these experiments strongly support the paper’s claim." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper targets the computational inefficiency and hardware compatibility issues of recent ConvNets that rely on large kernels to capture long-range dependencies. The authors propose a new ConvNet architecture SFCNN for vision tasks, which has shown impressive performance through a thin-and-deep design philosophy. Concretely, it combines a dual 3×3 depth-wise convolutions branch with Global Sigmoid Linear Unit (GSiLU) activation, which captures both local and global dependencies without large kernels. 
The proposed SFCNN is evaluated on mainstream vision benchmarks, such as ImageNet-1K classification, COCO instance segmentation, and ADE20K semantic segmentation, demonstrating great performance while maintaining better hardware efficiency across different platforms (GPU, TensorRT, iPhone). The experiments seem to strongly support the claims about achieving better accuracy-efficiency trade-offs compared to existing ConvNets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**(W1) Technical Originality:** \nThe basic building blocks of SFCNN, including 3×3 depth-wise convolutions and point-wise convolutions, largely rely on well-established techniques without significant technical originality. The GSiLU activation also bears considerable similarity to existing approaches like CBAM and SE modules. The thin-and-deep philosophy, while effectively implemented, has been explored in previous works. The theoretical foundation could be strengthened significantly, as it currently lacks enough theoretical insights into the nature of convolution operations and their relationships with model depth. A more thorough analysis of the relationship between depth and receptive field would strengthen the paper's contributions.\n\n**(W2) Technical Soundness & Empirical Analysis:**\nWhile mobile testing is included, more empirical analysis could benefit this work and improve its technical soundness. For example, the Grad-CAM heat map visualization and training dynamics investigation would provide insightful and straightforward support for understanding the technical strengths of SFCNN. Moreover, the discussion of failure cases and limitations is inadequate, potentially leaving practitioners without clear guidance on the architecture's boundaries. The exploration of model behavior under extreme resource constraints could provide valuable insights for edge deployment scenarios. 
I strongly recommend that the authors carry out more empirical analyses that lead to more systematic conclusions for efficient ConvNet design. The thin-and-deep design philosophy is inspiring but not specific and systematic enough. Also, this work is the first to try stacking multiple depth-wise convolutions in a single block rather than just one. How this yields better representation capacity is still worth digging into deeply.\n\n**(W3) Presentation Clarity and Details:**\nThe writing organization exhibits several points that require further improvement. The technical content sometimes lacks coherence, with important methodological details scattered across different sections rather than presented in a unified manner. The description of the architecture could benefit from a more structured approach, particularly in explaining the interaction between different components. Several key concepts are offered within dense paragraphs, making it challenging for readers to extract crucial implementation details. In addition, the method description, while comprehensive, could be reorganized to better highlight the progressive development of ideas and design choices. Moreover, the presentation of the experimental-results tables would benefit from highlighting the performance advantages. The formatting consistency across tables and figures needs attention, with some inconsistencies in style and presentation detracting from the overall appearance. For example, the thickness of table lines is inconsistent. I recommend that the authors first go through the entire manuscript for a thorough refinement."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see above my main concerns." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper performed experiments of the proposed approach on several visual recognition tasks.\n2. The architecture presents good results in comparison with other approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a CNN architecture for visual recognition. The main contribution is a small CNN architecture called Simple and Fast CNN with a core idea of stacking 3x3 convolutions to design a deep architecture. The work proposes an inverted residual bottleneck with two 3x3 depth-wise convolutions. Also, this paper proposes a Global Sigmoid Linear Unit activation function to capture global information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main concern is regarding the lack of novelty. The proposed contributions are already well known and explored in literature for quite a long time. 
For instance, using a sequence of stacked 3x3 convolutions to enlarge the receptive field is an approach deeply explored in the computer vision community for many years (ResNets, VGG-Nets, MobileNets, etc.). Depth-wise convolutions are also explored intensively for efficiency gains (Xception, MobileNet, etc.). The proposed Global Sigmoid Linear Unit is just a form of the existing (already quite old) work Squeeze-and-Excitation Networks. After reading this work, I could not find anything novel or any new insight that is not already known to the vision community.\n2. Besides the lack of novelty, this work does not compensate with new experimental findings or new insights for practitioners. \n\nOverall, I find the contributions of this work too limited to qualify for publication at such a high-profile venue. Maybe a workshop contribution would be more appropriate." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The contributions of this paper should be further explained.\n\n- The advantages of using multiple small-kernel convolutions should be analyzed and elaborated further.\n\n- The motivation for introducing global information in activation functions should be made clearer."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The motivation of this paper is clear. Using small-kernel convolutions may lead to faster inference but makes the receptive field of CNNs not large enough to capture the target objects. This paper proposes to add more 3x3 convolutions to enlarge the receptive field of the proposed network.\n\n- This paper achieves a better trade-off between model classification performance and latency. Compared to recent CNN-based models, the proposed method has a compact model architecture but better performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new convolutional neural network architecture, called SFCNN. Unlike recent popular CNN works that mainly aim to explore how to better take advantage of large-kernel convolutions, this paper explains that using a thin but deep network architecture with only 3x3 convolutions can still achieve good results. In addition, the authors also rethink the design of the SiLU activation function and propose a new one, which incorporates global information based on SiLU. Experiments show that the classification performance on ImageNet is better than that of most previous CNN-based models. In terms of latency, the proposed approach achieves better results as well." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors claim that a large receptive field is important for CNNs. In Fig. 3, it is shown that the proposed approach has a large effective receptive field. However, it is not as large as that of UniRepLKNet. According to the numerical results on ImageNet, the proposed approach gets better numbers.
Does this mean that a large effective RF is not an important measurement for building CNNs?\n\n- The authors claim that their bottleneck block design with two 3 × 3 DWConvs is novel. However, as far as I know, adding two depthwise convolutions in a single basic block has been explored before, e.g., MobileNeXt (ECCV'20, Zhou et al.). Though the order of the layers is a bit different, the design intention is similar. So, I do not think this can be viewed as a core contribution of this paper.\n\n- From the paper, it seems that CNNs with a thin but deep architecture and small-kernel convolutions perform more efficiently than those with large kernels. However, the macro model architecture of the proposed method is not actually the same as that of previous large-kernel CNNs. I think the authors should conduct more specific experiments to demonstrate this.\n\n- In Table 5, it is good to see the results on instance segmentation, but the methods the authors compare with are no longer new. I have no idea why the results of recently published works on CNNs are not reported here.\n\n- It seems that the 7th table has two captions? Where is Table 8?\n\n- From the ablation results, it seems that the proposed GSiLU indeed performs better than other activation functions. However, have the authors analyzed why global information should be added into activation functions? The motivation for designing such an activation function is not clear. In addition, as GSiLU is already used, why is the original SiLU still used?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Minor performance gain vs. large variance across different architecture hyperparams. This deserves a deep discussion.\n\n2. Due to the nature of a carefully manually crafted CNN (which may be overfitted on IN1k), I am wondering how the architecture performs with IN22k-pretraining + IN1k-finetuning? \n\n*This is not a must-do due to the training cost. However, if this is provided, my concern on the performance perspective can be alleviated." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper writing is clear, including the motivation, main results summary, architecture design, main results, and architecture ablations. All these necessary components are easy to find and comprehend.\n\n2. The experiments are adequate for a traditional architecture design work, including main results on IN1k, COCO, and ADE20k. There are also component contribution ablations and an architecture variant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper is a very traditional architecture design paper. The major motivation of this work is: large kernel vs. small kernel. This debate has been in the community since AlexNet. In VGG, they changed the large 7x7 kernel to a stack of 3x3 ones. Recent years have observed a reverse trend of moving back to extremely large kernels. In this work, the authors argue again for using a stack of small kernels, due to \"Nevertheless, these approaches are unfriendly to hardware, imposing a serious computation burden on training or inference\"\n\nBased on this motivation, the authors carefully craft a new architecture, SFCNN.
The architecture shows minor performance gain on IN1k (\"+0.1% accuracy compared to SwiftFormer (Shaker et al., 2023) with 87% FLOPs\"), as well as on downstream tasks like COCO and ADE20k." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation / idea of this work is not new (from large kernels to a stack of smaller kernels.)\n\nThis idea dates back to VGG (2014). Authors can refer to sec 2.3 in the paper for more discussions. Placing this as the main motivation largely harms the overall contribution, because this makes the paper more like a revisit / conversation in the debate.\n\n2. Minor performance gain vs large variance in different architecture hyperparams.\n\nThe performance gain over SOTA models is minor, compared with the performance variance in similar architectures with different hyperparams. As shown in Table 9, searching for the best setup for IN1k is critical (min 81.3 vs max 82.6), while the performance gain over SOTA is only at the 0.x% level. This is also reflected in Table 8.\n\nI deeply appreciate the efforts in searching for the best setup for the architecture. However, this makes the major performance contribution more in the \"searching\" part but not in the architecture itself. Currently, due to the development of NAS, such searching efforts can be largely automated.\n\n3. (Minor) Tables 7 and 8 are mixed together in the manuscript. It is confusing.
\nInspired by the success of Vision Transformers (ViTs) in capturing long-range visual dependencies, recent CNNs have reached a consensus on utilizing large kernel convolutions (e.g., astonishingly, 111 kernel). \nNevertheless, these approaches are unfriendly to hardware, imposing a serious computation burden on training or inference. \nThis paper introduces a Simple and Fast Convolutional Neural Network (SFCNN) that employs a sequence of stacked $3\\times 3$ convolutions but surpasses state-of-the-art CNNs with larger kernels. \nIn particular, we build a thin and deep model, which encourages more $3\\times 3$ convolutions to capture more spatial information under the limited computing complexity rather than opting for a heavier and shallower architecture. \nTo further enlarge the receptive field, we redesign the traditional inverted residual bottleneck with two $3\\times 3$ depthwise convolutions. \nIn addition, we propose a novel Global Sigmoid Linear Unit (GSiLU) activation function to capture global coarse-grained spatial information. \nOur SFCNN performs better than state-of-the-art CNNs and ViTs on various tasks, including ImageNet-1K image classification, COCO instance segmentation, and ADE20K semantic segmentation. \nIt also has good scalability and outperforms existing state-of-the-art lightweight models. \nAll materials containing codes and logs have been included in the supplementary materials." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Convolutional Neural Network", "Vision Backbone", "Lightweight", "Fast" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/df8b9b262f2ddcf33a04d6b357e180257be5df07.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/cb0f2474d3b5189982c0699b7466aeb248f3f290.zip" }, "title": { "value": "Simple and Fast CNN for Vision" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2GcR9bO620
I Can Hear You: Selective Robust Training for Deepfake Audio Detection
main
Active
Deepfake audio detection;Audio augmentations;Frequency-Selective Adversarial Training
alignment, fairness, safety, privacy, and societal considerations
5;6;6;6
3;3;5;4
4;3;3;2
3;2;3;3
4;2;3;3
5.75
3.75
3
2.75
3
0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What frequency range is the spec-magnitude attack being applied over in Figure 8a?\n1. Do you have any _formal_ explanation for why the time-domain attack is less successful than the spec-magnitude attack? To me this seems counter-intuitive because the STFT is a linear and (mostly) invertible function so, from the perspective of the optimization, it should not matter if the attack was computed in the frequency domain or the time domain. I would be very interested in seeing more explanation for why the time-domain attack is unable to reach the solution achieved by the frequency-domain attack. Please also provide the detailed settings for all the adversarial attacks (time, frequency and phase domain) used in AT, F-SAT and during evaluation.\n1. In principle, a frequency-selective adversarial attack could be constructed entirely in the time domain by applying a band-pass filter to the adversarial perturbation after each optimization step (i.e. include the BP filter as part of the projection operation). This might be less computationally intensive than the proposed approach. Can you provide some discussion on why the proposed approach was favored?\n1. 
Why is the performance of the model trained on DeepFakeVox-HQ so low on the In-the-wild dataset (see Figure 3)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Paper is generally well-written and easy to read, but some important details are missing.\n1. DeepFakeVox-HQ is a novel dataset containing data from prior datasets as well as novel deepfakes generated from SOTA speech synthesis models. I appreciate that the authors have curated a test set containing deepfake generation methods not covered in the training set _as well as deepfakes gathered from the internet_. I encourage the authors to consider uploading the dataset to a platform like Huggingface Hub.\n1. The proposed randaugment data augmentation method is effective at improving deepfake detection for RawNet3 models and is likely to be widely adopted if the source code is easy to use (I looked at the README in the attached supplemental material but did not find any instructions for the augmentation).\n1. The proposed adversarial training method improves deepfake detection accuracy on clean and adversarially perturbed recordings (though I have some reservations regarding the experimental setup)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to improve deepfake detection by (1) proposing a large training and evaluation dataset called DeepFakeVox-HQ containing diverse synthetic and real speech recordings, (2) proposing a data augmentation method similar to randaugment for deepfake detectors, and (3) proposing a frequency-selective adversarial training (F-SAT) method to make deepfake detectors more robust to adversarial attacks.
\n\nDeepFakeVox-HQ is a large dataset containing real and synthetic speech from existing datasets, speech generated by SOTA speech synthesis models, as well as deepfakes found in-the-wild (on social media, etc.). Results show that models trained on DeepFakeVox-HQ generally perform better on existing deepfake datasets, while the models trained on the existing datasets have weak performance on DeepFakeVox-HQ, which indicates that DeepFakeVox-HQ includes information that prior works do not provide. DeepFakeVox-HQ will likely be a useful resource in deepfake research. \n\nThe proposed RandAugment scheme for deepfake detection utilizes a large bank of audio augmentations during training and yields significant improvements in deepfake detection accuracy.\n\nThe key contribution of F-SAT is to add adversarial perturbations to only certain frequency bands, which apparently results in less degradation of accuracy on un-perturbed data while providing greater robustness than standard adversarial training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some important details about the proposed approaches are not mentioned in the paper. \n \n 1. The value of $\\epsilon$ and $p$ (or $q$) used in adversarial training methods should be mentioned in the main body of the paper. Currently, it is mentioned in the caption of a table in the appendix.\n 1. The settings used for adversarial attacks during AT, F-SAT and evaluation need to be mentioned.\n 1. The parameters of the augmentations used in randaugment need to be mentioned at least in the appendix.\n 1. Detailed composition of DeepFakeVox-HQ needs to be mentioned, including \n\n 1. the method used for generating deepfakes (particularly noisy deepfakes), \n 1. the number of audios from each deepfake generation system, \n 1. the demographic distribution of the real and fake speakers,\n 1. the number of utterances used from each of the datasets from prior works, \n 1. 
quantitative measurement of synthetic speech quality using metrics like DNSMOS or NORESQA.\n\n1. The choice of accuracy as a metric seems to be inappropriate for a binary classification task. I would suggest using F1-score and equal error rate as the metrics. Moreover, reading tables and plots with two different accuracy metrics is a little confusing.\n1. Conducting adversarial attacks in the frequency domain and reverting to the temporal domain is not novel and has been done before [2].\n1. There is no comparison with other adversarial defenses for audio models. Many of the defenses created for speech and speaker recognition will also apply to the deepfake detection scenario. One method that is quite simple is [3].\n1. The common practice is to use signal-to-noise ratio (SNR) as the bound for adversarial attacks in the audio domain [1] instead of $\\ell_p$ bounds. I would highly recommend the authors use SNR as well. It is fairly straightforward to convert SNR to $\\ell_2$ bounds and vice-versa. The main advantage of using SNR is that one has an idea of how _perceptible_ the adversarial attack is.\n\n1. Clarity issues:\n 1. The caption of Figure 9 needs to state that the results are of F-SAT\n 1. Add results for 0-8K in Figure 8b \n\n\n[1] Carlini, Nicholas, and David Wagner. \"Audio adversarial examples: Targeted attacks on speech-to-text.\" 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018. \n\n[2] Koerich, Karl Michel, et al. \"Cross-representation transferability of adversarial attacks: From spectrograms to audio waveforms.\" 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.\n\n[3] Olivier, Raphael, Bhiksha Raj, and Muhammad Shah. \"High-frequency adversarial defense for speech and audio.\" ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Given that F-SAT focuses on high-frequency perturbations, have the authors considered whether these perturbations might be perceptible to human listeners? \n\n2. Were all baseline models subjected to similar adversarial training procedures as the proposed F-SAT model? Consistency in adversarial training across baseline models is essential to ensure a fair comparison of robustness improvements. If not, would the authors consider including adversarially trained baselines in future comparisons?\n\n3. How sensitive is F-SAT to the choice of hyperparameters, particularly the frequency ranges and perturbation magnitudes used for adversarial training?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. DeepFakeVox-HQ stands out as a substantial addition to the field, with over 1.3 million samples, including 270,000 high-quality deepfake samples from 14 sources. This dataset addresses the limitations of existing datasets in diversity and scale, making it a valuable resource for benchmarking future detection models. 
Releasing this dataset would have a broad impact on the community.\n\n2. The F-SAT method is an important innovation, targeting high-frequency features that are critical for detection but vulnerable to adversarial manipulation. This frequency-focused adversarial training enhances model robustness without compromising accuracy on clean data, addressing a key gap in existing deepfake detection methods.\n\n3. Comprehensive Experimental Evaluation: \n The experimental design is extensive, evaluating performance across standard benchmarks (ASVspoof2019 and WaveFake) as well as the authors' own test dataset. F-SAT demonstrates clear improvements in robustness across multiple corruption and adversarial attack scenarios. The addition of an ablation study further supports the effectiveness of the proposed method.\n\n4. Extending RandAugment from image processing to audio is an inventive adaptation that helps improve model robustness on both clean and corrupted audio. This demonstrates the authors' resourcefulness in leveraging existing techniques and could be beneficial for future work in audio data augmentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of deepfake audio detection, presenting two major contributions: (1) the creation of DeepFakeVox-HQ, the largest and most diverse public dataset for deepfake audio detection, which enables realistic testing conditions and exposes limitations in existing models, and (2) the introduction of Frequency-Selective Adversarial Training (F-SAT), a novel approach that improves detection robustness by focusing on high-frequency audio components. The work is well-written and logically structured, making complex concepts accessible, and holds significant potential for advancing the robustness and reliability of deepfake audio detection models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The paper does not specify whether baseline models were subjected to adversarial training. If only the F-SAT model received this enhancement, it could bias the results. Including adversarially-trained versions of baseline models using contemporary adversarial methods would provide a fairer comparison and highlight F-SAT’s unique advantages.\n\n2. While F-SAT’s focus on high-frequency components is intriguing, the rationale behind the reliance on high frequencies for detecting deepfake audio could be further elaborated. \n\n3. Adversarial training, especially in the frequency domain with iterative updates, can be computationally demanding. Assessing F-SAT's efficiency, particularly compared to baseline models, would improve the paper's practicality.
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This work makes three main contributions: (1) a carefully organized dataset, (2) a deepfake detection method, and (3) robustness against adversarial attacks (with the setting focusing on high-frequency signals).\nIn general, the contributions of this work are multi-fold." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Glad to review the paper.\nThis paper proposes a novel method, F-SAT, for deepfake audio detection.\nThe topic of this work is promising, and the paper is easy to follow.\nI believe this work has reference value for domain-related researchers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My major concern is whether the contributions (or advantages) of this work are over-claimed.\nRegarding the dataset, although it is well organized and processed, the samples are generated using existing approaches; thus, \"the largest\" is not a significant contribution.\nRegarding generalization, as in Table 2, the significantly superior results of the proposed method are achieved on the self-organized dataset, DeepFakeVox-HQ. However, as the authors introduced in Section 3, there are overlapping synthesis methods between training and testing data in this group of results (as in Figure 6). Thus the results on DeepFakeVox-HQ cannot indicate out-of-distribution generalization.\nRegarding enhancing robustness, in the last paragraph of the related work section, the referenced solutions were published in 2019, 2018, and 2018; I am not sure whether any recent works focus on the adversarial issue, but if there are, they should be discussed or compared."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Which model was used to create the plot on the left side in Figure 2?\n- How many hours of speech does the dataset encompass exactly? How long are the samples?\n- Are the samples aligned? Do all models synthesize speech using the same input sentences?\n- Is it possible to add a data sheet that outlines the exact sources and utterance lengths per source?\n- Are the WaveFake test samples also part of the DeepFakeVox-HQ test set?\n- WaveFake contains Japanese-language JSUT samples.\n - Are these part of the dataset?\n - Should the caption of Table 1 make this explicit? Since WaveFake is listed as an English-language data set, I assume JSUT is not considered a part of WaveFake in this paper.\n - Do the utterance numbers in Table 1 exclude JSUT?\n - If yes, should this be mentioned somewhere else?\n\n- Is it possible to include leading works from the audio classification world, like the Audio Spectrogram Transformer (AST) [1], in the evaluations? Related work [2] found it to perform well on the WaveFake dataset. It would be interesting if it also outperforms other methods on DeepFakeVox-HQ.\n\n- The WaveFake paper [3] trains with binary settings with fake audio from a single source and measures generalization. Training on which source network led to the numbers in Table 2? 
Are the numbers comparable to the related work?\n\n- Which software libraries have been used to implement this project?\n\n- Which hyperparameters underpin the network training?\n\nRelated work:\n[1] AST: Audio Spectrogram Transformer, https://arxiv.org/pdf/2104.01778,\n[2] Towards generalizing deep-audio fake detection networks, https://arxiv.org/pdf/2305.13033,\n[3] WaveFake: A Data Set to Facilitate Audio Deepfake Detection, https://arxiv.org/abs/2111.02813" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper tackles a significant problem.\n- The related work is well-researched and described.\n- The adversarial attack perspective is interesting.\n- Authors ensure their results are up to date, combining existing datasets with samples from commercial models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper \"I CAN HEAR YOU: SELECTIVE ROBUST TRAINING FOR DEEPFAKE AUDIO DETECTION\"\nintroduces the DeepFakeVox-HQ data set, which contains audio from 14 sources.\nIn addition to the dataset, the authors introduce Frequency-Selective Adversarial Training (F-SAT), a training method that focuses on the high-frequency part of the spectrum. In addition to F-SAT, this paper evaluates robustness concerning various input perturbations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Traditional compression algorithms like MP3 remove high-frequency content; according to line 84, F-SAT focuses on this part of the spectrum.\n- If I understand correctly, compression is not part of the corruption set, as shown in Figure 7. 
Including compression would have been important for real-world applicability.\n- Data-set details like the length in hours or training hyperparameters like the learning rate are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024i,\ntitle={I Can Hear You: Selective Robust Training for Deepfake Audio Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2GcR9bO620},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in AI-generated voices have intensified the challenge of detecting deepfake audio, posing risks for scams and the spread of disinformation. To tackle this issue, we establish the largest public voice dataset to date, named DeepFakeVox-HQ, comprising 1.3 million samples, including 270,000 high-quality deepfake samples from 14 diverse sources. Despite previously reported high accuracy, existing deepfake voice detectors struggle with our diversely collected dataset, and their detection success rates drop even further under realistic corruptions and adversarial attacks. We conduct a holistic investigation into factors that enhance model robustness and show that incorporating a diversified set of voice augmentations is beneficial. Moreover, we find that the best detection models often rely on high-frequency features, which are imperceptible to humans and can be easily manipulated by an attacker. To address this, we propose the F-SAT: Frequency-Selective Adversarial Training method focusing on high-frequency components. Empirical results demonstrate that using our training dataset boosts baseline model performance (without robust training) by 33%, and our robust training further improves accuracy by 7.7% on clean samples and by 29.3% on corrupted and attacked samples, over the state-of-the-art RawNet3 model." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deepfake audio detection", "Audio augmentations", "Frequency-Selective Adversarial Training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b98880e448b91432ac0226cc5e9573acbadb0ca9.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/da76cb98372407130972cee048e78bf77d22ee53.zip" }, "title": { "value": "I Can Hear You: Selective Robust Training for Deepfake Audio Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2GwMazl9ND
Algorithmic Stability Based Generalization Bounds for Adversarial Training
main
Active
algorithmic stability;generalization;adversarial training
learning theory
3;5;6;8
3;3;4;4
2;2;3;3
1;3;3;3
1;3;3;3
5.5
3.5
2.5
2.5
2.5
0.83205
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Some small typos: \n\t- Line 190: pernutation -> permutation\n\t- Line 267: exist -> exists\n\t- Line 483: It then curious -> It is then curious\n\n- How does the bound in [1,2] change when considering $\\text{tanh}_{\\gamma}$-PGD?\n- Have you tried larger $\\gamma$s? $\\gamma = 10^{5}$ seems to be very far from sign-PGD in Figure 2 (b). It would be interesting to see how $\\text{tanh}_{\\gamma}$-PGD behaves when it’s close to sign-PGD.\n- Can you construct an experiment where the dependence on $n$ is displayed? For example, taking a smaller number of samples from the studied datasets in order to see how the generalization gap grows. A synthetic distribution could also be employed where more and more samples are drawn and the gap decreases to zero for finite $\\gamma$.\n\n**References:**\n\n[1] Wang et al., Data-dependent stability analysis of adversarial training, ArXiv 2024.\n\n[2] Xiao et al., Stability Analysis and Generalization Bounds of Adversarial Training, NeurIPS 2022" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Simple theory and an easy-to-follow paper. 
I didn’t read the proofs in full detail, but the main paper is easy to follow and the analysis and experiments are reasonable.\n\n- I found the analysis of the expansiveness of the adversarial attack operator very interesting and, to my knowledge, this has not been considered before.\n\n- When considering finite $\\gamma$, the authors can show that their generalization upper bound converges to zero with increasing number of training samples $n$." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors study the generalization of adversarial training with projected gradient descent. They provide uniform stability upper bounds of the generalization gap that consider the expansiveness of the adversarial attack operator. In the particular case of replacing the $\\text{sign}(x)$ operation in the PGD attack with $\\text{tanh}(\\gamma\\cdot x)$, they can show that their generalization upper bound decays to zero with the number of samples $n$ for a finite value of $\\gamma$. The experimental evaluation shows the tradeoff between generalization and robustness given by $\\gamma$, where smaller values of $\\gamma$ obtain good generalization but poor robustness and the opposite happens for larger $\\gamma$." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Authors claim that their upper bound converges to zero with increasing number of training samples $n$ and the ones of [1,2] do not. This is misleading, as [1,2] do not consider the expansiveness of the attack operator, and the bound provided in this work, in the same setup as [1,2], does not vanish with $n$ (see lines 369-371). \n\n- The difference with the previous bounds is not clearly covered in the paper. 
The proof technique and assumptions are very similar to [1,2]; nevertheless, the bounds in [1,2] are not presented in the work and there is no discussion about how to integrate previous bounds with the expansiveness setup introduced in this work, making it difficult to assess the contributions. It would be nice to add a discussion about what role expansiveness plays in the result of [1,2], i.e., can it result in a vanishing upper bound with $n$? It would also be good to have a table comparing the different upper bounds." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Which part of the analysis bypasses prior work and makes the bounds decay as $O(\\frac{1}{n})$?\n- In the experiments of Section 6, why do you change the threat model (finding different values of $\\lambda_p$)? One could imagine experiments with different steepest descent algorithms for the solution of the inner maximization problem, where the threat model does not change (i.e., projecting every time to the same $\\ell_\\infty$ balls around the original points). Of course, different steepest ascent algorithms (besides the commonly used sign gradient ascent) will perform worse in finding adversarial examples, so the number of inner iterations should be adjusted appropriately. However, I believe this could be an interesting experiment to conduct."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper studies an interesting problem: the large generalization gap of robust empirical risk minimization (adversarial training) in neural networks. This work leverages the framework of uniform stability, which has been rather unexplored in the robust learning community, and could potentially provide insights on this topic. Based on the theoretical analyses, the authors propose a sensible relaxation of the commonly used PGD attack, using the $tanh$ function instead. Finally, I agree with the authors that the optimization algorithm in the inner maximization problem has not received adequate attention in the literature, and thus, its study is welcome (despite its limitations—see below)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies generalization bounds for robust training by leveraging the framework of uniform stability. The authors analyze $\\ell_\\infty$ perturbations and derive several upper bounds on the generalization gap of predictors. They then investigate experimentally the performance of adversarially trained models using several algorithms to solve the inner maximization problem of the robust objective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper is unfortunately difficult to follow, making it challenging to assess its content due to presentation issues. Furthermore, the conclusions seem unoriginal to me. In particular, I identified the following weaknesses:\n\n- Poor presentation: There are many instances where the text is not polished, with numerous grammatical errors (see the non-comprehensive list at the end of the Weaknesses). 
Additionally, the presentation of the technical results could be substantially improved (e.g., Theorem 4.1: remind the reader of the constants $\\beta, \\Gamma_X$). Furthermore, the authors should mention in the introduction that all of their results are solely about $\\ell_\\infty$ perturbations.\n- Introduction of many ad-hoc terms to denote well-established concepts: In many places, the authors use obscure words to define concepts that are well-defined in learning theory. For instance, lines 258-259: \"mis-matched generalization gap\" — this is just the standard generalization gap of a predictor trained robustly. Several such choices make it difficult for readers to comprehend the contributions of this work. Similarly, with so-called \"RG-PGD\" and the \"expansiveness property\" (a relaxed notion of Lipschitz continuity).\n- Unclear contributions: The paper does not clearly describe how the derived bounds differ from those of Xiao et al. (2022b) and Wang et al. (2024). In particular, the bounds from these prior works are not presented, and they are solely critiqued on the basis that they do not vanish with increasing sample size. Furthermore, the criticism of the non-smoothness of the loss function adopted in prior work seems unfounded (\"The source of the non-smoothness is, however, not explained in their work\"). Even for linear models under $\\ell_\\infty$ perturbations, a cross-entropy loss is non-smooth. Hence, the property of non-smoothness is well-motivated.\n- Unclear motivation for experiments: The authors seem to identify the sign function in the solution of the inner maximization problem in the robust objective as problematic, and they suggest an alternative based on a smooth approximation. However, they do not show any benefits in terms of robustness with the new method. 
Furthermore, the fact that for small $\\gamma$ we do not observe overfitting and the generalization gap is small appears to be a trivial observation, as the evaluation basically approaches the standard case of no perturbations. In short, it is not a good method for finding worst-case $\\ell_\\infty$ perturbations.\n- Results of Section 6: The authors mention the connection between adversarial training and steepest descent methods, but it is clear that this has been the motivation for the iterative solution of the inner maximization problem since the introduction of adversarial training. Furthermore, the experiments fail to highlight anything new, in my understanding (basically optimising an $\\ell_\\infty$ objective yields better coverage against $\\ell_\\infty$ attacks).\n\nGrammatical errors (non comprehensive list):\n- in the abstract: \"These expansiveness parameters appear not only govern the vanishing rate of the generalization error but also govern its scaling constant.\"\n- line 190: \"perturnation\" -> perturbation\n- line 202: \"related with\" -> related to\n- line 241: \"draw\" -> draws\n- line 245, 256: \"descend\" -> descent\n- line 316: \"independent with\" -> independent of\n- lines 536-537: \"Like all up-bound based theoretical results, such an approach is adequate for understanding performance guarantees but may be inadequte to explain poor generalization.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address my comments above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) This paper is clear and easy to understand.\n\n(2) This paper studies the algorithmic stability of adversarial training from an interesting angle of the PGD attack.\n\n(3) Experiments demonstrate that using tanh to replace sign function can improve the generalization performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the algorithmic stability of adversarial training with a focus on how the inaccuracy of PGD attack affects the stability. Theoretical analysis are provided to justify that the sign function in PGD updates can significantly harm the stability, leading to a worse generalization (gap)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) While the paper considers the algorithmic stability of PGD attack, a missing component is the convergence of PGD attack. Intuitively, if we always use a fixed attack direction, then the algorithmic stability is not largely affected by the attack. However, the attack is not efficient. When using PGD attack, there is supposed to have a trade-off: with more iterations, the stability gets worse, but the attack becomes stronger. If at the test stage the attacker uses a very strong attack, e.g., AA attack, then balancing the attack effectiveness and stability is essential to obtain a better robust testing performance. 
Could the authors elaborate more from this perspective?\n\n(2) Please highlight the technical challenges for the theoretical contributions in this paper.\n\n(3) Please consider using some SOTA methods from RobustBench, e.g., leveraging synthetic data in adv training, to conduct the experiments. While improving the sign function seems to be helpful as illustrated by this paper, there is not enough evidence to demonstrate that this is one of the key issues in adversarial training.\n\n(4) Minor: In one line of research, to save the computation budget of adversarial training, algorithms have been proposed to explore fast adversarial training: instead of calculating the attack at each iteration, they update the attack for each sample for one step at each iteration, e.g., \n\nCheng, Xiwei, Kexin Fu, and Farzan Farnia. \"Stability and Generalization in Free Adversarial Training.\" arXiv preprint arXiv:2404.08980 (2024).\n\nI'm wondering if the authors can provide some comments on algorithms of this type." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In total, I think this is a good paper, but there are some points that can improve this paper. Please refer to the weaknesses part."
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The new stability theory differs from existing ones in terms of assumptions and the form of bounds. I like the separation of adversarial training perturbation $J$ and the evaluation perturbation $\\pi$, which means that the theory in this paper is a more abstract framework and can be applied in many cases.\n- The writing is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides a new stability bound for adversarial training, where the inner maximization perturbation $J$ and the evaluation perturbation $\\pi$ can be different. The introduced term expansiveness can partly explain robust overfitting and experiments are conducted to validate the theoretical results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It seems that the framework in this paper can not provide a proper description for the $Q(c^*)$ term, we need to calculate it according to the concrete choice of $J, \\pi$. However, to make this framework more significant, examples of how to calculate $Q(c^*)$ and how to choose $c^*$ should be added. Note: I mean examples in practice but not examples with overly simple assumptions (such as the assumption on the second moment in lines 293-294 and the assumption in Corollary 5.2 that a term is bounded by $B$ with probability 1). Just like the VC dimension, if we can not calculate the VC dimension of some hypothesis classes, the bound with the VC dimension is meaningless.\n- Typo: in line 149, it should be \"the label space $\\mathcal{Y}$ is finite\"\n- A minor issue: many equations in this paper are numbered, in fact, the equations that are not used later need not be numbered. 
For example, equation (2) is not used.\n- In lines 87-88, the paper says that \"the bound convergence to a constant, this helps explain the robust overfitting phenomenon\". In fact, a lower bound of the generalization gap that converges to a constant can explain overfitting. However, an upper bound can not because your bound may not be tight enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024algorithmic,\ntitle={Algorithmic Stability Based Generalization Bounds for Adversarial Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2GwMazl9ND},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we present a novel stability analysis of adversarial training and prove generalization upper bounds in terms of an expansiveness property of adversarial perturbations used during training and used for evaluation. These expansiveness parameters appear not only govern the vanishing rate of the generalization error but also govern its scaling constant. Our proof techniques do not rely on artificial assumptions of the adversarial loss, as are typically used in previous works. Our bound attributes the robust overfitting in PGD-based adversarial training to the sign function used in the PGD attack, resulting in a bad expansiveness parameter. The peculiar choice of sign function in the PGD attack appears to impact adversarial training both in terms of (inner) optimization and in terms of generalization, as shown in this work. This aspect has been largely overlooked to date. Going beyond the sign-function based PGD attacks, we further show that poor expansiveness properties exist in a wide family of PGD-like iterative attack algorithms, which may highlight an intrinsic difficulty in adversarial training." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "algorithmic stability", "generalization", "adversarial training" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d1e69ced75caa73b96663f9202af3a2fd46e1e7f.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Algorithmic Stability Based Generalization Bounds for Adversarial Training" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2H6KhX1kJr
Transformers and slot encoding for sample efficient physical world modelling
main
Active
Transformers;world modeling;slot attention
learning on time series and dynamical systems
3;3;3;3
5;4;4;4
2;3;1;2
1;2;1;2
1;3;1;3
3
4.25
2
1.5
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My main question is the use of \"slot attention\" - as far as I can understand there is actually no slot attention in this model, am I right? it seems that the corrector just uses cross-attention and not slot attention? (the difference would be the soft-max axis)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Originality:\nThe model presented is a very mild variation on previously published works - using VQ-VAE encodings is nice (though probably requires a bit more analysis) and the general recurrent setup is appealing.\n\nQuality:\nThe proposed model variants (pre and default) are interesting and probably a good step towards analyzing the model's behaviour.\n\nClarity:\nThe paper is nicely structured and well written.\n\nSignificance:\nThe context of the work is important, but see below for criticism." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper present a slotted recurrent network model which uses transformers as the main backbone for \"world modeling\". 
In this context, the resulting model is an \"object centric\" learning model which cross-attends into the VQ-VAE encoded input frame, updates the current state and then predicts the next state using a transformer. The model is trained for state prediction (with two variants, either next state prediction or current state prediction) and is demonstrated to work mildly better than a single external baseline (STEVE) and one ablation model (decoder only, where there is no explicit state representation, just prediction and decoding). The experiments are run on a physical reasoning task (PHYRE) and the output is a classification readout." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately the paper suffers from several weaknesses.\n\nExperimental validation:\nThe method is only validated on one task and even on that task the results are not very convincing. The models perform very closely to one another and the claims for efficient learning with the model are not well supported.\n\nAnalysis:\nIn general I don't mind when results of a model are not competitive with baselines or ablations as long as there is good analysis of why that is the case, and how this can improve our understanding of the model or problem. Here, however, these are absent - there's very little analysis of what the model learns, how it does that and what determines its performance.\n\nPresentation:\nThe experimental result figures are not to the level I would expect to see in an ICLR paper - raw training curves are fine if they tell a clear story. Here, however, they do not - there is very little signal there to observe. Export quality is also quite low and is not at the level I would expect.\n\nNovelty:\nWhile usually I don't think novelty is a determining factor for a paper, I feel here this is quite lacking and the proposed model is indeed quite close to existing literature (SAVI++, PARTS, and more).
These are cited in the paper, so I have no complaints on that side, but given the generally weak results and analysis, I think this hurts the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "* __Slot encodings.__ How and where do the authors utilize slot encodings, and what specific benefits do they offer in this context?\n\n* __Experimental design.__ Why do the authors limit their evaluation to a single dataset? Additionally, what is the rationale for selecting STEVE as the sole baseline, excluding other relevant related works?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The authors address an important problem of physical world modeling using structured latent representations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Future-Predicting Transformer Triplet (FPTT), an architecture aimed at modeling physical world dynamics from video data. It employs Transformers to learn object-centric representations, enabling the model to predict physical interactions between objects more effectively. The architecture is tested on the synthetic video dataset PHYRE.
The authors also perform an ablation study to understand the contribution of different components." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* __Missing slot encodings and object-centricity.__ \n\nWhile the paper includes _slot encoding_ in its title, the approach itself appears to lack this feature. As I understand it, $\\Lambda$ was intended to serve as slot encodings, but it is not even referred to as such. From the description, $\\Lambda$ seems more like a standard intermediate representation in transformer layers rather than a distinct slot encoding.\n\nIn Appendix A2, the authors reference [1] for their transformer implementation, where they also mention using four slots. However, the referenced implementation does not include a parameter for the number of slots, leaving it unclear how slot encodings are actually integrated into the proposed approach, or if they are implemented at all.\n\nFurthermore, the authors state that _\"the representation remains opaque and lacks interpretability,\"_ which raises questions about the motivation for using _slot encodings_ in the first place.\n\n\n* __Experimental methodology.__ \n\nFirstly, the authors evaluate their model on a single, very simplistic dataset, while using a relatively large number of parameters. This limited evaluation setup may not provide sufficient empirical evidence to support their claims.\n\nA more significant issue lies in their positioning among related works and choice of baselines. 
The authors overlook most of the recent related work (e.g., [2, 3, 4, 5]) and rely solely on STEVE as a baseline, aside from variations of their own approach.\n\n\n* __Presentation.__ \n\nIn addition to unclear explanations of their approach and its novelty, the authors fail to position it effectively within the existing literature, lacking a comparative analysis with prior work.\n\nAll the figures also present issues: they are unnecessarily large, some are in low resolution, and it is often unclear what the authors aim to demonstrate.\n\n\\\nReferences: \n\n\n[1]: Andrej Karpathy. nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs (Generative Pretrained Transformers), 2023. URL https://github.com/karpathy/nanoGPT. \\\n[2]: Nakano, A., Suzuki, M. and Matsuo, Y., 2023. Interaction-based disentanglement of entities for object-centric world models. In The Eleventh International Conference on Learning Representations.\\\n[3]: Villar-Corrales, A., Wahdan, I. and Behnke, S., 2023, October. Object-centric video prediction via decoupling of object dynamics and interactions. In 2023 IEEE International Conference on Image Processing (ICIP) (pp. 570-574). IEEE.\\\n[4]: Wu, Z., Dvornik, N., Greff, K., Kipf, T. and Garg, A., 2022. Slotformer: Unsupervised visual dynamics simulation with object-centric models. arXiv preprint arXiv:2210.05861.\\\n[5]: Daniel, T. and Tamar, A., DDLP: Unsupervised Object-centric Video Prediction with Deep Dynamic Latent Particles. Transactions on Machine Learning Research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How would the architecture compare against Dreamerv3 or Slotformer in world modeling?\n- What happens if u try to make the decoder only model more efficient by reducing the number of tokens or dimensionality of the token?\n- How would the paper compare against the baselines in benchmarks proposed in Steve or DreamerV3 or SlotFormer?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper focuses on world modeling which is an important problem.\n- The writing and presentation is clean, which makes understanding the paper easy.\n- The paper compares efficiency and accuracy, which helps understand the trade-offs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of creating sample-efficient models for physical world modeling, focusing on predicting object interactions in dynamic environments.\n\nThe authors propose an architecture that combines Transformers with slot encoding to improve sample efficiency and stability in world modeling. Unlike existing models that operate at the image level, this model incorporates object-based representations, enabling it to capture and predict interactions more accurately.\n\nTheir model, named Future-Predicting Transformer Triplet (FPTT), uses a corrector-predictor-decoder triplet of Transformers. 
The corrector aligns the internal state representation with the actual video evolution to prevent model drift, the predictor forecasts the next state based on the corrected representation, and the decoder converts this predicted state back into tokens for further training.\n\nExperiments using the PHYRE dataset (a benchmark for physical reasoning) show that FPTT achieves greater sample efficiency and training stability compared to baseline models like STEVE. The model’s structured approach enables it to generalize well in physical environments simulated with basic Newtonian physics.\n\n\nIn summary, the paper presents an architecture that leverages the strengths of Transformers and slot encoding for efficient and stable world modeling, demonstrating improvements in tasks requiring understanding and predicting object dynamics in a physical environment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The contribution is not significant: having an internal representation of the previous timesteps is common in world modeling architectures, for instance: Dreamer: https://arxiv.org/pdf/2301.04104, Slotformer: https://arxiv.org/pdf/2210.05861. It's unclear to me how this work is a better architecture than Dreamer or Slotformer or other recent works.\n- The evaluations and baselines are weak, the paper only compares against STEVE. I don't think STEVE is a fair comparison, as their objective was to get interpretable object representations and not necessarily the metric/benchmark the paper uses for evaluation. Further, the Decoder-Only model seems to perform as well as the proposed architecture on almost all tasks except efficiency.\n- Lastly, the work only compares on a single benchmark, which is not used in the baseline works such as STEVE. I think a fair thing to do would be to compare on benchmarks shown in prior baselines, so we assume they are tuned well.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- It appears that slot attention (or inverted attention) is absent, with only cross attention being mentioned. Is this an oversight in the explanation, or is it indeed absent? If it's truly absent, how can we be confident that it captures object-level dynamic understanding?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The authors' approach of capturing slot-like internal representations from VQ tokens, instead of CNN embeddings, was intriguing and showed promising results.\n- They propose a novel evaluation protocol for testing world model architecture, utilizing the shared benchmark with different protocols." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a world-modeling architecture that captures object-level interactions in the scene, instead of the scene itself. 
The architecture consists of three transformer-based models: a corrector, a predictor, and a decoder, alongside a VQ-VAE tokenizer for image encoding.\n\nTo evaluate the proposed model’s performance as a world model, the authors provide a physical reasoning task using the PHYRE benchmark, demonstrating that their model outperforms STEVE, the baseline, in terms of prediction accuracy and sample efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Lack of Novelty and Justification**: The main idea and direction of the paper have been explored in several existing works (OCVT[1], SlotFormer[2]). Although the authors are likely aware of this, they fail to convincingly demonstrate why their approach is unique and necessary for the proposed direction, compared to previous works.\n- **Architecture and Design Choices**: The proposed architecture appears to be a combination of SAVi[3] and STEVE[4] architectures, but with a predictive loss function instead of a reconstructive loss function. While this variant may be promising, the paper lacks sufficient details to justify the design choices, such as thorough ablation studies. Furthermore, it is unclear whether the proposed model can outperform existing works, as it is not comprehensively compared.\n- **Lack of Clarity in Architecture and Experiment Description**: The architecture section of the paper lacks clarity and detail, particularly in the description of the core components: corrector transformer, predictor transformer, and decoder transformer. Although the author provides a high-level overview of these architectural concepts, the explanation is insufficient given the emphasis on this part as the paper's core contribution. 
To thoroughly understand and investigate the proposed architecture, a detailed formulation of these components is necessary, including their implementation details and mathematical representations.\n \n Furthermore, the experiment section lacks sufficient details about the metrics used in the evaluation. To ensure transparency and reproducibility, it is essential to provide a clear explanation of each metric, including how the metric is calculated and what it represents.\n \n- **Limited Evaluation of Proposed Architecture:** The author only provides a single task to evaluate the proposed architecture, which is insufficient to demonstrate its generality and versatility. To thoroughly assess the world modeling ability of the proposed architecture, it is essential to evaluate it on a diverse range of tasks that require the model to infer and understand the relationships between objects and scenes. Additionally, to facilitate a fair comparison with existing works, the authors may consider including several generation tasks (e.g. OBJ3D[1], CLEVR[5], Physion[6]), as has been done in prior research.\n \n To further demonstrate the effectiveness of the proposed model, the author could compare it with a broader range of baselines, such as SlotFormer, OCVT, SAVi, and other relevant models mentioned in the paper. Although these models are typically used for generation tasks, their predicted representations can be evaluated using the same protocol as the proposed model. Additionally, comparing with image-based world models would illustrate the advantages of object-level world models over their image-based counterparts. 
This approach would provide a more comprehensive understanding of the proposed model's performance and allow for a more accurate assessment of its strengths and limitations relative to existing approaches.\n \n- **Ablation Results Raise Questions about Proposed Model:** The ablation results indicate that the ‘decoder-only’ model performs comparably to the proposed models. This suggests that VQ-tokenization and predictive loss might be sufficient to drive performance without explicitly enforcing object-level representations. This outcome seems misaligned with the paper's main theme, which emphasizes the importance of object-level representations. Consequently, this misalignment raises questions about the necessity and effectiveness of the proposed model's architecture.\n\n[1] Wu, Yi-Fu, Jaesik Yoon, and Sungjin Ahn. \"Generative video transformer: Can objects be the words?.\" International Conference on Machine Learning. PMLR, 2021.\n\n[2] Wu, Ziyi, et al. \"Slotformer: Unsupervised visual dynamics simulation with object-centric models.\" arXiv preprint arXiv:2210.05861 (2022).\n\n[3] Kipf, Thomas, et al. \"Conditional object-centric learning from video.\" arXiv preprint arXiv:2111.12594 (2021).\n\n[4] Singh, Gautam, Yi-Fu Wu, and Sungjin Ahn. \"Simple unsupervised object-centric learning for complex and naturalistic videos.\" Advances in Neural Information Processing Systems 35 (2022): 18181-18196.\n\n[5] Johnson, Justin, et al. \"Clevr: A diagnostic dataset for compositional language and elementary visual reasoning.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.\n\n[6] Bear, Daniel M., et al. \"Physion: Evaluating physical prediction from vision in humans and machines.\" arXiv preprint arXiv:2106.08261 (2021)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024transformers,\ntitle={Transformers and slot encoding for sample efficient physical world modelling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2H6KhX1kJr},\nnote={under review}\n}" }, "abstract": { "value": "World modelling, i.e. building a representation of the rules that govern the world so as to predict its evolution, is an essential ability for any agent interacting with the physical world. Recent applications of the Transformer architecture to the problem of world modelling from video input show notable improvements in sample efficiency. However, existing approaches tend to work only at the image level thus disregarding that the environment is composed of objects interacting with each other. In this paper, we propose an architecture combining Transformers for world modelling with the slot-attention paradigm, an approach for learning representations of objects appearing in a scene. We describe the resulting neural architecture and report experimental results showing an improvement over the existing solutions in terms of sample efficiency and a reduction of the variation of the performance over the training examples." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Transformers", "world modeling", "slot attention" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/41545b0c8d7fd1d4588c8acde59513165fe992ef.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/01e4912bc09155b6ba09356814fea9905ca5c97d.zip" }, "title": { "value": "Transformers and slot encoding for sample efficient physical world modelling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2HN97iDvHz
LLM-Powered Predictive Decision-Making for Sustainable Data Center Operations
main
Active
Large Language Models;Generative AI;Sustainability;Real-time decision-making
alignment, fairness, safety, privacy, and societal considerations
3;3;3
5;4;2
2;2;2
1;3;2
3;3;3
3
3.666667
2
2
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What is the prediction error for runtime and energy consumption (also in comparison with baselines)?\n\nIt is well known that the lack of (remaining) run time of jobs is one of the main issues in achieving schedules that are provably optimal in some respects, e.g., minimizing waiting time. Can some variant of shortest job first (SJF) be used here?\n\nThe difference between the simple and greedy algorithms should be better highlighted." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The idea to leverage an LLM to create a powerful representation instead of hand-crafted features. This also brings a series of positive properties, as listed in the paper.\n+ Good results on the presented data center use case.\n+ Includes discussions on practical problems in applying the scheme." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes using LLMs to predict performance metrics of jobs submitted to a data center. Metrics include runtime, waiting time, and energy consumption. The main idea is to leverage the power of LLMs in creating meaningful representations of complex data, such as the source code, which can then be trained into specific predictions.
Based on these predictions, the authors propose two scheduling algorithms, which they apply to a data center use case to show savings of up to approximately 30%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of details on what is considered a job and its source code. Implicitly (especially in the introduction), authors seem to assume Python scripts for machine learning, but real-world workloads might differ from this assumption. Please be more clear on any assumptions and restrictions on the jobs considered.\n- Authors estimate the model performance metrics solely from the source code. Still, the execution time (and all other performance metrics) for some models, e.g., for LLMs due to their autoregressive nature, depends heavily on the generated output based on the input/prompt and not only on the source code. (see also next point)\n- The paper lacks results on the achieved prediction accuracy of the considered metrics. Here, it would be nice to have some statistics on the achieved error between predicted and real values as well as a comparison with some of the mentioned related works for job prediction.\n- Also, an ablation study to see if the improvement stems more from the smarter scheduling algorithm or from the more precise predictions would have been nice. This would also likely allow to hint into how well the approach might generalize to other data centers that might use a different baseline scheduling." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thank you for submitting to ICLR. I really like the idea of using LLMs for predictive data center resource allocation. However, the paper seems incomplete and lacks a robust evaluation for the proposed method. Addressing this issue would strengthen the paper. \n\n\nQuestions: \n\nCould you provide more details about the LLM model used in the experiments, including\n- the LLM model type, and how it was pre-trained or fine-tuned for this task\n- the dimension of the LLM output representations\n- how the LLM is leveraged to generate the output representation (e.g., prompting method)\n\nWhat is the computational and cost overhead of running the LLM-based prediction framework? How does this compare to the benefits gained in terms of improved resource allocation and reduced energy consumption?\n\nCan you provide more details about the evaluation, including\n- detailed experimental setup\n- data center scale\n- the number and distribution of different task types\n- baseline scheduling algorithms to compare with\n- evaluation metrics\n- ablation study\n\nHow does the framework handle prediction errors? Is there any mechanism to adapt predictions based on actual execution results?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper designs a novel end-to-end solution using LLMs for predictive data center resource allocation.
Also, their combination of LLM and probe network reduces the amount of training data needed.\n\nTheir framework can generalize to diverse job task types including composite and unseen tasks, making it more flexible than traditional methods that required separate models for different task types.\n\nThe writing is easy to follow and clearly explains their model architecture." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an LLM-powered automatic predictive scheduling system for AI workloads in a data center, with the goal of optimizing both performance (job completion time) and energy consumption. The system consists of two main components: 1) an LLM-based predictive model that takes a job's source code as input and predicts its execution time and energy consumption; 2) a decision-making model that uses these predictions for deciding GPU resource allocation to each job. Through collaboration with a data center, the authors demonstrated a 32% reduction in energy consumption and a 30% decrease in waiting time. The key innovation is using LLMs to generate code representations that enable generalizable prediction across diverse task types." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks information about which pre-trained LLM was used, details about its output representation, and how the LLM was leveraged to generate the output representation (e.g., prompting method).\n\nThe evaluation section seems to be incomplete. More comprehensive evaluation details are necessary to evaluate whether their proposed solution works.\n\nThe proposed method could raise privacy concerns when sending confidential user-submitted code to the LLM for analysis. \n\nNo discussion of the computational and cost overhead of running the LLM-based prediction framework."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. My first question is, what is the use-case that you are trying to solve?\n2. What are the accuracy and prediction metrics of your system?\n3. What is the scale of the datacenter you collaborate with?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Thank you for submitting your paper to ICLR. I enjoyed reading the paper as it is well written generally.\n2. The paper covers an important topic that many datacenter operators care about, how to better utilize accelerator resources.\n3. The paper uses data from a data center and I believe is the first paper to suggest a compound AI system with two LLMs to assign GPU resources." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a compound-AI based technique to predict the performance, energy consumption, and other key operational metrics for data center operations. The paper motivates the problem well, showing how a pre-trained LLM can possibly be used to predict the performance of workloads on different hardware types. This can be further used in scheduling of workloads on devices. 
The authors then devise a scheduling optimization problem along with two algorithms to show how such a deployment can help datacenter operators. The authors run simulations based on a dataset acquired from a production system from a small datacenter over a period of about 2 months. The dataset has an aggregate task count of less than 200 tasks. They adapt the pretrained model using 500 source codes. To label the data (and run their experiments), the authors use two GPU models, A100 and A6000." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think the paper, however, has several shortcomings that I will aim to detail next. The paper is neither strong on the systems side nor on the ML side, and this is the main shortcoming in my opinion. I will detail what I mean by that in the next points:\n1. To start with, I am not entirely sure if, for the scale of the problem you define, the LLM is doing much. For an ICLR submission, I think it would have been better to focus more on the ML side of the problem and not the decision making. After all, you have only provided an overview of the prediction results in the paper in Table 1. However, none of these results show traditional metrics that one would expect on the predictions, e.g., accuracy, recall, F1-Score, MAPE, etc. I would like to see some of these aspects for the method. \n2. There is not much novelty in the ML side of the paper, except maybe with the Align-LLM part. However, the authors treat this in passing only, with very little to no evaluation of even how this extra LLM helps. It would help the paper to do an ablation study with Align-LLM. In addition, you effectively have only two classes for your classifier, A100 and A6000. I wonder how your system would expand to a larger system with, say, 10s of accelerator types?\n3. From a systems perspective, I think there are way too many shortcomings.
Since ICLR is not a systems conference, I will only include the following few shortcomings. First of all, you have a total of less than 200 tasks over a period of 2 months. That is about three tasks per day. Since you are running this in simulations, you can scale this up by, e.g., creating a set that resembles the original tasks you have. There are also multiple other public traces now available, e.g., from Alibaba with GPU workloads (https://github.com/alibaba/clusterdata/tree/master/cluster-trace-gpu-v2020#pai_task_table). That being said, you do not even need GPU traces; you can easily simulate larger systems. \n\n- Second, what is your use-case? A task that runs for short periods? How would you know how long this task runs in a real datacenter unless it is a repetitive workload? Third, how would your system scale with 30+ accelerator types and 10s to 100s of tasks arriving per minute, per second, and per hour?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024llmpowered,\ntitle={{LLM}-Powered Predictive Decision-Making for Sustainable Data Center Operations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2HN97iDvHz},\nnote={under review}\n}" }, "abstract": { "value": "The growing demand for AI-driven workloads, particularly from Large Language Models (LLMs), has raised concerns about the significant energy and resource consumption in data centers. This work introduces a novel LLM-based predictive scheduling system designed to enhance operational efficiency while reducing the environmental impact of data centers.
Our system utilizes an LLM to predict key metrics such as execution time and energy consumption from source code, and it has the potential to extend to other sustainability-focused metrics like water usage for cooling and carbon emissions, provided the data center can track such data. The predictive model is followed by a real-time scheduling algorithm that allocates GPU resources, aiming to improve sustainability by optimizing both energy consumption and queuing delays. With fast inference times, the ability to generalize across diverse task types, and minimal data requirements for training, our approach offers a practical solution for data center scheduling. This framework demonstrates strong potential for advancing sustainability objectives in AI-driven infrastructure. Through our collaboration with a data center, we achieved a 32% reduction in energy consumption and a 30% decrease in waiting time." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Generative AI", "Sustainability", "Real-time decision-making" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/47a15a97f863147475d6efc0911d401fcbd0e830.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a597ac0ec3e436ca69d8f78420a0320877ff1853.zip" }, "title": { "value": "LLM-Powered Predictive Decision-Making for Sustainable Data Center Operations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2HdZPEQUig
Efficient Object-Centric Learning for Videos
main
Active
Object-Centric Learning;Representation Learning;Video;Segmentation;Video Object Segmentation
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;3;3;5
3;5;5;4
1;2;2;3
2;1;1;2
1;1;2;3
3
4.25
2
1.5
1.75
0.426401
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please address the points brought up in the weaknesses above." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. Significant qualitative results are included, including failure cases.\n2. The technical contribution appears to be novel for VOS." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present Interpreter, a VOS method based on hierarchical slot attention that consists of separate image-level and video-level processing. To compute the image-level attention slots, Interpreter uses implicit slot attention to learn object-centric features from an image-trained backbone. Implicit slot attention is also used at the video level to learn object representations across frames, relying on the Sinkhorn divergence to learn correspondence between sets of slots across different frames. Experiments are conducted on the YTVIS-19 and MOVi-E datasets to compare Interpreter to other slot-attention-based methods. An ablation study is carried out on YTVIS-19 to determine the effect of number of slots and clustering distance threshold." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
Issues with Experimental Evaluation.\n\n a. The paper claims Interpreter is a VOS method but performs its evaluation using a Video Instance Segmentation (YTVIS-19) and a Video Semantic Segmentation dataset (MOVi-E). If Interpreter is a VOS method, then it should be evaluated on VOS datasets such as DAVIS [1], and compared to state-of-the-art VOS methods, in order to assess the contribution of the work.\n\n b. The paper claims Interpreter targets long videos, but the lengths of the videos in the chosen datasets are on the order of seconds, not minutes, making it difficult to verify this claim.\n\n c. Qualitative results are included for Interpreter but not for competing methods, making it difficult to assess the performance quality of Interpreter.\n\n2. The exposition of the method lacks mathematical details. In particular, Sinkhorn Divergence is never defined mathematically and the final loss function is not included. This makes it difficult to understand the method beyond a surface level.\n\n3. The related works section lacks mention of query-key-value retrieval-based methods such as STM [2] for VOS, which is a major and important direction for the task. The motivation for using slot attention based methods is not clear.\n\n4. Writing is not direct. For example, it should be explained why Interpreter performs \"unexpectedly well\" in l. 471 and what is \"surprising\" in l. 474. As another example, the phrase in l. 339 \"The last row shows a cute cat.\" seems out of context. \n\n[1] \"The 2017 DAVIS Challenge on Video Object Segmentation\". J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool. arXiv:1704.00675, 2017.\n[2] \"Video Object Segmentation using Space-Time Memory Networks\". Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim. ICCV, 2019."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors clarify the significant discrepancy observed between FG-ARI and mIoU performance in this model? In my view, both FG-ARI and mIoU should be high if object segmentation remains accurate over time.\n\n2. Have the authors considered using only frame-wise slot representations at the second level (where the same slot index per frame corresponds to the same object), rather than applying slot attention at the video level? What would be the implications of this approach?\n\n3. To what extent is the DINOv2 feature extractor crucial for this model? Would the method fail without it?\n\n4. Why is a different number of second-level slots used for the YTVIS-19 and MOVi-E datasets?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written and structured, with clear explanations of the novel hierarchical slot attention mechanism and its advantages in scaling object-centric representation to longer videos.\n\n2. The approach of per-frame slot attention followed by video-level slot attention is both novel and elegant, allowing the model to handle temporal dependencies across the entire video without chunking." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method called Interpreter, aimed at efficient, unsupervised video-level object-centric representation learning. Interpreter introduces a hierarchical slot attention architecture where image-level representations are compressed first, then video-level representations are derived from them using a relaxed optimal transport objective, Sinkhorn Divergence, for unsupervised segmentation. This approach circumvents the typical computational load associated with reconstructing frame-level feature maps, allowing Interpreter to process longer videos effectively. Experiments show that Interpreter achieves strong results on the YTVIS-19 dataset and synthetic datasets like MOVi-E." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The second-level slot number does not stay under ten (8 for YTVIS-19), which contradicts the paper’s claim of handling extensive temporal context effectively (L101).\n\n2. Results on the DAVIS-17-unsupervised dataset are absent, and performance on metrics (FG-ARI and mIOU) shows considerable variation across different benchmarks, suggesting limitations in the model’s generalizability.\n\n3. The discussion around FG-ARI and mIoU metrics lacks sufficient depth, especially in explaining the model’s inability to perform consistently across both benchmarks. It remains unclear why the method does not yield strong outputs on both metrics concurrently​." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* How is the Interpreter more efficient than previous work? (e.g. STEVE, BA, VideoSAUR).\n* How does the model compare to previous work if normalised for the pre-trained architecture? E.g. BA uses ViT-s/8, while ViT-B/14 is used here.\n* Interpreter is developed with long videos in mind. What is the definition of “long” in this work and how is this reflected in the experimental setup?\n* How would the approach compare to more naive objectives, e.g. matching the slots with Hungarian matching and minimising the corresponding distance (e.g. L1/L2)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* I like the work’s technical contribution, the Sinkhorn divergence. However, I’d encourage the authors to include more details (what’s behind the SH function in (2)).\n* The approach is technically sound. It makes a lot of sense to represent a video with a compact set of slot tokens and to compute the segmentation through attention propagation, as described in ll. 206-215.\n* Fig. 1 provides a great overview for the approach, which helps in following the technical details. (Remark: It could’ve been more compact and used vectorised graphics). \n* I enjoyed that the text does not stop after the mixed quantitative results, but instead makes a good effort to analyse and explain them." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work introduces an unsupervised approach to object-based segmentation of video sequences. The approach comprises two stages.
The first stage follows previous work and trains an autoencoder that decomposes an input image into a set of slot tokens. In the second stage, another autoencoder learns to represent the set of slot tokens, extracted from each frame in a video, with a more compact set of video-level slot tokens. To train this autoencoder, the approach leverages Sinkhorn divergence, which establishes a (relaxed and differentiable) correspondence between the set of predicted tokens and the input set. The final output -- a temporally consistent segmentation -- is the result of attention propagation, which relies on the similarity between image-specific slots and the video-level slots. The results on YouTube-VOS and synthetic MOVi-E demonstrate impressive segmentation quality, but the quantitative results are a bit mixed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The exposition, especially the technical part, feels way too congested. I would have preferred more technical details in Sec. 3.2 than Figures 2-4 loading two full pages, which feel a bit like space-fillers. For example, the work does not really explain how the Sinkhorn divergence is computed in Eq. (2), nor does it really explain the architecture of the encoders/decoders in the two stages of training, etc. \n* The results are obviously mixed: On YouTube-VOS the approach discriminates between the objects well, but falls behind on foreground-background segmentation and vice-versa on MOVi-E. I like that the text discusses these weaknesses, but the analysis would have been more convincing with more informative qualitative examples (including the ground truth and the output from previous work).\n* The title falls short on the promise of efficiency. Perhaps the method is efficient, but I did not find convincing arguments or corresponding experiments to support this point.\n* The experiments are a bit too brief. 
I would be curious to see the approach with another pre-trained backbone and dataset (e.g. DAVIS)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "**Q1. What are the main architectural differences between Interpreter and VITA?**\n\nIn terms of architectural design, what distinguishes Interpreter from VITA? Could you specify the unique aspects of Interpreter’s approach, especially in how it addresses hierarchical design and temporal context aggregation?\n\n**Q2. Is there any statistical basis for the analysis beyond observations on a few samples?**\n\nBeyond observations from a limited set of examples, does the paper offer any statistical foundation for its analysis? For instance, is there evidence that specific factors like object movement or motion changes hinder the performance of slot-based approaches? A more comprehensive investigation into such cases could help substantiate the findings.\n\n**Q3. Are additional ablation studies provided to validate the proposed method’s effectiveness?**\n\nApart from the current experiments, are there further ablation studies examining key factors, such as the influence of varying the number of slots (K) or the impact of end-to-end fine-tuning in the second stage?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces a straightforward design for unsupervised video object segmentation using slot attention. This architecture demonstrates remarkable performance on the real-world dataset YTVIS-19." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a hierarchical slot attention approach for handling temporal context in video segmentation. To achieve this, it incorporates a video-level slot that aggregates temporal information across all frame-level slots. Additionally, to smoothly apply the video-level slot for video representation prediction, the paper proposes an attention map propagation technique. For loss calculation, Sinkhorn Divergence is utilized. With these components, the proposed model, Interpreter, achieves state-of-the-art performance on the YTVIS-19 dataset in terms of mIoU and on the MOVi-E dataset in terms of FG-ARI." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Limited Architectural Contribution**\n\nThe primary contribution of Interpreter lies in its hierarchical architecture, a concept previously introduced in the video instance segmentation method, VITA [1]. Similar to VITA, Interpreter employs a hierarchical design where video-level queries aggregate temporal context from frame-level queries. Apart from differences in target tasks and objective functions, the overall architectural design remains largely similar to that of VITA.\n\n**2. Insufficient Experimental Support**\n\nAdditional experiments are necessary to validate the proposed methods. In Tables 1 and 2, as noted by the authors, results on the MOVi-E dataset show a trend that significantly deviates from results on YTVIS-19. 
Section 4.3 discusses specific cases with limited examples to interpret these discrepancies. However, since these two sets of results exhibit opposite trends, it remains challenging to conclude that the proposed method is generally applicable. To address these contradictions, further analysis—such as statistical investigation—would be beneficial. Additionally, only two ablation studies are presented, even for critical hyperparameters, and key factors like the effect of varying the number K are not explored.\n\n**3. Limited Readability**\n\nThe overall structure of the paper hinders readability and comprehension. In particular, the experimental section is challenging to follow, as it combines main experimental results, qualitative findings, and ablation studies within the same section, making it difficult to discern the purpose and implications of each individual experiment. Furthermore, Figures 3 and 4 display segmentation results without the original samples, which complicates the reader’s ability to fully interpret the analysis.\n\n[1] Heo, Miran, et al. \"Vita: Video instance segmentation via object token association.\" Advances in Neural Information Processing Systems 35 (2022): 23109-23120." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a novel method for efficiently learning object-centric representations over videos and achieve state-of-the-art video object segmentation performance on YTVIS-19." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Object-Centric Learning for Videos},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2HdZPEQUig},\nnote={under review}\n}" }, "abstract": { "value": "This paper introduces a method for efficiently learning video-level object-centric representations by bootstrapping off a pre-trained image backbone, which we term Interpreter. 
It presents a novel hierarchical slot attention architecture with local learning and an optimal transport objective that yields fully unsupervised video segmentation. We first learn to compress images into image-level object-centric representations. Interpreter then learns to compress and reconstruct the object-centric representations for each frame across a video, allowing us to circumvent the costly process of reconstructing full frame feature maps. Unlike prior work, this allows us to scale to significantly longer videos without resorting to chunking videos into segments and matching between them. To deal with the unordered nature of object-centric representations, we employ Sinkhorn divergence, a relaxed optimal transport objective, to compute the distance between unordered sets of representations. We evaluate the resulting segmentation maps on video instance segmentation in both realistic and synthetic settings, using YTVIS-19 and MOVi-E, respectively. Interpreter achieves state-of-the-art results on the realistic YTVIS-19 dataset and presents a promising approach of scaling object-centric representation learning to longer videos." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Object-Centric Learning", "Representation Learning", "Video", "Segmentation", "Video Object Segmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a0ca16772d5fe0b949efa9455797c694464d3f4f.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Efficient Object-Centric Learning for Videos" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2HjRezQ1nj
Combining Text-based and Drag-based Editing for Precise and Flexible Image Editing
main
Active
Computer Vision;Generative Model;Diffusion Model;Image Editing.
generative models
3;5;6;8
5;3;4;4
2;3;3;3
2;3;2;3
2;2;3;4
5.5
4
2.75
2.5
2.75
-0.392232
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. What are some common failure cases for the editing, especially if the text and local edits conflict?\n2. How is the number of denoising iterations fixed for drag editing, and how does the impact change with fewer or more iterations?\n3. One example is shown to incorporate masks for editing; can it be explained how masks are incorporated in this framework?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novel Approach to combine local and global gradients: Building on text inversion methods to combine text and drag signals, CLIPDrag enables pixel-level control, offering both specific and contextually aware edits.\n2. Efficient Convergence: The fast point-tracking method improves the editing process by guiding handle points toward their target positions faster.\n3. Extensive Ablations: The paper has ablations for all the different components, such as point tracking, GLMS, and controls with edit and text, showing clear performance gains. \n4. Qualitative Results: The paper presents a representative set of results, allowing easy intuition and helping with the clarity of the paper." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces CLIPDrag, a novel image editing approach that integrates both text-based and drag-based controls to achieve more precise and flexible edits. Traditional text-based editing provides general guidance but often lacks specificity, while drag-based editing offers local control but can be ambiguous without context. CLIPDrag addresses these issues by using text as global guidance for overall image context and drag points for fine-grained, localized control. The model leverages a global-local motion supervision (GLMS) system and a fast point-tracking (FPT) method to streamline and accelerate the editing process. The paper is well written and easy to understand, and its comprehensive experimental results show that CLIPDrag outperforms both traditional drag- and text-based methods in accuracy and image fidelity. The detailed ablations make the hypothesis clear. The paper presents an interesting path for image editing, is theoretically grounded, and should be shared within the community." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The need for identity preservation is not justified beyond citing DragDiffusion; given the improvement of base models, the intuition behind it is lacking. \n2. Gradient accumulation is discussed assuming the latent code is continuous; why the gradient manipulation will still lead to plausible images is unclear. \n3. The assumption that nearest neighbors in FPT move monotonically towards the target is not explained, given that the optimization is highly non-linear." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please provide examples, with and without drag editing, for the following prompts: \"The sculpture is smiling and not showing his teeth.\" and \"The sculpture is smiling and not raising his head\"." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "For the first time, the paper provides an algorithm that integrates text-guided editing with drag-guided editing. The proposed editing algorithm attempts to provide more precise global editing and reduce ambiguity in local editing. The independent guidance or supervision of text and drag is combined interestingly by decomposing the global gradient into components perpendicular and parallel to the local gradient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Text-Drag Editing framework to address text-based and drag-based editing limitations. To achieve this, the authors introduce global-local motion supervision that integrates the semantic aspects of text with drag guidance. They utilize a novel approach of gradient fusion, combining gradients from text and drag conditioning based on their directional alignment to provide a unified gradient." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of Comprehensive Review of Diffusion-Based Image Editing Literature:\nThe paper does not provide an adequate overview of diffusion-based image editing methods. A more thorough review of recent approaches in diffusion-based image editing is necessary to strengthen its background and situate the proposed method within the broader field. Specifically, the authors should consider discussing recent methods, such as \nSINE: SINgle Image Editing With Text-to-Image Diffusion Models (Zhang et al., CVPR 2023), \nPaint by Example: Exemplar-based Image Editing with Diffusion Models (Yang et al., CVPR 2023), \nFlexiEdit: Frequency-Aware Latent Refinement for Enhanced Non-Rigid Editing (Koo et al., ECCV 2024), \nand RegionDrag: Fast Region-Based Image Editing with Diffusion Models (Lu et al., ECCV 2024). \nIncorporating these examples will provide a more robust foundation and context for the reader, enabling a clearer understanding of how the current approach builds upon or diverges from existing work.\n\n2. Unconvincing Example in Figure 1:\nThe example provided in Figure 1 does not convincingly illustrate the motivation of the study. The intention seems to highlight the limitations of drag-based and text-based editing approaches, yet the figure only demonstrates an instance where drag-based editing is ineffective. A more persuasive example might involve a scenario where drag-based editing produces a subtle change—such as adjusting a subject's smile—which could then be further refined by the proposed text-drag editing method to achieve a more detailed, natural effect. This change would clarify the benefits of text-drag editing over existing methods.\n\nAdditionally, the similarity between the proposed method's results and traditional drag-based editing in Figure 1 and the statue example raises questions about the added benefit of the proposed approach. 
If these similarities are not intentional, a different example or refinement of the illustrations might better demonstrate the unique advantages of the proposed method.\n\n3. Handling of Distinct Effect Regions in Text-Based and Drag-Based Editing\nThe paper does not adequately explain how it manages distinct effect regions associated with text-based and drag-based editing despite these methods likely targeting different areas in an image. Clarifying how these regions are defined, integrated, or adjusted during editing would provide more specificity and improve understanding of the algorithm's functionality. This discussion is crucial to distinguish the contribution of the combined editing approach.\n\n4. Suggested Comparative Experiments for Method Validation\nComparative experiments should include scenarios where text-based editing is applied after drag-based editing and vice versa to illustrate the proposed method's effectiveness better. This comparison would help demonstrate the practical advantage of combining both methods in the proposed approach and establish whether there are meaningful improvements when they are applied sequentially.\n\n5. Limited Novelty in Gradient Combination Approach\nThe novelty presented in Equation (6), which combines the two editing approaches by decomposing one gradient into components parallel and perpendicular to the other and then summing them, seems linear, and it is conceivable that a non-linear combination may provide a more effective result. Including alternative approaches as comparative experiments would strengthen the paper's case for its approach or help contextualize its performance relative to existing methods.\n\nThe paper introduces a combined text-drag editing approach but lacks a comprehensive literature review, convincing examples, clarity regarding region specificity, and evidence of sufficient novelty. 
Addressing these areas would help elevate the study’s contributions and clarify its position within diffusion-based image editing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Discrimination / bias / fairness concerns", "Yes, Privacy, security and safety", "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)", "Yes, Potentially harmful insights, methodologies and applications", "Yes, Responsible research practice (e.g., human subjects, data release)", "Yes, Research integrity issues (e.g., plagiarism, dual submission)", "Yes, Unprofessional behaviors (e.g., unprofessional exchange between authors and reviewers)", "Yes, Other reasons (please specify below)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The comparisons in this paper in insufficient, why only DragDiff is compared to in this paper? More comparisons should be added." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe motivation is clear and effective, combining text and drag editing to leverage the strengths of both approaches, achieving more precise edits.\n2.\tThe Global-Local Gradient Fusion method is innovative, merging global text and local drag gradients to enhance editing quality, with experiments showing notable improvements in performance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "CLIPDrag combines text and drag-based controls to improve image editing, using text for broad guidance and drag points for precise adjustments. The author introduces Global-Local Motion Supervision, which combines gradients from both text and drag inputs, and Fast Point Tracking to speed up convergence. This method eliminates common issues like vagueness in text-only edits and ambiguity in drag-only edits." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe illustration in Figure 2 is unclear in terms of workflow. If CLIP guidance is applied, the latent space should ideally be converted to the pixel domain to align with CLIP’s processing. However, the diagram uses SD rather than a VAE.\n2.\tCLIPDrag lacks comprehensive quantitative comparisons with other methods in image editing. The current evaluation only includes DragDiff in Figure 6, which is insufficient.\n3.\tThe ablation study also lacks more detailed quantitative comparisons. In Figure 8, the visual differences between (b) and (c) are subtle, making it hard to discern the impact of changes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the context of drag-based editing where maintaining the original identity of objects while manipulating specific features is a major challenge, your manuscript suggests the integration of text-based inputs to guide the editing process. Could you elaborate on how the addition of text signals specifically contributes to preserving the object's original identity during the edit? Additionally, are there specific conditions or types of text prompts that particularly enhance this preservation aspect within the CLIPDrag framework?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Innovative Integration**: The paper presents a compelling approach by combining text and drag inputs to guide image editing. This dual-input strategy addresses the limitations of each method when used independently, potentially offering more controlled and precise edits.\n2. **Technical Depth**: The introduction of GLMS shows a deep understanding of the challenges in image editing, particularly in handling the complexities associated with combining different types of editing signals.\n3. **Experimental Validation**: Extensive experiments, including ablation studies, demonstrate the effectiveness of CLIPDrag against state-of-the-art methods. The results are well-presented and support the claims of improved performance in terms of both precision and ambiguity resolution." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript introduces CLIPDrag, a novel method that integrates text-based and drag-based signals for image editing, leveraging both for precise control and reduced ambiguity. The method utilizes Global-Local Motion Supervision (GLMS) and Fast Point Tracking (FPT) to enhance the image editing process, aiming to outperform existing methods by combining the strengths of both editing approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Novelty of FPT**: The paper should acknowledge that searching for handle points along the path from handle points to targets has been previously explored in methods like DragGAN and DragDiffusion. To clarify the unique contributions of FPT, the authors should provide side-by-side comparisons of point searching strategies, highlighting any improvements or distinctions in their approach.\n\n2. **Comprehensive Comparisons**: While the paper compares CLIPDrag with some existing methods, it would benefit from more extensive comparisons or discussions with recent techniques such as InstantDrag, LightningDrag, StableDrag, and RegionDrag. Although these methods may use different training approaches or inputs, incorporating their text-supervision signals could demonstrate CLIPDrag's ability to address ambiguities present in these methods, showcasing its generalizability. Additionally, these methods should be thoroughly discussed in the related work section to provide a more complete context.\n\n3. **Performance Metrics**: The paper should include a discussion or report on inference time comparisons. This information is crucial for understanding the practical applicability of CLIPDrag in real-world scenarios and how it compares to other methods in terms of computational efficiency.\n\n4. 
**User Input Optimization**: While the text prompt is provided in DragBench, it's worth noting that the original DragGAN paper did not require text input. The additional text prompt in CLIPDrag may increase user effort. To address this, the authors could explore incorporating vision-language models like GPT-4V to automatically interpret the input image (as shown in the first column of Figure 4). This approach could significantly reduce user burden while maintaining the benefits of text-guided editing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024combining,\ntitle={Combining Text-based and Drag-based Editing for Precise and Flexible Image Editing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2HjRezQ1nj},\nnote={under review}\n}" }, "abstract": { "value": "Precise and flexible image editing remains a fundamental challenge in computer vision. Based on the modified areas, most editing methods can be divided into two main types: global editing and local editing. In this paper, we choose the two most common editing approaches (\\ie text-based editing and drag-based editing) and analyze their drawbacks. Specifically, text-based methods often fail to describe the desired modifications precisely, while drag-based methods suffer from ambiguity. To address these issues, we proposed \\textbf{CLIPDrag}, a novel image editing method that is the first to combine text and drag signals for precise and ambiguity-free manipulations on diffusion models. To fully leverage these two signals, we treat text signals as global guidance and drag points as local information. Then we introduce a novel global-local motion supervision method to integrate text signals into existing drag-based methods by adapting a pre-trained language-vision model like CLIP. 
Furthermore, we also address the problem of slow convergence in CLIPDrag by presenting a fast point-tracking method that enforces drag points moving toward correct directions. Extensive experiments demonstrate that CLIPDrag outperforms existing single drag-based methods or text-based methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Computer Vision", "Generative Model", "Diffusion Model", "Image Editing." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fcbad8308b7043bc870c5f6be924bfdfd97d3999.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Combining Text-based and Drag-based Editing for Precise and Flexible Image Editing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2IBdk8cUdC
Topo-Field: Topometric mapping with Brain-inspired Hierarchical Layout-Object-Position Fields
main
Active
Robotic scene understanding;Neural scene representation;Hierarchical representation;Topometric map
applications to robotics, autonomy, planning
3;5;5;5
5;3;3;4
1;2;2;2
1;2;2;2
1;3;2;1
4.5
3.75
1.75
1.75
1.75
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "It would be very important if you could justify those points in the weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors tackle the problem of hierarchical robotic scene understanding, which is an interesting and important topic.\n2. The proposed LOP is bio-inspired; to me this concept seems interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Topo-Field, a framework designed to enhance mobile robot navigation by integrating detailed semantic information about layouts, objects, and their positions (LOP) into a neural field representation. Interestingly, this structure is inspired by the role of postrhinal cortex neurons in encoding spatial layout. By querying a learned NeRF, Topo-Field constructs a semantically rich yet computationally efficient topometric map for hierarchical robotic scene understanding. Experimental results demonstrate its effectiveness in tasks like position inference, localization, and planning, bridging the gap between detailed scene understanding and efficient robotic navigation."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Unclear descriptions of target feature processing in Sec 4.1**\n1. How do you know if a 3D point belongs to the object, or the background? Do you use the GT annotations from the dataset? (the Matterport3D you show has that information I believe?)\n2. For the background features, you will get only a single feature for each image. How do you fuse those features from different views? \n3. Also, wouldn’t it make more sense to take per-pixel CLIP features from models like LSeg/OpenSeg, and fuse that information?\n\n**Unclear descriptions of neural scene encoding in Sec 4.2**\n1. Related to the questions above. In this section you mention that there are object-level local features and layout-level region features, and MHE seems to be a good representation for learning such a hierarchy. However, how exactly do you learn these two sets of features respectively under MHE? No details are given there.\n2. To learn MHE or NeRF in general, you need to actually shoot a ray for each pixel and sample along the ray. The final features are the weighted sum of all values along the ray, with volume rendering. How do you make sure your features on the 3D surface point are exactly the feature you render? \n\n**Unclear Topometric Mapping in Sec 4.3**\n1. Line 309, what is ${C_t, S_t}$? What are the differences to ${C_R, S_R}$ (I know this is the embeddings for region) in Line 304, and ${C_I, S_I}$ in Line 314? You did not specify them before. It is confusing and makes the paper hard to understand.\n2. Figure 3 (b) does not really match with what you write in “localization with text/image query” between Line 306-318. In the figure, all you get are the per-point features, which you try to match with query features, omitting many important details in your description. \n3. “Matching” in Figure 3 is never really discussed. What kind of matching?
Do you mean calculating the cosine similarity among the features, and taking the one with the highest score?\n\n**Text query localization in experiments**\n1. How do you decide the similarity threshold for the bounding box? Do you need to choose a different threshold for each text query? My own experience is that it is not really possible to get a single threshold for every query.\n2. One more thing: once you have the right threshold, how exactly do you get the bounding boxes out from thresholding? \n3. What are the “samples” in Table 1?\n4. How many queries are you considering for each scene, and how do you obtain the GT? The same question applies to Table 3 as well.\n\n**Image query localization in experiments** \nIf I understand correctly, you show the heatmap of the query. You claim that “Topo-Field constrains the localization results to a smaller range in the exact region”. However, that does not seem true to me. If you look at the washbasin in the bathroom, you also have many points highlighted in other regions, like the kitchen, and even some points in the bedroom. In such a case, how can you get such good numbers in Table 3? \n\n**Ablation study**\nHow come your ablation in Table 4 only evaluates the region prediction accuracy, which does not even require most parts of your method (objects, the graph you build, etc.)? Why not evaluate on other things as well? And even then, your default strategy does not seem to outperform any of the baselines by much, and even loses to the very simple baseline 1 in some scenes. \n\n**Writing** \n- Overall I think the writing is not good since many things are not justified well. \n- There are many cases where the author writes an opening parenthesis without a space before it, e.g. L214 …mapping(SLAM), L259 Multi-layer Perceptron(MLP), etc.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In line 235: what is C and S? I assume it is the output of CLIP and Sentence-BERT? How are the regions r_p defined? \n- In line 239: what is m in this equation? \n- In page 5, line 241: It would seem that the partitioning of the space requires human labeling? If that is the case, it is a significant limitation of the approach. \n- In line 242: Could you clarify the sentence \"the predicted implicit representation outputs are targeted to match the features from the pre-trained models separately\", what it means in practice or how this is achieved. I assume this is what is described in 4.2, but it would be good to make it unambiguous if that is the case, as F is not referenced in that section. \n- Could you make the description in Section 4.2 more specific and formal? The only description of inputs/outputs and process we are provided is via the diagram in figure 1, it would be good to have a proper formal description of the process, description of the architecture, and format/dimensionality of inputs and outputs for each component, as well as a formal algorithm\n- In line 254: Could you provide a more in-depth argument for using MHE? The computational cost of standard NeRFs is well known, but is MHE the only possible solution? 
How does it compare with other fast approaches discussed in the literature, like, for example, Gaussian Splatting? \n- In line 258: Could you describe the mapping in more formal terms? Fig. 2 only provides a schematic description of the process. \n- In line 268: How is the similarity between E_pi and {C_R, S_R} calculated? It would be good to have a formal equation for this operation." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The combination of structural and semantic information in a way that is efficient for robotic system to query and plan on is a critical problem for robotics. \n- The approach seems to perform very well on the evaluation, clearly outperforming the presented baselines on those tasks. \n- The proposed approach is also reasonable in computational terms, as all experiments were performed on a single GPU (no information is given on training time though)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article proposes a novel approach for encoding scene information into a topometric map, for improving localisation and planning. The proposed approach is based on a Layout-Object-Position (LOP) approach. Layout information from knowledge of the environment's rooms. Object information from semantic segmentation (Detic) and a joint encoding of the segmented object patch using clip and of the object-region labels using Sentence-BERT. Finally, position information is produced by a 3D reconstruction of the scene using Multi-scale Hashing Encoding (MHE). This information is combined into a single Topometric map coined Topo-Field. \nThe proposed method is evaluated for the inference of position attributes and localisation and appears to clearly outperform the presented baselines." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The description of the approach is lacking specifics, and the reader has to infer the architecture and information flow from the provided diagrams rather than a formal description in mathematical and algorithmic terms. \n- It is not fully clear how the partitioning of the environment (location) into rooms is performed and how well it would generalise to new environments. \n- The motivation of the work from neuroscience is interesting, but remains very vague. Little discussion is provided on how well the proposed approach may model the neural structures it claims to be inspired by. \n- The performance is very good compared to the discussed baselines, but it would seem that the proposed approach also benefits from significantly more task-specific information for those tasks (i.e., the room information is provided directly). This is not a critical issue in my view, but it would be good to discuss the limitations of the presented baselines and the issue of fairness of comparison some more. \n- I note that the reference for Reimers & Gurevych should probably cite the published version of the article rather than the pre-print." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The experimental results are impressive when compared to baseline performances; however, it is unclear whether the benchmarks used are newly proposed by the authors or follow existing ones, which raises concerns about the fairness of the evaluation. What are the primary factors driving the significant improvements?\n\nComputation: the paper mentions a large batch size of 12,544. It would be helpful to clarify what specific data is contained within this batch size." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper addresses a compelling problem in semantic mapping and its applications for enabling robots to navigate real-world environments. The experimental results demonstrate impressive improvements in performance. Additionally, the supplementary materials, such as the code snippets and prompts, enhance the understanding of the proposed method's details and implementation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method for training a neural implicit field that utilizes supervisory signals from pre-trained foundation models to capture semantic features. The proposed model is applicable to several critical downstream tasks in robotics, including text/image query localization, semantic navigation, and path planning. Experimental results demonstrate significant improvements in performance metrics, supported by qualitative evidence."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite its potential, the system heavily relies on various input types, such as annotated room maps and camera poses, as well as off-the-shelf object detection methods for generating bounding boxes and masks. This dependence poses challenges in real-world applications, where inaccuracies in these inputs can lead to errors. Additionally, the system's reliance on ChatGPT complicates debugging and explanation when errors occur in complex real-world environments.\n\nEncoding semantic information and supervising it with pre-trained features alleviates some annotation burdens; however, this approach is already a common practice in the field of implicit representation for semantic mapping [1][2]. The overall system resembles a large engineering project, making it challenging to distill its theoretical contributions.\n\n[1] V. Tschernezki, I. Laina, D. Larlus and A. Vedaldi, \"Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations,\" 2022 International Conference on 3D Vision (3DV), Prague, Czech Republic, 2022, pp. 443-453, doi: 10.1109/3DV57658.2022.00056.\n[2] Zhu, Siting, et al. \"Sni-slam: Semantic neural implicit slam.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\nTable 4 could benefit from clearer labeling, where Baselines 1-4 are not explicitly defined. A reference to Figure 7 could help.\n\nThe authors frequently reference the postrhinal cortex from the biological literature, but the connection to the proposed method is not clearly articulated. Topological mapping is indeed a common computer vision task relevant to navigation." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "With the issues addressed above, the authors should revise the paper accordingly." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "+ The idea of constructing a topometric map using the implicit neural field is interesting" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper targets an interesting problem of topometric mapping but is not ready for publishing. The quality is poor regarding writing, organization, annotations, and experimental setups." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The writing is far from satisfactory. The corresponding authors should revise the manuscripts besides the abstract and introduction.\n+ Though the paper proposes a Topo-field to integrate layout-object-position, this representation is not clearly presented in Sec. 3. The definition of the topometric map (or the graph structure in Eq. 3) is vague and hard to follow, and the generation of the graph from dense field F (L199) is unclear. 
Note that since the implicit neural field F is similar to previous methods with a distilled feature field, the novelty and the contribution of the proposed method are unclear.\n+ The hierarchical structure of point-object-room is common in scene graph generation. However, no relevant work (e.g., CLIO, HOV-SG) is referred to in the related work section or the experiments section.\n+ Multiple notations are not formally defined in the paper (e.g., the functions $C_t, S_t$). The training stage in Sec. 4.4 should be carefully revised to make it clear.\n+ The experimental setups lack a clear description, and comparisons against recent methods are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024topofield,\ntitle={Topo-Field: Topometric mapping with Brain-inspired Hierarchical Layout-Object-Position Fields},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2IBdk8cUdC},\nnote={under review}\n}" }, "abstract": { "value": "Mobile robots require comprehensive scene understanding to operate effectively in diverse environments, enriched with contextual information such as layouts, objects, and their relationships. While advancements like Neural Radiance Fields (NeRF) offer high-fidelity 3D reconstructions, they are computationally intensive and often lack efficient representations of traversable spaces essential for planning and navigation.
In contrast, topological maps generated by LiDAR or visual SLAM methods are computationally efficient but lack the semantic richness necessary for a more complete understanding of the environment.\nInspired by neuroscientific studies on spatial cognition, particularly the role of postrhinal cortex (POR) neurons that are strongly tuned to spatial layouts over scene content, this work introduces Topo-Field, a framework that integrates Layout-Object-Position (LOP) associations into a neural field and constructs a topometric map from this learned representation. LOP associations are modeled by explicitly encoding object and layout information, while a Large Foundation Model (LFM) technique allows for efficient training without extensive annotations. The topometric map is then constructed by querying the learned NeRF, offering both semantic richness and computational efficiency.\nEmpirical evaluations in multi-room apartment environments demonstrate the effectiveness of Topo-Field in tasks such as position attribute inference, query localization, and topometric planning, successfully bridging the gap between high-fidelity scene understanding and efficient robotic navigation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robotic scene understanding", "Neural scene representation", "Hierarchical representation", "Topometric map" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f20a3f3e9466efaa3a6b6cd80e4329fd31b9307d.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/328f1c520b9baeb43a71cf066b7043781fbdc581.zip" }, "title": { "value": "Topo-Field: Topometric mapping with Brain-inspired Hierarchical Layout-Object-Position Fields" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2IUO0Iq5Bq
Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix
main
Active
Multi-view clustering;Fast clustering
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5;5
5;5;4;4
1;3;3;3
2;2;2;2
2;3;3;3
4
4.5
2.5
2
2.75
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. This paper is easy to follow, and the idea is straightforward.\n\nS2. The introduction of the framework is clear, and the equations are well presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix (FTMVC-APTM) method. Within this model, to reduce the computational complexity, the relationships between data points and the selected anchors in different views are captured and recorded by bipartite similarity graphs. Based on these probability graphs, the cluster labels are transferred from anchors to samples, and the membership matrices can be obtained without the need for post-processing. To further exploit complementary information across views, the membership matrices are stacked into a tensor and constrained by a Schatten p-norm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. Overall, the idea of this paper is straightforward and clear.
However, the novelty of FTMVC-APTM is limited, and the presented motivations/problems have already been raised and addressed in prior work.\n\nW2. The datasets used are too small, and the experiments provided in this paper are not convincing enough to show the superiority of the proposed method.\n\nW3. The running time comparison experiment is missing.\n\nW4. More recent fast multi-view clustering methods should be introduced and compared in the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tThe sample size of the existing datasets is small, and the authors should add experiments on large-scale datasets.\n2.\tThe available experimental results are not sufficient to support the authors' claims. I suggest the authors add some visualizations or other experiments.\n3.\tCompared to existing methods, the authors' innovation is unclear. I suggest that the authors carefully reconsider the motivation and contributions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThis paper is well organized.\n2.\tThis paper presents an exploration of fast tensor clustering.\n3.\tThe proposed methodology is somewhat enlightening."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a tensor-based method is proposed to solve the MVC problem. The authors propose a simple and efficient method and verify the rationality and superiority of the method through experimental results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have some concerns about the paper, as follows:\n\n1. This paper is also limited in its novelty. The affiliation matrix is not a new method; it has been widely used [1,2]. The tensor Schatten-p norm [3,4] is also a common way to deal with low rank. So, the innovation made by the authors is more incremental in nature.\n\n[1] Zhao, J. B., & Lu, G. F. (2022). Clean and robust affinity matrix learning for multi-view clustering. Applied Intelligence, 52(14), 15899-15915.\n\n[2] Li, X., Zhang, H., Wang, R., & Nie, F. (2020). Multiview clustering: A scalable and parameter-free bipartite graph fusion method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 330-344.\n\n[3] Xie, Y., Gu, S., Liu, Y., Zuo, W., Zhang, W., & Zhang, L. (2016). Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Transactions on Image Processing, 25(10), 4842-4857.\n\n[4] Li, X., Ren, Z., Sun, Q., & Xu, Z. (2023). Auto-weighted tensor Schatten p-norm for robust multi-view graph clustering. Pattern Recognition, 134, 109083.\n\n2. The experimental results in this paper are inadequate. For example, the authors emphasize that their method enhances the interpretability of clustering. However, this needs to be verified experimentally. The superior performance of clustering alone may not provide effective support.\n\n3. In addition, the authors emphasize that their method requires only linear complexity and has a fast computational speed.
However, the sample size of the dataset used is small, and I suggest the authors add experiments on large-scale datasets such as AwA [5] or YouTube [6].\n\n[5] https://cvml.ista.ac.at/AwA/\n\n[6] https://www.cs.tau.ac.il/~wolf/ytfaces/" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. This paper claims that this method is more interpretable than other complex multi-view clustering methods. Specifically, how does the membership matrix generated by the probability matrix help explain the final clustering structure?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed method is clearly explained and easy to understand.\n\n2. This paper carefully analyzes the computational complexity of the method and illustrates the potential advantages of FTMVC-APTM in data scale expansion.\n\n3. Experimental results on eight multi-view datasets demonstrate its effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new multi-view clustering method called Fast Tensor Multi-view Clustering Based on Anchor Probability Transformation Matrix (FTMVC-APTM).
The method of directly calculating the membership matrix using the probability matrix avoids complex post-processing and enhances clustering interpretability. The nuclear norm and Schatten p-norm regularization are introduced to ensure the balance and robustness of the clustering results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main contributions of this paper is to combine the anchor probability transformation matrix and the Schatten p-norm regularization of the multi-view tensor structure. However, these ideas are not new in the field of multi-view clustering, and the combination of anchor selection, tensor structure and probability matrix has been applied in some methods[1][2].\n[1] Nie, Feiping, et al. \"Fast clustering with anchor guidance.\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).\n[2] Yu, Weizhong, et al. \"Multi-View Fuzzy Clustering Based on Anchor Graph.\" IEEE Transactions on Fuzzy Systems (2023).\n\n2. This paper lacks ablation experiments on the key design of using probability matrix to calculate membership matrix. Given that this method is a core contribution of FTMVC-APTM, conducting relevant ablation experiments will help evaluate the actual impact of this strategy on the model performance.\n\n3. Although this paper demonstrates the superior performance of FTMVC-APTM on multi-view datasets, the scale of these datasets is relatively limited (the number of samples ranges from a few hundred to a few thousand), which fails to fully verify the performance of the method on large-scale data. It is recommended to supplement the experiments on larger datasets, such as the YTF dataset and the Caltech dataset.\n\n4. It is recommended that the authors appropriately increase the visualization results of clustering to help readers more intuitively understand the performance and clustering structure of the proposed FTMVC-APTM method." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "A good framework for this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix (FTMVC-APTM) method to address some key challenges in multi-view clustering, such as lack of interpretability and the high computational complexity of large-scale data. Extensive experiments on various datasets are conducted to demonstrate the effectiveness and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The innovation may not be enough for such a conference; this work simply combines a lot of existing work. For example, nuclear norm and Schatten p-norm regularization are both very common regularization terms, and the authors don't discuss in depth why they use these two terms, e.g., why Schatten p-norm regularization, given that there are many new low-rank tensor norms [1][2].\n2. The article is poorly expressed; for example, it is unclear whether the author employs the Schatten p-norm or the weighted tensor Schatten p-norm.
The introduction states the Schatten p-norm, but Eq. 6 uses the weighted tensor Schatten p-norm from [3]. These are two completely different concepts. If you use the weighted tensor Schatten p-norm, how did you determine the weight values for the different views?\n3.\tThis work claims “fast tensor-based multi-view clustering”, but the datasets are only 4k in size and there is no runtime comparison, which is hard to believe!\n4.\tThe author states “Each experiment was replicated 5 times”, so why do the results in Table 3 not include variance?\n5.\tIn Figure 2, the performance always reaches its best when anchor rate = 1, which means the anchor is useless, and the complexity is also O(n^2 log n). This result proves that the work proposed by the authors is not valid; at least, it contradicts the author's “fast” statement.\n\n\n[1] Guo J, Sun Y, Gao J, et al. Logarithmic Schatten-p Norm Minimization for Tensorial Multi-View Subspace Clustering[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(3): 3396-3410.\n\n[2] Ji, Jintian, and Songhe Feng. \"Anchor structure regularization induced multi-view subspace clustering via enhanced tensor rank minimization.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n\n[3] Gao, Quanxue, et al. \"Enhanced tensor RPCA and its application.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 43.6 (2020): 2133-2140."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024fast,\ntitle={Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2IUO0Iq5Bq},\nnote={under review}\n}" }, "abstract": { "value": "Multi-view clustering effectively integrates information from multiple data representations, yet current methods face key challenges. They often lack interpretability, obscuring how clusters are formed, and fail to fully leverage the complementary information across views, limiting clustering quality. Additionally, large-scale data introduces high computational demands, with traditional methods requiring extensive post-processing and manual tuning.To address these issues, we propose a novel multi-view clustering approach based on probability transition matrices. By selecting anchor points and constructing bipartite similarity graphs, we can capture the relationships between data points and anchors in different views and reduce computational complexity. Through probability matrices, we efficiently transfer cluster labels from anchors to samples, generating membership matrices without the need for post-processing. We further assemble these membership matrices into a tensor and apply a Schatten \\(p\\)-norm constraint to exploit complementary information across views, ensuring consistency and robustness. To prevent trivial solutions and ensure well-defined clusters, we incorporate nuclear norm-based regularization. Extensive experiments on various datasets confirm the effectiveness and efficiency of our method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multi-view clustering", "Fast clustering" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4f6cce2849435384f8c6e704c1009c8da59dd5d0.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/9ea226b38a34a2dcae33c178dcf139521d8e9c68.zip" }, "title": { "value": "Fast Tensor-Based Multi-View Clustering with Anchor Probability Transition Matrix" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2IhkyiF3to
Mutual Information Preserving Neural Network Pruning
main
Active
structured pruning;model compression;mutual information
other topics in machine learning (i.e., none of the above)
3;3;5;5
4;4;3;4
3;2;2;3
2;2;2;2
3;2;2;2
4
3.75
2.5
2
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As mentioned in Weakness, I have posed concrete questions for authors. Thanks." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors provide a detailed analysis on the motivation and how the method works, especially on how the mutual information is preserved. And they conduct a lot of experiments to demonstrate the effectiveness of their methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose MIPP to enable real-time pruning, whole-layer pruning and global re-training guarantees for improving the performance of network pruning. Through comprehensive experimental evaluation, they demonstrate that MIPP can effectively prune networks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Experimental settings and result presentation is not clear. \n - What is untrained network, trained network, pretrained network meaning in Figure 2 and Figure 4? \n - What is LC in Figure 3?\n - In Figure 2, for column 3 and 4, seems the proposed method does not perform well significantly than the baselines. 
\n - Besides, the latest baseline is from 2022; are there any recent works from 2023 or 2024 to compare against?\n\n2) In Figure 4, to the best of my knowledge, state-of-the-art ResNet50 for ImageNet achieves about 76% accuracy; however, the proposed approaches can achieve nearly 88% accuracy. Can you explain the settings in detail, and what is the percentage of parameters reduced and what are the MACs reduced? \n\nIn summary, it is quite unclear how the proposed approaches compare with SOTA. Besides, some common metrics are missing from the comparisons, e.g., FLOPs, #params, MACs. Also, the baselines seem outdated. If the authors could address my concerns, I can improve my rating." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Can this approach work on other types of architectures like ViT?\n* How might it perform with different activation functions?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The motivation of the paper is clear - global pruning approaches do have their limitations and the proposed approach can effectively avoid those. \n+ The idea of looking at the MI between activations of adjacent layers is interesting. It also makes sense to consider nodes that can maintain such MI.
\n+ The theoretical analysis seems to make some good points about the observations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a pruning approach based on mutual information between the activations of adjacent layers. The proposed approach has been evaluated on a number of models and datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed approach has only been tested on some early architectures. It is not clear how it can be generalized to other models and datasets. \n- It is also not clear if the proposed approach is sensitive to different activation functions.\n- The comparison with baselines seems quite limited, covering only a few approaches." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors compare MIPP in PaI and post-trained pruning, in separate settings, with SoTA methods from recent years, such as 2023 or 2024?\n2. Most experiments are performed on small datasets and networks; showing the pruning task on more computationally intensive architectures or datasets, such as EfficientNet-B7 and ImageNet-1K tasks, would be better. As the authors claimed, the method works for trained networks; pruning a network that is pre-trained on a large-scale dataset such as ImageNet-21K could be interesting. \n3. 
Will this method work for Vision Transformers?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper provides an interesting perspective on neural network pruning. It considers the activations of the downstream layers, which allows pruning on trained and untrained networks; the idea is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Mutual Information Preserving Pruning (MIPP), an activation-based pruning method that maintains mutual information between adjacent layers in neural networks, ensuring retrainability. Unlike traditional methods, MIPP dynamically selects nodes based on their contribution to information transfer, addressing limitations such as layer collapse and lack of adaptability. Experimental results demonstrate that MIPP outperforms state-of-the-art pruning techniques on various vision models, including ResNet50 on ImageNet, with implementation details to be released upon publication." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors claim the method is compared with state-of-the-art techniques, yet most of the cited literature is from before 2022; many recent works, such as PHEW or NPB in pruning at initialization, are missing, and indeed there are more works on pruning trained networks in recent years. I strongly suggest that the authors provide a more thorough review of recent works.\n2. Although the method is interesting because it works for both trained and untrained networks, the motivation for this is not clear. \nPaI tasks aim to find networks before training to reduce training costs, while post-trained pruning aims to preserve the best performance of trained networks. Is MIPP better than SoTA methods on each side?"
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The overall writing is clear for effective visualizations and well-structured presentation.\n2. The paper conducts a wide range of experiments to validate the algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MIPP (Mutual Information Preserving Pruning), a structured pruning method for neural networks. The key idea is to preserve mutual information between adjacent layers' activations during pruning by selecting nodes that transfer entropy to the subsequent layer. \nThe method operates by iteratively pruning from outputs to inputs, using transfer entropy redundancy criterion (TERC) with MI ordering to select nodes. Comprehensive experiments validate MIPP's effectiveness on both trained and untrained networks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some important references [a] are missing, which makes the novelty of the paper questionable. For example, [a] is also about using the mutual information to do filter pruning, what is the difference? 
Does the proposed method achieve higher performance? Why and how?\n\n2. The compared methods are rather old. The authors claim \"For models trained on datasets smaller than ImageNet, we compare the performance of our method to SynFlow (Tanaka et al., 2020), GraSP (Wang et al., 2022), ThiNet (Luo et al., 2017) and SOSP-H (Nonnenmacher et al., 2022),\". Why not include papers published in 2024, such as [b]?\n\n3. The performance is not good. In [a], the performance of ResNet-50 on ImageNet is much higher than ThiNet's. Why compare only with ThiNet in Figure 6?\n\n[a] Enhancing CNN efficiency through mutual information-based filter pruning, Digital Signal Processing 2024\n[b] Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mutual,\ntitle={Mutual Information Preserving Neural Network Pruning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2IhkyiF3to},\nnote={under review}\n}" }, "abstract": { "value": "Model pruning is attracting increasing interest because of its positive implications in terms of resource consumption and costs. A variety of methods have been developed in the past years. In particular, structured pruning techniques discern the importance of nodes in neural networks (NNs) and filters in convolutional neural networks (CNNs). Global versions of these rank all nodes in a network and select the top-$k$, offering an advantage over local methods that rank nodes only within individual layers. By evaluating all nodes simultaneously, global techniques provide greater control over the network architecture, which improves performance. However, the ranking and selecting process carried out during global pruning can have several major drawbacks. 
First, the ranking is not updated in real time based on the pruning already performed, making it unable to account for inter-node interactions. Second, it is not uncommon for whole layers to be removed from a model, which leads to untrainable networks. Lastly, global pruning methods do not offer any guarantees regarding re-training. In order to address these issues, we introduce Mutual Information Preserving Pruning (MIPP). The fundamental principle of our method is to select nodes such that the mutual information (MI) between the activations of adjacent layers is maintained. We evaluate MIPP on an array of vision models and datasets, including a pre-trained ResNet50 on ImageNet, where we demonstrate MIPP’s ability to outperform state-of-the-art methods. The implementation of MIPP will be made available upon publication." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "structured pruning", "model compression", "mutual information" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/cd9ee12bddf5dba5c38e3f6d806372eb8edf7b2e.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Mutual Information Preserving Neural Network Pruning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2IoFFexvuw
Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization
main
Active
Flow Matching;Reinforcement Learning;Wasserstein Regularization;Exploration-Exploitation Trade-off;Fine-Tuning;Generative Model
reinforcement learning
5;5;6
4;2;4
3;2;3
3;2;2
3;2;2
5.333333
3.333333
2.666667
2.333333
2.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problem of finetuning conditional flow matching models is of general interest to the community. How to preserve the generation diversity and avoid model collapse is a challenging problem.\n\n- Combining the reward-weighted matching loss and the Wasserstein distance regulatization seems to be empirically effective. The experimental results look good.\n\n- There are quite a few theoretical justifications for the proposed method. Although I didn't check carefully, I find them to be quite reasonable claims." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a way to finetune conditional flow matching models to maximize some user-defined reward function. Specifically, the paper combines two techniques: (1) reward-weighted conditional flow matching; and (2) a constraint that bounds the pretrained model and the finetuned model. The work gives some theoretical analyses to justify the proposed method is grounded and some experiments also show its effectiveness." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The contribution of the paper seems ad-hoc to me. There is quite little connection between the reward-weighted matching loss and the Wasserstein regularization. I find both techniques independent of each other, so I find the motivation of the work quite weak. Could the author elaborate more on why these two techniques should be used together (other than empirically well-performing)?\n\n- Given that the reward-weighted matching loss and the Wasserstein regularization are unrelated contributions, I will be interested to see how much gain each individual component contribute to the performance gain? Could the authors conduct some ablation study?\n\n- I find it less convincing for the performance gain, since there is no compelling baselines for comparison. For example, the paper claims that the Wasserstein regularization performs well. How about other discrepancy measure? How is the Wasserstein distance a good choice here? I think more discussion on the motivatiojn will help the reader gain more insights. \n\n- Whlle I am no expert in this domain, I am wondering whether there are other stronger baselines to compare to. The problem this paper studies doesn't seem to be new, so I think there will be some other finetuning methods for comparison, say [Imagereward: Learning and evaluating human preferences for text-to-image generation, NeurIPS 2023].\n\n- The experiments are relatively small-scaled. I don't know how the proposed method scales with the size of the model/dataset. Could the authors conduct some experiments to study the scaling performance of this finetuning technique?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Besides the points raised in the weakness section:\n\n1. It is probably better to also show quantitative metrics like diversity scores (e.g., feature pairwise distances) and FID scores.\n2. In Eqn 10, it is probably more aesthetic to write \\theta_\\text{ft} and \\theta_\\text{ref} (for the subscripts), instead of \\theta_{ft} and \\theta_{ref}.\n3. W2 distance is great, but I wonder if it makes a big difference if one instead uses KL divergence (both theoretically and empirically)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper theoretically analyzes probably one of the most intuitive methods of reward reweighting, and by introducing a regularization loss on the finetuned distribution, shows that this naive method can be extended to the online setting. To support the claims, the paper does sufficient amount of experiments on different small-scale image datasets and different reward functions. 
In particular, the paper shows that their online variant is better than the offline one.\n\nCompared to baselines like DDPO that require one to specify the number of sampling steps for finetuning, the proposed method finetunes a model in a way very similar to flow matching -- to sample images from a \"data\" distribution and some random t in [0,1], and to compute the matching loss." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method to perform reward finetuning of flow-based models. The idea starts with the reward-weighted version of the standard flow matching loss (i.e., doing simple importance sampling) and, to remove the dependency on pretraining datasets and to perform online training, changes the sampling distribution from the original data distribution to the finetuned sampling policy. Such a strategy proves very prone to overfitting, as the finetuned distribution collapses into a single mode if it is trained for too many epochs. Therefore, the authors further propose to regularize the sampling policy to be not too far away from the pretrained one (using a Wasserstein distance). The paper discusses some theoretical results like the asymptotic behaviors of the proposed methods and empirically shows that the proposed method can be applied to finetuning of flow matching models pretrained on MNIST and CIFAR-10." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper does not compare the proposed method against any other methods, for instance DDPO (running PPO for diffusion finetuning). While one may argue that DDPO is not designed for continuous flow models, one eventually samples from CNFs with some discretization and can therefore construct an MDP for DDPO finetuning, not to mention some more recent methods. 
On flow matching, there is a very recent paper [1] that does reward finetuning for flow matching (though it should be considered a concurrent work). There also exist some more recent papers on reward finetuning to compare with, and I feel that showing at least one of them would be great.\n\nThe proposed method seems to be a bit sensitive (in theory) to hyperparameter tuning due to its online nature. It is a bit unsatisfactory that the resulting distribution (Eqn 12 in the paper) depends on the number of epochs. While in practice it is not a super big concern, an objective that guarantees convergence to a specific distribution (e.g. P_pretrained(x) * exp(lambda * r(x)) / Z) is generally considered better.\n\nMany of the baselines are tested on large-scale models like StableDiffusion, and many of them can converge at a reasonably fast speed on simple reward functions like the Aesthetic Score used in DDPO. The paper fails to show results in these more realistic settings (though it probably requires some compute, one might be able to find a smaller model to do experiments).\n\n[1] Adjoint Matching: Fine-tuning Flow and Diffusion Generative Models with Memoryless Stochastic Optimal Control. Carles Domingo-Enrich, Michal Drozdzal, Brian Karrer, Ricky T. Q. Chen. https://arxiv.org/abs/2409.08861" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. 
What kinds of reward functions can be fine-tuned without collapse under the proposed W_2 regularization method? \n2. Is this method capable of performing fine-grained fine-tuning tasks, such as controlling specific semantic parts of images?\n3. Why not use the W_1 distance for regularization?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper effectively identifies and addresses some key issues in fine-tuning continuous flow-based generative models, such as policy collapse and computational inefficiency.\n2. The introduction of the online reward-weighting mechanism and Wasserstein-2 distance regularization is well-suited for flow matching models, balancing exploration and exploitation and mitigating the policy collapse problem.\n3. The theoretical analyses are rigorous and provide a solid foundation for the proposed method. The empirical results across various tasks are convincing and demonstrate the method's effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new online reinforcement learning (RL) fine-tuning method for continuous flow-based generative models, named Online Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization (ORW-CFM-W2). It addresses the challenges of policy collapse and high computational costs associated with traditional fine-tuning methods. The authors propose integrating RL within the flow matching framework, utilizing an online reward-weighting mechanism to focus on high-reward regions and a Wasserstein-2 distance regularization to balance exploration and exploitation. 
The paper provides theoretical analyses and empirical results across various tasks, demonstrating the effectiveness of the proposed method in achieving optimal policy convergence with controlled trade-offs between reward maximization and generation capacity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Potential Overemphasis on Theoretical Analysis:\nWhile the theoretical underpinnings are robust, the paper might overly focus on the theoretical aspects at the expense of practical considerations. Balancing the presentation (e.g., moving Section 4.6 to the appendix) to include more case studies could make the findings more relatable to a broader audience.\nLack of Comparative Analysis with Other Regularization Techniques:\nThe paper introduces W_2 distance regularization but does not compare its effectiveness with other potential regularization methods. Including such comparisons could strengthen the paper's contribution by positioning it within the broader landscape of regularization strategies.\nNarrow Empirical Validation:\nThe empirical validation is commendable, but the paper could benefit from testing the method across a wider range of datasets (e.g., the CelebA face dataset) and tasks to further establish the generalizability and robustness of the approach." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel and theoretically sound method for fine-tuning flow matching generative models using a reward-weighted mechanism and Wasserstein-2 regularization to optimize user-defined rewards while preventing overoptimization." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024online,\ntitle={Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2IoFFexvuw},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in reinforcement learning (RL) have achieved great success in fine-tuning diffusion-based generative models. However, fine-tuning continuous flow-based generative models to align with arbitrary user-defined reward functions remains challenging, particularly due to issues such as policy collapse from overoptimization and the prohibitively high computational cost of likelihoods in continuous-time flows. In this paper, we propose an easy-to-use and theoretically sound RL fine-tuning method, which we term Online Reward-Weighted Conditional Flow Matching with Wasserstein-2 Regularization (ORW-CFM-W2). Our method integrates RL into the flow matching framework to fine-tune generative models with arbitrary reward functions, without relying on gradients of rewards or filtered datasets. By introducing an online reward-weighting mechanism, our approach guides the model to prioritize high-reward regions in the data manifold. To prevent policy collapse and maintain diversity, we incorporate Wasserstein-2 (W2) distance regularization into our method and derive a tractable upper bound for it in flow matching, effectively balancing exploration and exploitation of policy optimization. We provide theoretical analyses to demonstrate the convergence properties and induced data distributions of our method, establishing connections with traditional RL algorithms featuring Kullback-Leibler (KL) regularization and offering a more comprehensive understanding of the underlying mechanisms and learning behavior of our approach. 
Extensive experiments on tasks including target image generation, image compression, and text-image alignment demonstrate the effectiveness of our method, where our method achieves optimal policy convergence while allowing controllable trade-offs between reward maximization and diversity preservation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Flow Matching", "Reinforcement Learning", "Wasserstein Regularization", "Exploration-Exploitation Trade-off", "Fine-Tuning", "Generative Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/acb752e5f96e9fadaab150049f0aab5b8f4d2a39.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/955bc0fb0b9a7e6545e2c0f3cad0f26510818912.pdf" }, "title": { "value": "Online Reward-Weighted Fine-Tuning of Flow Matching with Wasserstein Regularization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2J18i8T0oI
Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures
main
Active
Mechanistic Interpretability;Sparse Autoencoders;Universality;State Space Models
interpretability and explainable AI
5;5;6;8
4;3;3;3
3;2;3;4
2;2;3;3
3;2;3;3
6
3.25
3
2.5
2.75
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why did you choose OpenWebText as your primary dataset for analysis? How might the choice of OpenWebText as the dataset influence your results? Have you tested if the feature similarities hold across different domains (e.g., code, mathematics, or structured data)? Would analyzing domain-specific text reveal different patterns of architectural universality?\n\n2. Have you performed any statistical significance tests to support your claims of feature similarity and universality?\n\n3. How generalizable are your findings to other tasks beyond language modeling?\n\n4. Can you provide more details on why the \"Off-by-One\" motif exists in Mamba models?\n\n5. Is there a risk that the Sparse Autoencoder pre-processing itself may impose a degree of alignment between features in Transformers and Mambas? Could the sparsity constraint inadvertently enhance apparent similarity?\n\n6. How do you expect your findings to scale to larger models? Did you observe whether model size impacts universality between architectures? Could smaller or larger versions of Transformers and Mambas exhibit different degrees of feature similarity?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The identification of the \"Off-by-One motif\" in Mamba models is a unique contribution that highlights nuanced differences between architectures.\n\n- The introduction of a complexity-based interpretation for understanding feature similarity differences is innovative.\n\n- The circuit-level analysis of Mamba models, revealing structural analogies with Transformers, is good and adds depth to the study. The validation of feature similarity and its correlation with universality further strengthens the study.\n\n\n- The findings of this paper have implications for the field of neural network interpretability. By demonstrating that different architectures can converge to similar algorithms and features, the study provides valuable insights into the generalizability of mechanistic findings across models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the \"universality hypothesis\" in mechanistic interpretability, which suggests that different neural network architectures may converge to implement similar algorithms when tasked with analogous objectives. The authors focus on two mainstream architectures for language modeling: Transformers and Mambas. They propose using Sparse Autoencoders (SAEs) to extract interpretable features from these models and demonstrate that a significant portion of features are shared between the two architectures. The paper validates the correlation between feature similarity and universality and delves into the circuit-level analysis of Mamba models, finding structural analogies with Transformers, particularly in induction circuits. 
\n\nThe paper's contributions include:\n- Introduction of a novel metric to isolate and quantify feature universality in the context of architectural variations.\n- Empirical evidence shows that Transformer and Mamba models learn similar features through the application of SAEs.\n- Circuit analysis of Mamba models reveals structural analogies and nuanced differences compared to Transformer circuits.\n- Support for the universality hypothesis by demonstrating cross-architecture feature similarity and identifying the \"Off-by-One motif\" in Mamba models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the paper focuses on Transformers and Mambas, it would benefit from a broader examination of additional architectures. Including a more diverse set of models, such as recurrent neural networks (RNNs) or convolutional neural networks (CNNs), could strengthen the universality hypothesis by offering a more comprehensive understanding of feature similarity across a wider range of neural networks. This would enhance the generalizability of the findings.\n\t\n- The paper utilizes OpenWebText for correlation analysis but does not discuss how the choice of dataset might affect the results. A more detailed examination of the potential biases and limitations introduced by the dataset choice would provide a clearer context for the findings and ensure that the results are not overly dependent on a specific dataset.\n\n- The claims of feature similarity and universality would be more robust if supported by statistical significance tests. Including such tests would provide stronger evidence for the observed correlations and enhance the credibility of the conclusions.\n\n- SAE-related technical gaps:\n - The paper does not include ablation studies on SAE hyperparameters (dictionary/code size, training duration, etc.). 
Conducting these studies would help to understand the sensitivity of the results to different hyperparameter settings and ensure the robustness of the findings.\n - There is no discussion of how SAE reconstruction quality relates to feature similarity. Addressing this relationship would provide insights into the effectiveness of SAEs in isolating interpretable features and validate the methodology used.\n\n- The use of GPT-4 for complexity scoring lacks rigorous validation. The paper does not provide inter-rater reliability metrics or comparisons with human annotations, nor does it discuss potential biases in the automated scoring. \n\n- The paper provides a limited exploration of why the \"Off-by-One\" motif exists in Mamba models. A deeper investigation into the underlying reasons for this motif would enhance the understanding of the structural differences between the Mamba and Transformer models and provide more insights into the universality hypothesis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Do Transformers have such layer-specific behavior when it comes to inductive ability?\n* Is there a way to empirically verify the claims in Sec 6.2?\nAppendix D: What does the feature mapping between RWKV and Mamba look like?\n* Sec 6.1: Is the layer-17 phenomenon robust to random initializations? 
I.e., if one retrains the SSM with another seed, would layer 17 still be the key in induction?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper studies an important problem.\n* It makes good use of sparse autoencoders for analysis.\n* The experiments in this paper are well implemented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the mechanistic similarity between language model structures (Mamba, RWKV, Transformer). The authors focus on their Universality, a hypothesized property suggesting that different neural architectures implement similar algorithms on similar tasks.\n\nIn the first part of the experiment section, they use Sparse Autoencoder (SAE) as the major tool for their analysis. The representations from two LM architectures are taken to train SAEs. The latent vectors in the SAEs, in which each dimension corresponds to various syntactic and semantic phenomena, exhibit mechanistic similarities, and it is possible to find a matching between the vectors from different LM architectures.\n\nIn the second part, the authors study the induction behavior of two LM architectures. They found that the 17th layer of Mamba is the most important for the LM’s inductive ability.\n\nI think this paper studies an important problem and is well executed. I found the experiments in this paper to be well implemented. My only concern is with the role of circuit analysis experiments. They are indeed very interesting but I’m not sure how they contribute to building a mechanistic analogy between SSMs and Transformers. Do Transformers have such layer-specific behavior when it comes to inductive ability? 
Is there a way to empirically verify the claims in Sec 6.2?\n\nMinor:\n\nAppendix D: What does the feature mapping between RWKV and Mamba look like?\nSec 6.1: Is the layer-17 phenomenon robust to random initializations? I.e., if one retrains the SSM with another seed, would layer 17 still be the key in induction?\nLine 179: missing section reference.\nLine 861: missing space between ‘Universality’ and ‘is’" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The role of circuit analysis experiments is unclear.\n* The claims made in Section 6.2 need to be empirically supported.\n* (Minor) The size of LMs is limited. Only ~100m models are used in the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- While the heatmap matrix in Figure 3c is mainly diagonal, I can see that there is a cluster of features located in the last layer of the Pythia and distributed fairly uniformly in the middle layers of Mamba. Can the authors clarify the meanings of these features?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is clearly written. \n- I appreciate the idea of using skylines, as it helps support the authors' claims. 
\n- The results are interesting and useful for further research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an exploration of the Transformer and Mamba models through mechanistic interpretability methodology. Despite the architectures being very different, the features and circuits of these models turn out to be very similar." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I couldn't identify any specific weaknesses. However, below are some suggestions that could enhance this work from my perspective:\n\n\n- It would be interesting to explore more circuits through SAE, as suggested in [1] (see Section 4.3). However, it is unclear where SAE should be placed within the Mamba architecture to achieve similar features.\n- While the Pearson correlation appears to be a natural choice for measuring feature similarity, it assumes that the feature space has linear properties. It might be worthwhile to explore other correlation measures, such as distance correlation, which could potentially yield better results.\n- A clear statement clarifying that MPPC refers to the maximum Pearson correlation between models' features is needed to improve understanding.\n\n[1] Interpreting Attention Layer Outputs with Sparse Autoencoders (Kissane et al.)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What justified the choice of plain SAEs over more performant variants such as [Gated](https://arxiv.org/abs/2404.16014) or [Top-K SAEs](https://openai.com/index/extracting-concepts-from-gpt-4/)? It is currently hard to gauge the impact of this choice on the obtained results, and whether findings could have been different if improved SAE variants were tested." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The universality evaluation pursued in this paper is timely and relevant, given recent advances in non-Transformer architectures for language modeling. The baseline and skylines employed in this work are well-motivated and provide helpful reference points for the analysis. The analysis of feature correlation based on complexity is also interesting, showing convincing proof that most commonalities across architectures are found for simpler SAE features. Overall, the figures are designed clearly and compellingly to support the findings detailed in the main body of the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work investigates the similarity of sparse autoencoders (SAE) features and induction circuits between a Transformer-based and a comparable Mamba-based language model trained on the same dataset. Results show that many features from Transformers SAEs show a high max pairwise Pearson correlation with Mamba SAE features, with their depth along model layers roughly matching across the two models. 
The correlations found for cross-architecture comparison are compared to a neuron baseline and two skylines obtained from different models and SAE training seeds, showing that the cross-architecture comparison falls only slightly short of skylines. Authors further examine the correlation between cross-architectural matching features and their complexity, finding that features with the most overlap are generally simpler and more monosemantic. Finally, the authors briefly investigate the similarity between induction circuits in the same architectures using path patching, finding a similar mechanism mediated by the convolutional operation of the Mamba architecture. Notably, the information mixing necessary for the induction operation is performed earlier in Mamba (\"off-by-one\" mixing)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Novelty of the findings** While to my knowledge, this is the first work evaluating the similarity of SAE features between Transformers and Mamba models, other works such as [Paulo et al. 2024](https://arxiv.org/pdf/2404.05971) and [Sharma et al. 2024](https://arxiv.org/abs/2404.03646) already showed that many interpretability approaches such as logit lenses, steering vectors, and probes produce similar results across a variety of LLM architectures (Mamba, Transformer and RWKV). It is, hence, not particularly surprising that such findings extend to SAE decompositions of model activations.\n\n**Generality of findings for larger models** Authors experiment only with 2 tiny models with ~100M parameters each. This can be reasonable in light of the requirements (same training data, effort of training SAEs on each layer), but these systems are significantly smaller than those used by [Paulo et al. 2024](https://arxiv.org/pdf/2404.05971) for comparable cross-architecture interpretability experiments. 
Notably, larger checkpoints for both models used by the authors are publicly available, including the same training data for control, and could have been used to further prove the generality of the reported results. Importantly, without further experiments, it cannot be excluded that the limited capacity of tiny models might be the main reason behind the high similarity of features and circuits across the two architectures, and this might not be the case for more capable models with e.g. 1B or 7B parameters.\n\n**Multiple Comparison and Correlation Analysis without Hypothesis Testing** The maximal correlation of feature activation patterns with other (24576 x # of layers) features is bound to be quite high due to the enormous number of comparisons. In Section 4.4, no hypothesis is formulated regarding the expected similarity of features found across the four tested variants, and consequently, no significance measure for the correlation coefficients is reported. As a result, conclusions regarding the similarity of Mamba and Pythia SAE features are ambiguous (e.g. the statement \"[...] our neuron baseline almost exhibits zero sign of similarity between Mamba and Pythia\" at line 268 does not agree with Figure 3a, where at least 15% of neurons exhibit a correlation > 0.4). To make the analysis more convincing, a clear hypothesis regarding the degree of similarity in resulting SAE features should have been formulated and tested for baseline, experiment, and skylines, each including a correction procedure such as the Bonferroni method to account for multiple comparisons.\n\n**Minor formatting/clarification points:**\n\nLine 135: The mention \"$F_a$ and $F_b$ are some kinds of operation function.\" is too generic in this context. 
The purpose of these functions should be specified, and at least one example of functions used for this purpose should be provided.\n\nLine 179: Broken reference.\n\nFigure 1 is too tight with the inline text, making the distinction between caption and main body text unclear.\n\nSection 4.1: title is ungrammatical. I imagine you meant something like \"Searching for / In search of Interpretable Primitives\".\n\nLine 202: Clarify that you mean all features for all SAEs across all model layers (it becomes clear only from Figure 3c later in the paper)\n\nLine 263: The acronym MPPC is never introduced alongside its meaning.\n\nFigure 5: The mention \"both Model Seed Variant and Cross-Arch SAE MPPC exhibit correlation, while one in SAE Seed Variant is weaker\" in the caption is not very meaningful, since the trends for all three variants are pretty similar. For part (b), the mention \"scores ranging from 1 (No) to 2 (Yes)\" is confusing: it would be better to say \"Distribution of MPCC for polysemantic (1) and monosemantic (2) auto-generated feature labels.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024towards,\ntitle={Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2J18i8T0oI},\nnote={under review}\n}" }, "abstract": { "value": "The hypothesis of \\textit{Universality} in interpretability suggests that different neural networks may converge to\nimplement similar algorithms on similar tasks. 
In this work, we investigate two mainstream architectures\nfor language modeling, namely Transformers and Mambas, to explore the extent of their mechanistic similarity.\nWe propose to use Sparse Autoencoders (SAEs) to isolate interpretable features from these models and show\nthat most features are similar in these two models. We also validate the correlation between feature similarity\nand universality. We then delve into the circuit-level analysis of Mamba models\nand find that the induction circuits in Mamba are structurally analogous to those in Transformers. We also identify a nuanced difference we call \\emph{Off-by-One motif}: The information of one token is written into the \nSSM state in its next position. In contrast, interaction between tokens in Transformers does not exhibit such a trend." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Mechanistic Interpretability", "Sparse Autoencoders", "Universality", "State Space Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/76d8c95ea4bf7e03eee8ccfd3dbd6ccf16747a40.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Towards Universality: Studying Mechanistic Similarity Across Language Model Architectures" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2JN73Z8f9Q
MultiMedia-Agent: A Multimodal Agent for Multimedia Content Generation
main
Active
multimodal agent;video generation
foundation or frontier models, including LLMs
3;3;5;5
4;4;3;4
1;2;3;2
2;2;2;2
1;3;3;2
4
3.75
2
2
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Refer to weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I believe the advantages of this type of article are self-evident and are enough to impact the industry. Therefore, compared to the advantages, I hope to discuss more about the missing parts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "I am very motivated by this article because it indeed addresses a real issue. However, as with other studies in this research path, the validation of the experiments is very weak. I hope the authors can discuss this in the discussion period." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This article seems more like a prototype design rather than a complete paper, as it lacks many implementation and experimental details.\n- I didn't see any examples, nor did I see any supplementary materials provided for demonstration (did I miss something?).\n- How is the success rate validated? How is success defined?\n- I understand A stands for audio, and V stands for video, but what does AV-V mean? What is the task? What is the goal? 
Does it require human involvement, as the paper mentions human alignment as a contribution?\n- What are Plan1, Plan2, and Plan3? What are the differences?\n- What do Agent1, Agent2, and Agent3 represent? What is their significance?\n- What does Average steps mean? Is fewer better?\n- What are the differences in the success rate between Tables 4 and 5?\n- Each task seems to have different input/output formats. How are they validated separately?\n- The images look very rudimentary, and some of the text is even unclear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned above, although I like the idea of this paper and enjoy reading it, I will have to reject it. To make this paper in better shape for publication, I would recommend the following.\n\n1. Conduct a more thorough survey of relevant studies and try to include them in the comparison/evaluation section.\n2. Improve the evaluation metrics and include more convincing experimental results on the superiority of the framework." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The originality of this paper is worth noting. The idea of applying Skill Acquisition Theory to the design of the framework is inspiring. 
Using information theory as a design guide when implementing multi-agent systems is a good idea, rather than just adding multiple iterations naively. I personally feel this is a really interesting idea and would definitely like to see more related work in the future.\n2. The paper's structure is very clear and easy to follow. The quality of the overall presentation is pretty good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduced a multi-agent large language model (LLM) framework based on Skill Acquisition Theory that supports any-to-any styled generation, including text, image, audio, and video. The framework was evaluated based on model-based preference evaluation metrics. As a result of the evaluation, the framework's best version (i.e., with 3 stages included) was able to show comparable performance to GPT-4o while the overall success rate is lower. In summary, this study was able to propose a relatively good multi-agent LLM framework with multiple components, which showed performance comparable to GPT-4o in certain aspects based on the metrics that the paper claimed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately, I will have to vote to reject this paper as it has some fundamental flaws in its evaluation. \n\n1. The evaluation metrics are not solid. Although the idea of this paper might look theoretically beautiful, its experiments lack convincing support. For content generation, especially when LLMs are involved, there have been numerous excellent studies in which multiple kinds of evaluation have been introduced. For example, when handling artistic or abstract content generation (e.g. music/audio/image), it would be hard to rely solely on LLM-based evaluation, as LLMs can have certain problems such as hallucination, and unfortunately these problems are still unsolved/understudied. 
Therefore, subjective evaluation is currently still necessary to evaluate generated content, especially for **content matching tasks**, such as A/B tests, ranking tests, or rating tests. This could easily and intuitively show the advantage of each model based on the impressions/ratings of a large group of experts/users. Several studies at recent top conferences have set remarkable examples of such evaluation, e.g., [1][2][3].\n2. The comparative study is not solid enough. This paper only compares the framework with GPT-4o. To put this paper in better shape for publication, it will need to include more relevant models/frameworks for comparison. \n\n[1] Yue, Xiang, et al. \"Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n[2] Deng, Qixin, et al. \"ComposerX: Multi-Agent Symbolic Music Composition with LLMs.\" arXiv preprint arXiv:2404.18081 (2024).\n[3] Guo, T., et al. \"Large Language Model based Multi-Agents: A Survey of Progress and Challenges.\" 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024). IJCAI; Cornell arxiv, 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tLarge Language Models (LLMs) can exhibit unpredictable behavior, so showing examples of failure cases for the MultiMedia-Agent would add depth and transparency. Analyzing these cases could provide insight into potential improvements.\n\n2.\tHas the success rate of the MultiMedia-Agent been quantified? Understanding the model’s reliability across different types of content generation would strengthen the case for its practical application and offer a valuable metric for future benchmarking. Did the authors notice any bias issues during content generation?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe paper is well-structured and easy to follow, making its technical concepts accessible to readers, which enhances understanding and supports the proposed research’s coherence.\n\n2.\tThe topic of multimedia content automation is timely and has high relevance, especially given the expanding demand for digital content across various domains, from marketing to education. This research holds considerable potential for real-world application, promising efficiency and scalability in daily content creation tasks.\n\n3.\tThe authors’ attempt to specialize in multimedia content generation represents an innovative approach that could fill an important gap in automated content creation, potentially providing richer, multi-modal outputs beyond current text-based LLM capabilities." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a multimedia content generation agent, referred to as the MultiMedia-Agent, which is designed to automate complex multimedia content creation tasks. The authors position their agent as a system capable of outperforming existing generative AI models in this space, including GPT-4o. Through comparative analysis, they argue that their proposed MultiMedia-Agent generates higher-quality multimedia content, offering better media outputs compared to GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe framework appears to primarily leverage existing technologies without significant structural innovation. It’s unclear if the advancements lie in model architecture or simply application. Expanding on how the MultiMedia-Agent advances beyond the foundational technologies would strengthen the paper.\n\n2.\tThe comparison with GPT-4o raises concerns, as GPT-4o is not explicitly designed for multimedia content generation. This choice limits the comparative relevance, as the study might benefit from benchmarking against more specialized or similar frameworks in multimedia generation. Adding such comparisons would enhance the credibility of the proposed system's advantages.\n\n3.\tI am a bit concerned about the evaluation metrics the authors proposed. It seems that most of the metrics are based on GPT-4o. It would be more convincing if the authors could show that the evaluation from GPT-4o truly aligns with human perceptions.\n\n4.\tMinor typographical errors appear in the text, including the abstract. 
For instance, in the abstract, “the our approaches” should be revised to “our approaches” to maintain professionalism and clarity.\n\nMinor Suggestions:\n•\tIncluding citations for comparison methods in Table 1 would allow readers to trace back the origins and contexts of these models, lending credibility and clarity.\n•\tEnsure consistent use of terms, such as “GPT4o” or “GPT-4o,” for a more polished presentation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Why not use ImageBind for task formulation or evaluation?\n2. What is the meaning of success rate? Does a failed plan mean it is not executable due to incorrect parameters, does not use the correct tools, or anything else?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The general idea of a multimodal generation agent with tool use and planning is interesting.\n2. The proposed method covers a wide range of tasks and tools." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a multimedia agent for content generation. It first proposes a data generation pipeline, a tool library and several evaluation metrics. 
A two-stage correlation plan curation method and a three-stage training pipeline are proposed according to the skill acquisition theory. The authors conduct experiments to compare its performance against GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is poorly written. The experiments are also not convincing.\n\n1. The only compared baseline is GPT-4o, which is not specifically designed for most of the tasks. More baselines should be added, such as those in Table 1, even if they are not applicable for all of the tasks. It is also not clear how the GPT-4o baseline is implemented for other tasks like audio generation or video generation.\n2. Samples of generation are not sufficient. The only provided demo is Figure 9, where the audio is indirectly presented as text descriptions. Demos for other tasks are not given.\n3. The experiment improvements are trivial. Besides, there are only 10 queries for each of the tasks in the validation set. What are the confidence intervals? Are the results statistically significant?\n4. The explanations of \"longer plans\" and \"fewer steps\" should not be concluded directly, but supported by additional experiments showing the average length of steps of each model.\n5. What is the metric in Table 6? What are the meanings of the metrics in Table 7 and the two tables in the appendix? Key explanations are missing. Also, why are the tasks in Table 7 all text generation tasks? Shouldn't they be \"xx-V\"?\n6. What are the details of the tools? Table 8 is not sufficient as entries like \"audio_to_text\", \"text_to_image\" are not detailed enough. For instance, what underlying models or algorithms are used? What are the input/output specifications and any key parameters?\n7. What are the details of the metrics in Section 3.3.1? The current description is not enough for reproducibility.\n8. Many typos and grammar mistakes throughout the paper." 
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes a multimedia content generation agent with a data pipeline and tools. A two-stage strategy optimizes content plans, showing improved alignment with human preferences." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024multimediaagent,\ntitle={MultiMedia-Agent: A Multimodal Agent for Multimedia Content Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2JN73Z8f9Q},\nnote={under review}\n}" }, "abstract": { "value": "With the advancement of AIGC (AI-generated content) technologies, an increasing number of generative models are revolutionizing fields such as video editing, music generation, and even film production. However, due to the limitations of current AIGC models, most models can only serve as individual components within specific application scenarios and are not capable of completing tasks end-to-end in real-world applications. In real-world applications, editing experts often work with a wide variety of images and video inputs, producing multimodal outputs---a video typically includes audio, text, and other elements. This level of integration across multiple modalities is something current models are unable to achieve effectively. However, the rise of agent-based systems has made it possible to use AI tools to tackle complex content generation tasks.\nTo deal with the complex scenarios, in this paper, we propose a multimedia content generation agent system designed to automate complex content creation. Our agent system includes a data generation pipeline, a tool library for content creation, and a set of metrics for evaluating preference alignment. Notably, we introduce the skill acquisition theory to model the training data curation and agent training. 
We designed a two-stage correlation strategy for plan optimization, including self-correlation and model preference correlation. \nAdditionally, we utilized the generated plans to train the MultiMedia-Agent via a three stage approach including base/success plan finetune and preference optimization. The comparison results demonstrate that the our approaches are effective and the MultiMedia-Agent can generate better multimedia content compared to GPT4o." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multimodal agent", "video generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/57e6e1b0cb352160ef4f8a52a96d1964d99afd8e.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MultiMedia-Agent: A Multimodal Agent for Multimedia Content Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2JXe3RprGS
Turn-by-Turn Driving Navigation: Leveraging Sequence Model for Real-time Audio Instructions
main
Active
Turn-by-Turn Navigation; Deep Learning; Sequence Models
applications to computer vision, audio, language, and other modalities
3;3;3
3;5;3
2;2;1
2;1;1
2;2;1
3
3.666667
1.666667
1.333333
1.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The navigation capabilities targeted by this research are effectively addressed by existing navigation apps, which can already provide lane-level guidance with real-time audio instructions. This research does not clearly establish a novel problem or significant gap in current technology.\n\n2. The authors assert that theirs is the first real-world application of deep learning for audio navigation. However, similar problems have been thoroughly researched and resolved in the NLP field, with deep learning applications already prevalent. Thus, the claimed contributions seem overstated.\n\n3. The summary of contributions lacks specificity, offering mostly general points without a clear overview of the work. This makes it difficult for readers to grasp the precise focus and innovations in the research.\n\n4. The Related Work section references too few sources and does not include recent advancements in the field. Although the authors highlight using deep learning to solve their problem, they fail to reference relevant studies on deep learning in audio navigation, a significant oversight.\n\n5. The \"Problem Formalization\" section is inadequately explained. 
Readers cannot clearly understand the input-output flow, and while Table 9 in the appendix offers some clarification, the choice to use intermediate features as inputs adds unnecessary complexity, making the initial inputs and outputs unclear.\n\n6. Authors state that large language models (LLMs) are unsuitable for TBT audio instruction, opting instead for a transformer-based approach. However, this claim lacks sufficient rationale, given that many LLMs perform well on similar tasks and are widely used in both academia and industry. The authors neither justify nor experimentally validate why LLMs would be unsuitable for this task.\n\n7. Proposed method appears overly generic and largely involves combining existing model components without introducing novel ideas. This approach lacks sufficient originality to merit publication at a conference like ICLR.\n\n8. It is unclear how the authors have knowledge of GPT’s exact architecture, given that it is a black-box model. Furthermore, considerations such as model size, inference speed, training time, and computational cost, which are crucial for real-time applications, are not discussed.\n\n9. Important components in the methods section, such as Deep CrossNet and GPT Decoder, are not adequately described. This lack of detail leaves readers uncertain about how these components function within the model.\n\n10. The experiments are disorganized and limited in scope. There is a lack of strong baselines and comparisons to recent, relevant work, making it difficult to ascertain whether the method achieves SOTA performance. The experiments also lack ablation studies, visualizations, and key information.\n\n11. During driving, overly detailed instructions may be distracting, as drivers may not want or need continuous audio prompts. This issue of instruction density is not addressed.\n\n12. 
Minor language errors persist, such as a missing space between \"Figures 3(b)\" and \"3(c)\" on line 422, which reflects a lack of careful proofreading.\n\n13. The paper lacks novelty, as indicated by the outdated citations and few references to recent research, which suggests that the work does not align with the current cutting-edge.\n\nOverall, this paper is structured more like a technical report than a research paper. Given its organization and limited scientific contribution, it does not yet meet the standard for acceptance at a conference." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper introduces a deep learning model for turn-by-turn navigation, offering two key benefits: more adaptive, context-aware audio guidance and reduced navigation errors. The model’s sequence-based design and cloud-edge setup ensure real-time, precise directions, greatly enhancing driver support​." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a deep learning model for turn-by-turn navigation, enhancing traditional audio guidance systems by making instructions more adaptive and context-aware. Using a sequence-based approach and a cloud-edge setup, it delivers real-time, precise directions, reducing navigation errors and easing driver cognitive load." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please refer to Questions section." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What new information does the background section provide? Most of it repeats content from the introduction, and the remaining parts would be more appropriate in the methodology section.\n2. The user study protocol is not clearly described. It’s unclear if 100 drivers actually drove the car following the navigation instructions, or if they merely judged the instructions by listening to them. If the evaluation was only auditory, it is inconclusive to determine the effectiveness of the proposed method.\n3. What roles do the GPT decoder and CrossNet network play in the proposed framework?\n4. There are no details on data preprocessing or how the embedding features are extracted.\n5. The paper lacks details on the layer-by-layer structure of the framework and how data is processed through each stage.\n6. How is accuracy calculated in the ablation study?\n7. What are the statistics of the user study?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors spent lots of resources developing this method, yet the benefit remains to be seen." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a method to optimize turn-by-turn navigation instructions for drivers using a deep learning approach. They tested their method on a custom-created dataset and analyzed results through real-world A/B testing. The authors claim to be the first to investigate this problem and report making significant progress in this area." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The quality of the paper is quite poor. The text is lengthy yet fails to convey the main message effectively. Key terms, such as \"yaw rate\" and \"seesaw effect,\" are either undefined or introduced too late in the paper. The related work section is missing, and relevant literature is not cited. The final paragraph of the background offers no new information and merely summarizes the introduction.\n\nThe methodology is difficult to understand, lacking motivation for using such a complicated framework and failing to clarify what advantages it provides. Important details are relegated to the appendix rather than included in the main text. The paper is mostly written in the passive voice, with vague statements like, “To address the challenges in generating real-time, context-aware instructions, we model the audio instruction in TBT driving navigation as a multi-task learning problem. Enables the model to optimize the necessary components for generating the audio.” They repeatedly use the term \"context-aware\" without explaining what it actually means." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is there a specific advantage to using geometric averaging of different loss functions in this task? Was there any comparison made with arithmetic averaging?\n2. The paper mentions that the HMM-based instruction policy was used during the data collection phase. Does this imply that the supervision signal for training the model comes from this algorithm?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents a novel application of deep learning.\n2. The paper is well-structured and easy to read." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a deep-learning based method for generating audio navigation instructions for drivers. The proposed neural model is constructed based on the Transformer architecture and a mixture of experts (MoE), along with a multi-objective loss function for training. Experimental results indicate that, compared to HMM-based methods, the proposed approach achieves higher subjective scores." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of novelty. 
There are few innovative designs observed in the network architecture or training.\n2. The experimental section lacks a broader comparison. Audio navigation instructions are a common feature in mapping applications, and there are likely many established methods available. The paper only compares with one HMM-based method, which was proposed in 2012 and is not novel.\n3. The ablation study does not provide valuable insights. The experimental results indicate that removing most individual modules (e.g. the MoE module) from the network does not lead to significant performance degradation, which raises concerns about potential redundancy in the network design." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel deep learning framework using sequence models to generate real-time audio instructions for turn-by-turn navigation, which is the first large-scale application of deep learning in driving audio navigation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024turnbyturn,\ntitle={Turn-by-Turn Driving Navigation: Leveraging Sequence Model for Real-time Audio Instructions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2JXe3RprGS},\nnote={under review}\n}" }, "abstract": { "value": "Turn-by-turn (TBT) navigation systems are integral to modern driving experiences, providing real-time audio instructions to guide drivers safely to destinations. However, existing audio instruction policies often rely on rule-based approaches that struggle to balance informational content with cognitive load, potentially leading to driver confusion or missed turns in complex environments. To overcome these difficulties, we first model the generation of audio instructions as a multi-task learning problem by decomposing the audio content into combinations of modular elements. 
Then, we propose a novel deep learning framework that leverages the powerful spatiotemporal information processing capabilities of Transformers and the strong multi-task learning abilities of Mixture of Experts (MoE) to generate real-time, context-aware audio instructions for TBT driving navigation. A cloud-edge collaborative architecture is implemented to handle the computational demands of the model, ensuring scalability and real-time performance for practical applications. Experimental results in the real world demonstrate that the proposed method significantly reduces the yaw rate compared to traditional methods, delivering clearer and more effective audio instructions. This is the first large-scale application of deep learning in driving audio navigation, marking a substantial advancement in intelligent transportation and driving assistance technologies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Turn-by-Turn Navigation; Deep Learning; Sequence Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7736a80b44c9df5956806c27cd27acdeffe855c6.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8b458cc90538bf25d30061c28133356ae75a02d4.zip" }, "title": { "value": "Turn-by-Turn Driving Navigation: Leveraging Sequence Model for Real-time Audio Instructions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2JihLwirxO
ParaSolver: A Hierarchical Parallel Integral Solver for Diffusion Models
main
Active
Diffusion Models;
generative models
5;6;8
3;4;3
4;4;3
3;4;3
3;4;3
6.333333
3.333333
3.666667
3.333333
3.333333
-0.188982
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. See the Weaknesses section above. These must be addressed.\n\n2. The language in the paper hinders the presentation occasionally. For instance, the second paragraph of the related work section (Section 2) was challenging to read, primarily due to strange use of passive voice. There are similar issues throughout the paper. I suggest reframing to active voice wherever possible to improve clarity.\n\n3. Section 4.2, below equation (9): What is the \"reverse of Jacobian matrix\"? Do the authors mean the inverse? \n\n4. The authors separately explore tolerance and speedup in the results. I'd like to know which tolerance leads to the best speedup without compromising visual results. The authors should add a new graph with this extra information." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "There has been a surge of recent interest in fast parallel sampling of diffusion models. The state of the art for parallel sampling, to the best of my knowledge, appears to use Picard iterations to solve the nonlinear system of equations. 
The authors of this work make a few important contributions, all of which serve to accelerate convergence: (1) they use Newton's rootfinding method, which converges quadratically to the root for smooth enough functions; (2) they leverage the banded structure of the Jacobian to accelerate their solver; (3) they come up with a good initialization for Newton so it in fact converges; (4) they batch their parallel sampling and denoising so that it only happens within a sliding window." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present an approach to accelerating the inference of diffusion probabilistic models (DPMs). They transform the problem of sequential sampling of DPMs into one of solving banded nonlinear equations. The Jacobian of the nonlinear system, required by Newton's method for rootfinding (aka Newton-Raphson) is unit block-lower-banded (1 on the diagonal, bands below), allowing for efficient parallel solution through a simple recurrence relation. The authors also present an initialization procedure that accelerates convergence. Finally, they combine this framework with a sliding window technique to conduct parallel iterations only a subset of the points. The combined approach is then evaluated on StableDiffusion-v2 and the LSUN Church pixel-space diffusion model, and demonstrates large speedups on inference without a loss in visual quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Newton's method for rootfinding converges rapidly only if the function one is rootfinding on is sufficiently smooth. The authors should discuss the smoothness properties of the nonlinear system and how it impacts the convergence of the Newton solver, and also comment on the theoretical guarantees and limitations of their approach in this context.\n\n2. If the nonlinear residual for the nonlinear system has a complicated landscape, Newton can easily get stuck. 
The state of the art in optimization is to either use trust-region Newton methods or use quasi-Newton. The authors skirt around this issue altogether and count on their results and experiments to drive their point home. It would be useful to see the loss landscape as a function of, say, two of the most \"important\" unknowns (determined for instance by PCA) or the eigenvalues of the kernel matrix of the neural tangent kernel to determine if Newton is the right choice for this problem. Alternatively, if the authors could justify why these failure modes don't occur in PDMs, that would also suffice.\n\n3. How are Equation 12 and 13 justified? If the Jacobian term in the paragraph below Equation 11 is expensive to compute, why not approximate it? Newton's convergence rate requires at least an estimate for the Jacobian. Using the identity matrix instead effectively reverts Newton to a first-order method. Did the authors experiment with alternatives? Please provide theoretical/empirical justification for using the identity matrix approximation and discuss any experiments you conducted with alternatives.\n\n4. Rootfinding can be inherently unstable. Did the authors investigate other alternatives, such as optimization-based methods? Why did the authors choose one over another?\n\n5. This is minor, but I would've picked a less generic name for the paper. \"ParaSolver\" could imply a large number of things, but this is mainly a Newton-based parallel solver for PDMs. Consider a name change." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I have only a few minor comments:\n- Please improve Figure 1 by using higher resolution, adding axis labels, and using a consistent font and style with the other figures in the paper.\n- Figure 5: There is, to me, no discernible difference between the images for different N. Can the authors comment on this? Why do we see such a clear difference for DDPM, but not for DDIM? It would be good if the authors either (1) provide a quantitative analysis of the differences between results for different N, if they exist, or (2) explain why DDIM results are less sensitive to N compared to DDPM.\n- In the Table 1 & 2, the results are ordered as DDPM, DDIM, DPMSolver, whereas in the figures the order is DDIM, DPMSolver, DDPM. I would appreciate some reordering to make it consistent." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I believe this paper is generally well written and makes an relevant contribution to the field.\n\nAll claims are well supported by experiments, and the analyses appear sound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present an interesting extension of previous work for inference in DPMs. The general idea is to formulate the solution to the ODE or SDE not as a sequential integration, but instead look at it as solving a set of nonlinear equations, done either via fix-point iteration or utilizing root finding algorithms. 
While this class of approaches does not improve the computational effort per se, it can lead to reduced wall-clock time by using fewer evaluation points compared to what is necessary when sequentially integrating the differential equation.\n\nThe paper proposes a unified framework that encompasses previous approaches as extreme cases. This results in a set of banded nonlinear equations. One key insight of the authors is to realize and prove that the banded system possesses a unique and unbiased solution. They then further utilize the Newton method of root finding to accelerate the fixed-point iterations. For this, one needs to calculate the Jacobian matrix. This, in general, is computationally prohibitive. An approximation scheme is proposed, where only the diagonal of the Jacobian is used, and the off-diagonal terms are set to unity. This results in only a modest increase in function evaluations over a sequential solution, indicating, in addition to the reduced wall-clock time, only a small increase in computational cost.\n\nThe achieved scores are on par with previous methods. A sizeable speedup in terms of wall-clock time is achieved, leading to a better user experience. This is done without a massive increase in computational cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I believe the differences to the established methods utilizing fixed-point iterations for DPMs and their advances such as utilizing the Anderson acceleration used in previous work could be made clearer. It is currently not clearly mentioned that ParaTAA utilizes a conceptually similar idea. Although the approaches are of course different, they share common ideas which do not become clear without reading the literature carefully. 
I would encourage the authors to rework the related work section and mention the differences to the other works more clearly.\n- It would be interesting to see by how much the number of necessary iterations to reach the threshold decreases by utilizing the Newton method. I recommend the authors include an ablation study showing how the number of iterations and convergence are affected by (1) using the Newton method vs. fixed-point iteration, and (2) approximating vs. fully computing the Jacobian, on a toy problem." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The reviewer's main question about the design of the ParaSolver algorithm is the claim in lines 244-247 of the paper. Specifically, the authors proposed to approximate the Jacobian term $\\frac{\\partial}{\\partial \\hat{X}^{(k)}_{t_n}}\\Phi(t_{n+1}, t_n, \\hat{X}^{(k)}_{t_n})$ with the identity matrix in the original update rule (11). Could the authors discuss which specific parts in the cited papers on Jacobian-free backpropagation (lines 245-246) actually used similar techniques? Furthermore, would it be possible for the authors to provide some mathematical intuitions on why the identity matrix should work here? Is it possible to derive some error bounds via numerical analysis?\n\nReferences:\n\n[1] Tang, Z., Tang, J., Luo, H., Wang, F. and Chang, T.H., 2024, January. 
Accelerating parallel sampling of diffusion models. In Forty-first International Conference on Machine Learning.\n\n[2] Shih, A., Belkhale, S., Ermon, S., Sadigh, D. and Anari, N., 2024. Parallel sampling of diffusion models. Advances in Neural Information Processing Systems, 36.\n\n[3] Luhman, E. and Luhman, T., 2021. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388.\n\n[4] Meng, C., Rombach, R., Gao, R., Kingma, D., Ermon, S., Ho, J. and Salimans, T., 2023. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14297-14306).\n\n[5] Salimans, T. and Ho, J., 2022. Progressive distillation for fast sampling of diffusion models. arXiv preprint arXiv:2202.00512.\n\n\n[6] Xu, Y., Deng, M., Cheng, X., Tian, Y., Liu, Z. and Jaakkola, T., 2023. Restart sampling for improving generative processes. Advances in Neural Information Processing Systems, 36, pp.76806-76838.\n\n[7] Song, Y., Dhariwal, P., Chen, M. and Sutskever, I., 2023. Consistency models. arXiv preprint arXiv:2303.01469." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. This paper has provided a complete literature review of related work on accelerating diffusion models via parallel sampling. Also, both the theoretical and algorithmic results in this paper are presented in a relatively clear way to follow. \n\n2. A complete set of large-scale numerical experiments on the Imagenet and LSUN-Church datasets are included to justify the acceleration achieved by the proposed ParaSolver algorithm." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a framework that generalizes the sequential sampling process of diffusion models as solving a system of banded nonlinear equations. Applying the Newton-Raphson method to solve the nonlinear equations then yields a corresponding parallel sampling algorithm for diffusion models. By utilizing the unit-diagonal structure of the banded nonlinear equations' Jacobian matrices, the authors further simplified the updating rules of the parallel algorithm. Extensive numerical experiments were also conducted to show that the ParaSolver algorithm proposed in this paper can indeed accelerate the inference time of diffusion models compared to existing implementations based on parallel sampling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors mentioned in lines 366-367 that the ParaTAA algorithm proposed in [1] needs to be implemented for comparison as it has yet to be integrated into the Diffusers library. However, given that there are only a few empirical works on combining parallel sampling with diffusion models, the reviewer thinks it would be essential for the authors to implement ParaTAA and use it as one extra baseline. Moreover, it might also be necessary for the authors to compare ParaSolver with approaches that accelerate diffusion models from other aspects, such as knowledge distillation [3-5], restart sampling [6], and self-consistency [7]. Furthermore, the authors should consider releasing the code used for implementing the ParaSolver algorithm. \n\n2. There are some minor issues regarding the presentation of the paper. For instance, the phrase \"to fast construct a set of more precise initial values that conform to the Definition 1\" in lines 296-297 doesn't seem quite right. 
It can be possibly rephrased as \"to construct a set of more precise initial values that conform to the Definition 1 quickly\". Moreover, the authors might also consider adding a few figures to illustrate the ParaSolver algorithm more vividly, just as what has been done in previous work [2]." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A parallel sampling algorithm for accelerating inference of diffusion models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024parasolver,\ntitle={ParaSolver: A Hierarchical Parallel Integral Solver for Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2JihLwirxO},\nnote={under review}\n}" }, "abstract": { "value": "This paper explores the challenge of accelerating the sequential inference process of Diffusion Probabilistic Models (DPMs). We tackle this critical issue from a dynamic systems perspective, in which the inherent sequential nature is transformed into a parallel sampling process. Specifically, we propose a unified framework that generalizes the sequential sampling process of DPMs as solving a system of banded nonlinear equations. Under this generic framework, we reveal that the Jacobian of the banded nonlinear equations system possesses a unit-diagonal structure, enabling further approximation for acceleration. Moreover, we theoretically propose an effective initialization approach for parallel sampling methods. Finally, we construct ParaSolver, a hierarchical parallel sampling technique that enhances sampling speed without compromising quality. Extensive experiments show that ParaSolver achieves up to 12.1× speedup in terms of wall-clock time. The source code will be publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Models;" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/096c62b67c3e353f8a89dc7cafc33ca4a985db55.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ParaSolver: A Hierarchical Parallel Integral Solver for Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2KWZjdFwmh
StEVE: Adaptive Optimization in a Kronecker-Factored Eigenbasis
main
Active
KFAC;EKFAC;Natural Gradient Descent;Adam;Optimization;Stochastic Optimization
optimization
3;3;3;8
3;3;4;4
3;1;2;3
3;2;2;4
2;3;3;2
4.25
3.5
2.25
2.75
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Why is KFAC so much slower per step compared to EKFAC? E.g. in Figure 1, both KFAC and EKFAC perform 100 epochs, yet KFAC requires roughly 3x the wall-clock time.\n- Could you add a short paragraph providing a complexity analysis of the computational and memory requirements of StEVE compared to Adam and EKFAC? In my understanding, it should be very similar to EKFAC in both time per step and memory, with the additional memory of a second EMA (for the first moment). Is this correct?\n- Line 19: Should it be \"StEVE\" instead of \"EVE\"?\n- Suggestion: Both Section 1 and Section 2 extensively describe existing work. Only at the bottom of page 4, do you start describing your own method. If you compress Sections 1 and 2, you have more space to present your method, which I think would strengthen your paper.\n- In the paragraph starting at line 79, I think it might be worth mentioning and discussing Shampoo [e.g. 3] and related methods. Shampoo recently won the AlgoPerf: Training Algorithms competition and seems to be a practically relevant non-diagonal method (with likely use in training Gemini models).\n- Line 88: George et al. 
should probably be a parencite or citep.\n- Line 91: It should probably be \"Due to the expensive nature of [] computing [the] KFE \".\n- Line 127: There should probably be a space before the citation.\n- In Adam's equation, I think there is something missing for the EMA of the second momentum. Either a second gradient after the element-wise multiplication or rather a square (since you mention squaring below).\n- Also just below the equation (line 148) you mention \"vector-multiplication of $\\epsilon$. Do you mean \"addition\"? I don't see where $\\epsilon$ is multiplied.\n- Is there a reason that Section 1 uses $\\mathbf{P}$ as the preconditioner (line 45) and in Section 2 you use $\\mathbf{A}$ (line 116) instead?\n- Line 197: I think $USU$ should also be bolded, since you use bold-face for matrices, no?\n- Line 198: Is this sentence missing a \"to\", i.e. \"which is to say converting the gradient [to] $\\mathbf{A}$'s Eigenbasis\"?\n- In Algorithm 1, you could highlight the differences between StEVE and EKFAC, e.g. by coloring lines that changed.\n- Line 270: There is a double \"against\".\n- Line 271: \"Epoch Count\" and \"Wall-Clock Time\" should probably both be lowercase.\n- The figures, and especially the legends are relatively small and thus hard to read.\n- In the figures, try using a consistent coloring/legend. For example, Adam is yellow in Figure 1 but in Figure 2 KFAC is yellow. This makes it hard to quickly compare across figures. 
The colors are also relatively similar (yellow, orange, red, pink) and thus hard to distinguish.\n- Is there a reason to not compare to KFAC and EKFAC for the ViT on CIFAR-100?\n\n[3] Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer; \"Towards Practical Second Order Optimization for Deep Learning\"; OpenReview 2021; <https://openreview.net/forum?id=Sc8cY4Jpi3s>" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper presents a novel and interesting approach to incorporating an Adam-style update into the EKFAC optimizer. The resulting algorithm is clearly described in Algorithm 1 and is rather straightforward to implement. The paper also provides code for the new optimizer (as well as the experiments). It presents an extensive introduction and background section and thus an accessible explanation of the method.\n- Faster neural network training is a crucial research topic and any progress in this area is of great interest to the entire deep learning community.\n- The paper not only focuses on the number of steps but also considers the - practically much more relevant - wall-clock runtime." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces StEVE, a novel deep learning optimizer that combines aspects of Adam and KFAC. Specifically, they modify EKFAC, which corrects the eigenvalues of the KFAC approximation, by adding Adam's bias-corrected first and second moment estimators. The authors show that StEVE achieves faster training to a target performance in both step count and wall-clock time compared to Adam, KFAC, and EKFAC, on three different deep learning problems on CIFAR-10, CIFAR-100, and Tiny ImageNet." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The empirical evidence for StEVE is too weak to be convincing. As there are now hundreds of deep learning optimizers, the empirical burden of proof of superiority is quite high, especially for optimizers like StEVE who are mostly motivated by their empirical performance. I believe the currently provided experiments don't provide enough evidence to convince people to adopt it in practical applications, for the following reasons:\n\n- Most importantly, the hyperparameter selection seems to be performed in an opaque and potentially unfair way. Apparently, no hyperparameter tuning was performed, e.g., with all optimizers sharing the same learning rate. Yet, the selected learning rate differs between experiments (e.g. 0.001 for CIFAR-10 and 0.00005 for CIFAR-100). How was this chosen? I suspect that these choices work well for StEVE, but not the compared baseline. A more meaningful comparison would be to either tune the hyperparameters for each method on each test problem independently (using the same budget) or use fixed hyperparameters for all methods that are shared across all test problems. The latter would be a \"hyperparameter-free\" optimization and would require different baselines, e.g. Schedule-Free [1].\n- All experiments are done on small problems, with CIFAR-100 being the largest. Also, all are from the same data domain and task, namely image classification.\n- No learning rate schedule was used. I don't think a constant schedule is a very practical choice.\n- Overall, the baselines seem to be very weak, likely due to inefficient hyperparameter choices (see the first point).\n- The target performances seem rather impractical, e.g. only 44% on Tiny ImageNet and 46% on CIFAR-100. This is far from the performance that one can achieve on these datasets (with the used models) and thus not a performance practitioners care about. 
This is relevant because optimizers that can quickly achieve a low performance can be quite different from optimizers that achieve a more competitive performance quickly.\n\nWithout a more rigorous evaluation, I doubt that the method will have a significant impact. I suggest having a look at [2], which describes a protocol for comparing deep learning optimizers. Although running the full benchmark might be too computationally expensive, following some of the described practices could significantly strengthen the empirical evidence for StEVE and thus demonstrate its strength more convincingly.\n\n[1] Aaron Defazio, Xingyu Alice Yang, Harsh Mehta, Konstantin Mishchenko, Ahmed Khaled, Ashok Cutkosky; \"The Road Less Scheduled\"; arXiv 2024; <https://arxiv.org/abs/2405.15682>\n\n[2] George E. Dahl et al.; \"Benchmarking Neural Network Training Algorithms\"; arXiv 2023; <https://arxiv.org/abs/2306.07179>" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How was the learning rate for the experiments chosen?\n- Why weren't the learning rates set individually for each competing method?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper gives a good introduction to relevant prior work and the contributions are adequately positioned in the context of prior work.\n- The idea for the method is well-motivated, lifting the adaptive Adam scheme to a Kronecker-factored eigenbasis. To my knowledge, this idea has not been explored before and is original.\n- The method shows promising initial results in the experimental framework of the paper.\n- The paper is generally well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an optimization method for deep learning, which performs Adam-style adaptation in a Kronecker-factored eigenbasis. The proposed method is evaluated empirically against vanilla Adam as well as other Kronecker-factored optimizers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed method is a straight-forward combination of existing ideas. No supporting theory is provided. In my opinion, such a paper needs a very detailed and fair experimental comparison to warrant publication at ICLR. Unfortunately, the quality of the experiments is subpar. To mention just a few issues I see\n- Experiments are run with a single random seed.\n- All methods use the same learning rate and it is not explained where that learning rate value comes from. This is not adequate for an empirical comparison of different optimizers.\n- Experiments use a constant learning rate instead of established learning rate decay schedules." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: I would like to draw your attention to concurrent work SOAP [3], which seems closely related as it also uses Adam inside a second order preconditioning approach, Shampoo [2]. This doesn’t lower the originality of your proposal, being concurrent work that you likely couldn’t know about at the time of submission. But given the relatedness of the approaches, I am interested to know how you would contrast them? What can you highlight as the differences / anticipated benefits & limitations of STEVE=EKFAC+Adam v.s. SOAP=Shampoo+Adam ?\nAlso, how do they compare in memory and compute complexity? \nThis discussion could become part of a fleshed out related works section. \n\nQ2: Algo lines 16 and 17 eigendecomposition(...): what are the expectations over B and T? Can you provide more details on how these expectations are computed/estimated/tracked? (I suggest to also update the algo box to provide this additional level of detail, as well as main text l 283 “running averages”)\n\nQ3: Training loss curves associated with your test accuracy curves.\nCan you include these (in supplementary if space is insufficient in main)\nDo the higher test accuracies also correspond to lower training losses? 
Please discuss.\n\nQ4: What are the test accuracy and training loss reached by all algos at the max number of iterations you used?\n\nQ5: Sensitivity to hyperparameters?\nDo you have evidence that your optimizer outperforming Adam does not require extensive fine-tuning of (additional?) hyper-parameters. E.g. how sensitive is it to recompute frequency?\nSimilarly you write l291 “The other methods did not converge at this learning rate”, but would thay at other rates?\n\nFurther clarifying suggestions:\n- L 200 “as the critically important eigenvalues … are not preserved by the approximation” -> needs more explanation.\n- The explanation of EKFAC and in particular the KFE in paragraph line 202 is too dense. This is the algo that you build on, so please try to lighten expand and clarify.\n- BUG towards end of update equation for Adam’s $v_{t+1}$ line 139, missing a square?\n- Curves: please use more easily distinguishable colors than different shades of red! (given the chance, make them color-blind friendly, see e.g. https://davidmathlogic.com/colorblind, and/or use different line styles)\n- Figure 3 is missing KFAC and EKFAC. \n\n\nTypos and English fixes:\n- Abstract L19: “EVE” -> “STEVE”\n- L 148: “vector-multiplication of $\\epsilon$ are done element-wise”. I see no vector multiplication of $\\epsilon$ ???\n- L 161: “is taking” -> “is taken”\n- L 175: “reduces” -> “which reduces”\n- L 198: “converting” -> “changing to”\n- L 270: “against against” \n- L 283: “running averages”, computed how exactly?\n- L 285: $\\alpha$ has never been defined. You should at least say what it is and does in KFAC/EKFAC.\n- L 291: “The other methods did not converge at this learning rate” -> do you mean they did to reach the target accuracy? What about at other learning rates? Did you hyper-optimize over it, and how sensitive are the methods to it? 
\n- L 382: “of the Fisher” -> “of the empirical Fisher”\n- L 391: “Other directions to take the work are to investigate the potential of the improvements that have been made over Adam in the KFE such as proper weight decay or Nesterov momentum.” -> “Future work should also investigate the potential of using, in the KFE, other improvements that have been made over Adam, such as proper weight decay [ADD REFERENCE] or Nesterov momentum [ADD REFERENCES].\n\n\n\n[1] Benchmarking Neural Network Training Algorithms, Dahl et al. 2023 https://arxiv.org/abs/2306.07179\n\n[2] Shampoo: Preconditioned stochastic tensor optimization. V. Gupta, T. Koren, Y. Singer. ICML 2018\n\n[3] SOAP: Improving and Stabilizing Shampoo using Adam. Vyas et al. September 2024.\nhttps://arxiv.org/abs/2409.11321" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Originality: The optimizer developed in the paper is novel: an original combination of the strengths of EKFAC and Adam. \n\n- Significance: This is a significant and timely contribution in the context of a heightened interest in more efficient optimization methods for deep learning (e.g. [1,3]) . The development of better off-the-shelf optimizers suitable for training deep learning models is an essential component for driving progress in the field, as can be seen in the wide adoption of Adam. In spite of their theoretical superiority, non-diagonal second order methods have struggled to manifest practical superiority for training standard deep learning models over their simpler diagonal counterparts. That the proposed method manages to convincingly beat Adam on deep network training tasks, both in number of epochs and in wallclock time, is thus significant. 
It showcases the potential of the approach and warrants the attention of the community.\n\n- Clarity: Motivation, background, and the proposed method are clearly explained (except for minor glitches, see below). This is in part thanks to a clear algorithm box. I also appreciate that readily usable pytorch code is given in the supplementary for reproducibility. Experimental setup and methodology are also briefly but clearly explained.\n\n- Quality: The approach is well-motivated, appears sound and well implemented, and the presented experiments convincingly support the claim of superiority of the developed optimizer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work proposes STEVE, a novel optimization method that combines the strengths of the Adam optimizer (cheap tracking and adaptation to diagonal second order properties) and EKFAC (amortized better approximation of full second order). This is achieved by applying Adam, not in original parameter space, but in the Kronecker-Factored Eigenbasis (KFE) i.e. the “preconditioning” basis used by KFAC and EKFAC. Experiments on image classification tasks with ResNet-50 and ViT architectures show significantly faster optimization compared to Adam, both in number of epochs and wall-clock time." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Missing a more thorough recent related works discussion.\nRelated work pertaining to background is well covered, but the paper is missing a section discussing later advances in second order optimization methods for deep learning. See [1,2,3] and first question below for starting pointers.\n\n- The experimental analysis could have been pushed further: to include also training loss curves, and an evaluation and discussion of the relative sensitivity to hyperparameters. (see questions section for details).\n\n- Somewhat limited scope and scale of experimental evaluation. 
\nWhile I value the experimentation on 2 different deep architectures ResNet50 and ViT and 2 image datasets, a more extensive experimentation on a larger variety of tasks and datasets would help to more solidly establish the advantage of the approach. See e.g. deep net training benchmark [1]. \n\n- Paper would benefit from a little more polishing. \nSome (minor and easily fixable) clarity issues. See questions part for a list and suggested improvements." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the memory and computational complexity of the proposed opt? \n\nHow frequently is the preconditioner updated? Shampoo updates every 100 iterations, PSGD updates every 10 iters. It would be good to see how often the precond must be updated and how it affects performance. \n\nVariance bars?\n\nThe claim that the proposed opt significantly outperforms (40% reduction in wall clock time) Adam in fig 1 seems not true based on wall clock time. It seems at the end of training Adam ends at a higher accuracy, and Adam actually matches StEVE only a few hundred seconds later. Since the authors do not show variance bars we have no way of knowing if this is a legit speedup. \n\nFurthermore, with the extra memory needed to train with StEVE one could easily boost batch size for Adam and see an improvement in performance." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper adopts and extends the style of thinking seen in EKFAC to apply diagonal corrections to KFAC." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Adamize diagonal corrections to KFAC in a similar way to EKFAC" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper opts to use SGD as the base opt for KFAC and EKFAC. The official and unofficial codebases for KFAC allow and some actually suggest to use Adam as the base opt for KFAC and EKFAC. This is because it's well known that when we operate using Adam as the base opt that drives E/KFAC it works better. \n\nThe authors say: \n\n\"However, instead of using only the second moments, STEVE maintains bias-corrected exponential moving averages of both the first and second moments of the gradients in the KFE, estimated in the same manner as in Adam. By combining the benefits of the Kronecker-factored approximation with the\nadaptive moment estimation of Adam, STEVE aims to achieve faster convergence.\" \n\nWhile the opt in this paper is not exactly Adam as the base opt driving E/KFAC, it is in a similar vein; as such, it would have been helpful to have run experiments with SGD and Adam as the base opts for E/KFAC so we could see if there is a delta. \n\nAnother weakness is not comparing to Shampoo, which is an alternative Kronecker-factorized optimizer that has become quite popular recently due to its strong performance at Google. Furthermore, the same way this paper proposes Adamized diagonal 1st and 2nd moment corrections to KFAC, SOAP proposes this for Shampoo. As such this paper should really compare to those methods. 
\n\nFurthermore, PSGD Affine or Kronecker factorized has been shown to outperform E/KFAC as well as Shampoo/SOAP and should be compared as well for this paper to be complete. \n\nAnother weakness is the use of a ViT for cifar datasets. The images are too small for patches to make sense and so it generally doesn't do well. Something like Keller's modded-nanoGPT would be a good place to show the performance of the opt since it's been benchmarked against all the latest curvature informed optimizers." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Adam in the Kronecker-Factored Eigenbasis of the Empirical Fisher is faster than Adam, KFAC, and EKFAC" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024steve,\ntitle={St{EVE}: Adaptive Optimization in a Kronecker-Factored Eigenbasis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2KWZjdFwmh},\nnote={under review}\n}" }, "abstract": { "value": "Adaptive optimization algorithms such as Adam see widespread use in Deep Learning. However, these methods rely on diagonal approximations of the preconditioner, losing much information about the curvature of the loss surface and potentially leading to prolonged training times. We introduce StEVE (Stochastic Eigenbasis-adaptive Variance Estimation), a novel optimization algorithm that estimates lower order moments in the Kronecker-Factored Eigenbasis (KFE). By combining the advantages of Adam over other adaptive methods with the curvature-aware transformations of methods like KFAC and EKFAC, StEVE leverages second-order information while remaining computationally efficient. Our experiments demonstrate that EVE achieves faster convergence both in step-count and in wall-clock time compared to Adam, EKFAC, and KFAC for a variety of deep neural network architectures." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "KFAC", "EKFAC", "Natural Gradient Descent", "Adam", "Optimization", "Stochastic Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/85a957d69aa7c429db050696472e27bbc72df236.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/1eef9637bea57793c0f3a399e883efd7a18446b5.zip" }, "title": { "value": "StEVE: Adaptive Optimization in a Kronecker-Factored Eigenbasis" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2L1OxhQCwS
Transformers versus LSTMs for electronic trading
main
Active
transformer;LSTM;electronic trading
learning on time series and dynamical systems
3;3;3;3;3;5
5;3;4;4;3;3
1;1;2;3;2;3
1;1;2;2;1;3
2;3;1;2;2;3
3.333333
3.666667
2
1.666667
2.166667
-0.4
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What are the fundamental architectural characteristics that make LSTM models more effective for differential sequences compared to Transformers?\n2. Can you provide deeper analysis to support the generalizability of your findings of LSTM vs Transformer?\n3. How does financial time series forecasting differ from other time series forecasting (like weather, traffic, etc.)?\n4. To address the remaining limitations identified in *Weaknesses*: a) Could you provide detailed model implementations and hyperparameter configurations? b) How would decomposition techniques benefit Transformer architectures? c) Please include comparisons with state-of-the-art models" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses a relevant and significant question by comparing LSTM and Transformer models in financial time series forecasting.\n2. The experimental setup is extensive and provides substantial data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This research compares the effectiveness of Transformer and LSTM architectures in financial forecasting. 
The study examines both model types using high-frequency trading data and introduces DLSTM and a finance-specific Transformer. Results show that Transformers only slightly outperform in absolute price predictions, while LSTMs show more reliable performance overall." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks code and detailed implementation information for both the Transformer and LSTM models, which limits reproducibility.\n2. The novelty of the proposed approach is limited. While the authors introduce a DLSTM model to improve performance, the idea of decomposition was previously explored in models like DLinear [1], diminishing the originality of the contribution. Beyond the comparative analysis, additional innovation is also limited.\n3. The decomposition strategy appears to be applied only to the LSTM model. For a fair comparison, a decomposition approach for the Transformer model should also be included. In Table 3, DLSTM significantly outperforms LSTM, which suggests that a decomposed Transformer might also show improved results.\n4. The paper does not include several state-of-the-art (SOTA) Transformer-based models, such as PatchTST [2], Crossformer [3], and iTransformer [4], in the comparison, which limits the comprehensiveness of the analysis.\n5. The statement \"Transformer-based models exhibit only a marginal advantage in predicting absolute price sequences, whereas LSTM-based models demonstrate superior and more consistent performance in predicting differential sequences such as price differences and movements\" requires further investigation. A deeper analysis into the underlying causes of this observed difference is missing, which weakens the interpretability of the results.\n\n[1] Zeng, Ailing, et al. \"Are transformers effective for time series forecasting?.\" Proceedings of the AAAI conference on artificial intelligence. Vol. 37. No. 9. 
2023.\n\n[2] Nie, Yuqi, et al. \"A Time Series is Worth 64 Words: Long-term Forecasting with Transformers.\" The Eleventh International Conference on Learning Representations.\n\n[3] Zhang, Yunhao, and Junchi Yan. \"Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting.\" The eleventh international conference on learning representations. 2023.\n\n[4] Liu, Yong, et al. \"iTransformer: Inverted Transformers Are Effective for Time Series Forecasting.\" The Twelfth International Conference on Learning Representations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Dataset diversity and generalizability: Can you provide more insights into the choice of using only Binance LOB data for a single cryptocurrency pair in your experiments? How do you expect the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models to perform on a wider range of financial instruments, such as stocks, forex, or other cryptocurrencies, as well as data from multiple exchanges? Providing results on more diverse datasets could strengthen the claims of generalizability and robustness of the findings.\nAblation studies and component contributions: Can you conduct ablation studies to investigate the individual contributions of the time series decomposition approach in the proposed DLSTM model? 
It would be helpful to compare the performance of DLSTM with and without this specific modification to assess its impact on the model's effectiveness. Additionally, can you provide a more detailed analysis of the adapted Transformer-based models' architecture for the movement prediction task, highlighting the importance of each proposed change?\nModel interpretability: Can you elaborate on the interpretability of the proposed DLSTM model and the adapted Transformer-based models? How do these models compare with other LSTM-based and Transformer-based models in terms of interpretability? Providing insights into the factors driving the models' predictions and their relative importance could be valuable for understanding the models' decision-making process and enhancing trust in their applications for electronic trading.\nHyperparameter tuning and model selection: Can you provide more details on the hyperparameter tuning process and model selection criteria used for the various models in your experiments? Specifically, what approach was used for hyperparameter optimization (e.g., grid search, random search, Bayesian optimization), and which hyperparameters were tuned for each model? Additionally, how were the validation sets or cross-validation techniques employed in the model selection process?\nRobustness to market conditions: Have you considered evaluating the performance of the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models under different market conditions, such as periods of high volatility, market crashes, or significant news events? Demonstrating the models' ability to generalize and adapt to various market scenarios could provide a more comprehensive assessment of their robustness and practical applicability in electronic trading." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The study offers a novel perspective on the application of LSTM-based and Transformer-based models in financial time series forecasting, specifically in the context of electronic trading using high-frequency LOB data. The authors introduce a new LSTM-based model, DLSTM, which creatively combines LSTM with a time series decomposition approach inspired by the Autoformer architecture. This innovative integration of existing ideas allows DLSTM to outperform other models in the mid-price movement prediction task.\nQuality: The paper demonstrates a high level of quality in its experimental design and analysis. The authors conduct a comprehensive comparative study across three prediction tasks (mid-price prediction, mid-price difference prediction, and mid-price movement prediction), using a diverse range of LSTM-based and Transformer-based models. The experiments are well-structured, and the results are thoroughly analyzed, providing valuable insights into the performance of different models in each task.\nClarity: The paper is well-written and easy to follow. The authors provide clear explanations of the problem formulation, the proposed DLSTM model, and the experimental setup. The use of tables and figures enhances the clarity of the results, making it easy for readers to compare the performance of different models across various metrics and prediction horizons.\nSignificance: The findings of this study have significant implications for the application of deep learning models in financial time series forecasting, particularly in the context of electronic trading. 
The authors demonstrate that while Transformer-based models may excel in certain aspects of mid-price prediction, LSTM-based models, especially the proposed DLSTM, exhibit superior and more consistent performance in tasks such as mid-price difference prediction and mid-price movement prediction. The incorporation of trading simulations with and without transaction costs further highlights the practical significance of the proposed DLSTM model for real-world trading scenarios.\n\nMoreover, the paper's adaptation of existing Transformer-based models' architecture to better suit the demands of the movement prediction task showcases the potential for further improvements in this domain. By incorporating both past and projected mid-price data, followed by a linear layer and softmax activation, the authors demonstrate a creative approach to enhancing the performance of Transformer-based models in financial time series forecasting.\nIn summary, the paper's originality, quality, clarity, and significance make it a valuable contribution to the field of financial time series forecasting using deep learning models, offering new insights and directions for future research in this domain." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper conducts a comparative study between LSTM-based and Transformer-based models for financial time series forecasting, specifically in the context of electronic trading using high-frequency limit order book (LOB) data. 
The authors investigate the performance of these models across three prediction tasks: mid-price prediction, mid-price difference prediction, and mid-price movement prediction.\n\nFor the mid-price prediction task, the study finds that Transformer-based models like FEDformer and Autoformer achieve lower prediction errors than LSTM-based models. However, the authors note that the practical utility of these results for high-frequency trading is limited due to insufficient prediction quality.\n\nIn the mid-price difference prediction task, LSTM-based models demonstrate superior performance and robustness compared to Transformer-based models. The canonical LSTM achieves the highest R^2 of around 11.5% within about 10 prediction steps, while state-of-the-art Transformer models struggle to effectively process difference sequences.\n\nThe paper's main contribution lies in the mid-price movement prediction task, where the authors introduce a novel LSTM-based model called DLSTM. This model integrates LSTM with a time series decomposition approach inspired by the Autoformer architecture. DLSTM significantly outperforms all other models in classification metrics and proves its effectiveness in trading simulations, particularly when transaction costs are considered.\n\nAdditionally, the authors adapt the architecture of existing Transformer-based models to better suit the demands of the movement prediction task. They incorporate both past and projected mid-price data, followed by a linear layer and softmax activation, to determine price movements.\n\nOverall, the study highlights that while Transformer-based models may excel in certain aspects of mid-price prediction, LSTM-based models, particularly the proposed DLSTM, demonstrate consistent superiority and practicality in financial time series prediction for electronic trading." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the paper presents valuable insights and contributions, there are a few areas that could be improved or require further clarification:\n\nLimited dataset diversity: The experiments in this study are conducted using LOB data from a single cryptocurrency pair (BTC-USDT or ETH-USDT) on one exchange (Binance). To demonstrate the generalizability of the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models, it would be beneficial to include a wider range of financial instruments, such as stocks, forex, or other cryptocurrencies, as well as data from multiple exchanges. This would strengthen the paper's conclusions and provide a more comprehensive assessment of the models' performance across diverse financial time series.\nLack of ablation studies: While the paper introduces the novel DLSTM model, which integrates LSTM with a time series decomposition approach, there is a lack of ablation studies to investigate the individual contributions of each component. For example, the authors could compare the performance of DLSTM with and without the time series decomposition to assess the impact of this specific modification. Additionally, a more detailed analysis of the adapted Transformer-based models' architecture for the movement prediction task would provide valuable insights into the effectiveness of the proposed changes.\nLimited discussion on model interpretability: Interpretability is a crucial aspect of financial time series forecasting models, especially in the context of electronic trading, where understanding the factors driving the model's predictions is essential for risk management and decision-making. 
The paper could benefit from a more in-depth discussion on the interpretability of the proposed DLSTM model and the adapted Transformer-based models, as well as a comparison with the interpretability of other LSTM-based and Transformer-based models.\nHyperparameter tuning and model selection: The paper does not provide a detailed description of the hyperparameter tuning process and model selection criteria for the various models used in the experiments. It is essential to discuss the approach used for hyperparameter optimization, such as grid search, random search, or Bayesian optimization, and the specific hyperparameters tuned for each model. Additionally, the authors could provide more information on the model selection process, such as the use of validation sets or cross-validation techniques.\nRobustness to market conditions: The experiments in this study are conducted using LOB data from a specific time period (e.g., July 2022). To demonstrate the robustness of the proposed DLSTM model and the comparative analysis between LSTM-based and Transformer-based models, it would be valuable to evaluate the models' performance under different market conditions, such as periods of high volatility, market crashes, or significant news events. This would provide a more comprehensive assessment of the models' ability to generalize and adapt to various market scenarios.\n\nAddressing these weaknesses would further strengthen the paper's contributions and provide a more comprehensive and robust analysis of the proposed DLSTM model and the comparative study between LSTM-based and Transformer-based models in financial time series forecasting for electronic trading." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What specific modifications were made to the Transformer architecture to adapt it to financial prediction tasks?\n\nCan the authors elaborate on the metrics used to evaluate the models' performance? What criteria were significant in determining the practical utility of the models?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Relevant Application: The use of LSTM and Transformer models for financial predictions on LOB data is timely and relevant given the growing interest in high-frequency trading and predictive models in finance.\n\nComparative Scope: The study covers multiple models and tasks, providing a broad comparison between LSTM- and Transformer-based architectures on real-world financial data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the use of Transformer and LSTM-based models for financial time series forecasting tasks using high-frequency limit order book (LOB) data. A new LSTM-based model, DLSTM, is proposed alongside a modified Transformer architecture tailored for financial predictions. 
The study compares these models across three tasks: mid-price prediction, mid-price difference prediction, and mid-price movement prediction. Results suggest that Transformer-based models offer only marginal improvements in specific tasks, while LSTM models, particularly DLSTM, are more reliable in predicting mid-price differences and movements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unconvincing Novelty: The paper lacks substantial novelty. The DLSTM model is essentially a combination of existing methods, such as time series decomposition and LSTM layers, without a clear innovation. Similarly, the Transformer modifications are incremental and do not provide a compelling improvement. As a result, the contributions seem incremental and insufficiently distinct from existing work in financial time series forecasting.\n\nInterpretability Issues: The added complexity of Transformer-based models raises interpretability concerns, especially given the unclear benefit over simpler LSTM-based models. Without a more interpretable mechanism or explanation for its performance gains, the model’s added complexity appears unnecessary.\n\nInsufficient Performance Gain for Complexity: The study demonstrates only marginal improvements from the proposed Transformer modifications over traditional LSTMs, particularly in mid-price prediction. Despite the significant computational complexity introduced by Transformer-based models, the improvements are minimal and do not convincingly justify their adoption for practical trading applications." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "If possible, include LOB data from the LOBSTER dataset, to increase the generalizability of the experiment. If possible, include latest transformer based model (e.g. iTransformer, PatchTST). Recommend to use benchmarking frameworks such as LOBFrame or LOBCAST in the experimental design to ensure that the results can be more comparable to existing studies. A more detailed discussion of the specific differences and advantages of DLSTM over other temporal decomposition methods (e.g., DLinear) could be added. could also include some ablation studies. include code for reproducibility." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Even relatively simple LSTM models perform well in financial time series forecasting tasks, compared with transformer-based model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper compares the performance of LSTM and Transformer models in financial time series forecasting (limit order book data). They compared with FEDformer, Autoformer, Informer, Reformer, Transformer and LSTM. 
The main results show that the Transformer has a slight advantage in predicting absolute price series, but the LSTM model performs more consistently and accurately in predicting price changes and price movements. In addition, the paper introduces DLSTM, inspired by DLinear and Autoformer." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing quality of the paper is low; in particular, the citation format is not uniform, and some of the citations are not standardized in formatting and arrangement. \n2. The experimental setup lacks comparison with the frameworks and standards widely used in the current research field and fails to demonstrate the advantages of the selected model. For example, the authors failed to cite and use the latest limit order book (LOB) benchmark frameworks, such as LOBFrame (https://github.com/FinancialComputingUCL/LOBFrame) and LOBCAST (https://arxiv.org/abs/2308.01915), both of which are open-source frameworks currently widely used for limit order book forecasting. In addition, the authors did not include some of the latest Transformer-based models (e.g., iTransformer and PatchTST), which have demonstrated advantages in terms of performance and efficiency in time series forecasting. Comparing these latest models would make the experimental results more convincing and practical.\n3. The experimental data used in this paper is limit order book data from three cryptocurrencies, which, although suitable for high-frequency forecasting tests, is not representative of the financial market, and the volatility and noise characteristics of the cryptocurrency market are quite different from those of traditional financial markets. Data from LOBSTER (https://lobsterdata.com/) are more commonly and widely used in the current literature."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to questions to be addressed, the weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The structure and logic of the paper is well organized. \n\nThe experimental setup, description, and analysis are clearly stated with sufficient detail." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors conduct a comparative analysis of various LSTM-based and Transformer-based models for multiple financial prediction tasks using high-frequency limit order book data. They introduce a novel LSTM-based model called DLSTM and a newly designed Transformer-based model specifically tailored for financial predictions. Their results reveal that Transformer-based models offer a slight advantage in predicting absolute price sequences. However, LSTM-based models show superior and more consistent performance in predicting differential sequences, such as price differences and movements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The authors compare Transformers and LSTMs, concluding that LSTMs have advantages in multiple electronic trading tasks. However, the selection of Transformer-based models is limited to earlier studies (prior to 2023) and does not include recent state-of-the-art (SOTA) works, such as those mentioned in references [1], [2], and [3]. Notably, Liu et al. [2] claim significant improvements on similar tasks. Excluding these recent studies makes it premature to conclude that Transformer-based models underperform compared to LSTMs. Additionally, there is insufficient evidence to assert that the authors' proposed DLSTM model is the optimal choice for this application. Could you please include comparisons with some of these SOTA results to more robustly justify the conclusion?\n\n[1] Garza, A., Challu, C., & Mergenthaler-Canseco, M. (2023). TimeGPT-1. arXiv preprint arXiv:2310.03589.\n\n[2] Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2023). iTransformer: Inverted transformers are effective for time series forecasting. arXiv preprint arXiv:2310.06625.\n\n[3] Das, A., Kong, W., Sen, R., & Zhou, Y. (2023). A decoder-only foundation model for time-series forecasting. arXiv preprint arXiv:2310.10688.\n\n2. The authors' conclusion lacks novelty and largely aligns with the findings and conclusions of Zeng et al. [4]. It appears to apply established approaches and conclusions to domain-specific practices. While retaining empirical relevance, the study does not offer methodological breakthroughs.\n\n[4] Zeng, A., Chen, M., Zhang, L., & Xu, Q. (2023, June). Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, No. 9, pp. 11121-11128).\n\n3. The experimental setup could be made more representative by incorporating additional metrics such as Mean Absolute Scaled Error and Relative Mean Absolute Error."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.Given the limited dataset used and the lack of detailed experimental information (settings of baselines), I am very concerned about the reliability of this paper's conclusions. How would you address or demonstrate the robustness of your findings under these limitations?\n\n2.How do you explain the significant differences in experimental results with and without transaction costs? What factors contribute to this discrepancy?\n\n3.What are the specific advantages of your time series decomposition method compared to other decomposition approaches, and why do these advantages arise?\n\n4.Other questions can refer to the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "(1) This research presents findings that compare the performance of two types of models.\n(2) It successfully highlights the weaknesses in current measurement metrics.\n(3) Interesting task definition." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This research examines the performance differences between Transformer-based models and LSTMs across three cryptocurrency limit order book data prediction tasks. 
It also introduces DLSTM, an LSTM-based model, and a Transformer-based model redesigned for financial forecasting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Baseline Selection Rationale: The paper does not clearly explain why specific Transformer and LSTM variants, such as Autoformer and FEDformer, were chosen in the comparison. It remains unclear if these variants have unique advantages for financial time series forecasting. Providing additional theoretical support or rationale for model selection would enhance the scientific basis of this choice.\n\n2. Data Risk: The study only tests on a single asset (BTC-USDT), lacking a broader dataset. This limited scope may mean the model’s performance does not generalize well to other financial data. Testing on a single asset is insufficient to comprehensively assess the model’s generalizability.\n\n3. Lack of Experimental Details: The paper lacks adequate details on the experimental setup, especially regarding hyperparameter settings and baseline model architectures. This omission makes replication challenging and affects the reliability of the results. Sufficient information is not provided to ensure a fair comparison among baseline models.\n\n4. Unclear Result Interpretation: The paper does not adequately explain the significant differences in performance between experiments with and without transaction costs. Lacking theoretical support or data analysis, it's hard for me to understand the causes behind these variations under different settings.\n\n5. Limited Community Contribution: Time series decomposition, used in this study, appears to be a common approach, closely resembling classical time series decomposition methods.
It is unclear how this study provides any specific advantage over the standard decomposition methods.\n\n6. Although the paper points out shortcomings in MSE and MAE metrics, it fails to propose a robust method to address these deficiencies.\n\n7. Some capitalization inconsistencies, e.g., in line 034: Self-attention mechanism." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Various LSTM-based and Transformer-based models are compared on multiple financial prediction tasks based on high-frequency limit order book data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024transformers,\ntitle={Transformers versus {LSTM}s for electronic trading},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2L1OxhQCwS},\nnote={under review}\n}" }, "abstract": { "value": "The rapid advancement of artificial intelligence has seen widespread application of long short-term memory (LSTM), a type of recurrent neural network (RNN), in time series forecasting. Despite the success of Transformers in natural language processing (NLP), which prompted interest in their efficacy for time series prediction, their application in financial time series forecasting is less explored compared to the dominant LSTM models. This study investigates whether Transformer-based models can outperform LSTMs in financial time series forecasting. It involves a comparative analysis of various LSTM-based and Transformer-based models on multiple financial prediction tasks using high-frequency limit order book data. A novel LSTM-based model named DLSTM is introduced alongside a newly designed Transformer-based model tailored for financial predictions.
The findings indicate that Transformer-based models exhibit only a marginal advantage in predicting absolute price sequences, whereas LSTM-based models demonstrate superior and more consistent performance in predicting differential sequences such as price differences and movements." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "transformer", "LSTM", "electronic trading" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/06f4232517e3c80aef7d6c683719114e1f037413.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Transformers versus LSTMs for electronic trading" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2L4PTJO8VQ
Descent with Misaligned Gradients and Applications to Hidden Convexity
main
Active
optimization;gradient descent;hidden convexity
optimization
6;6;6;8
3;3;2;4
3;3;3;3
3;2;3;3
3;2;3;3
6.5
3
3
2.75
2.75
0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is very well written and easy to follow. Moreover, the mathematical analysis is sound and clear as far I have checked. The idea of misaligned stochastic vectors is quite intuitive and as far as my knowledge goes it paves the way for dealing with a useful practical methodology for structured biased stochastic gradients." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on the case where the stochastic oracle produces feedback which is not necessarily unbiased. More precisely, it introduces the notion of misaligned stochastic gradients in order to capture the lack unbiasedness in several practical scenarios. To that end, the authors test their theoretical machinery for the optimization problems with hidden convexity (also studied in Sakos 2024 and references therein) and provide an algorithmic method which exhibit $\\mathcal{O}(\\varepsilon^{-3})$ iteration complexity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Concerning this paper I have two main concerns/questions:\n\n1. 
The almost sure boundedness of the biased gradients seems to be a quite restrictive statistical assumption. To my knowledge, this type of assumption is usually used in methods which are run with AdaGrad-type step-sizes (see, for example, Levy 2017). Thus, my question is two-fold: does this statistical assumption hold in practice, and secondly, do the authors believe that it is an artefact of the analysis or of the method, and how might it be overcome?\n\n2. The paper lacks a numerical comparison with other methods which consider biased gradients, like the Stich 2020 paper. My question concerns the fact that the compression scheme presented in the said paper seems to cover the case of a \"relative bias\" (an analogy to Polyak's relative noise) in the sense that the bias vanishes when we approach a solution. To that end, some simple calculations may show that under this condition the second assumption in oracle & assumptions may be recovered. So, I think that a more thorough discussion is needed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Does the analysis in Section 3 have any connection with the analysis seen when using self-concordant barriers? The assumption that two matrices $A_t$ and $A_{t+1}$ do not change much is quite similar to saying that two successive Hessians do not (which is essentially what self-concordance captures).
If the authors believe there could be a connection to this, it would be useful to add that to the paper and add pointers to the literature on interior-point methods, where this notion is used; if not, then it would still help to clarify why it differs. \n\n2. The assumption in Section 4 that the inner product of the true gradient and expected gradient (from the oracle) is lower bounded by the square of the true gradient norm is identical to that in Beznosikov et al (as the authors themselves note). Can the authors explain what exactly they do differently to improve the $\\epsilon^{-4}$ rate to $\\epsilon^{-3}$? Could they point to a specific step in their proof where they use this inner product assumption in a better manner?\n\n3. There was a recent paper https://arxiv.org/abs/2304.08596 by Shu, Ramachandran, and Wang, which also talks about hidden convexity. I think it would be useful to cite the paper if the way the phrase \"hidden convexity\" is used is the same. If not, it would be helpful to clarify the differences." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think it's a well-motivated paper with a coherent set of results. I like that the analysis in Section 3, though, simple, is complete and step-by-step. The authors are also quite honest about differences with prior work, though they could do a better job explaining why they do better than existing work in similar assumptions (see Questions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies oracle-based optimization in three settings. First, when the oracle returns gradients that are misaligned with the true gradients in a specific manner: the expectation of the returned gradient is positively correlated with the true gradient (in terms of the inner product). 
Second, for more specific applications, they strengthen this assumption and require that the lower bound be not just nonzero but at least the squared norm of the true gradient. Third, for their setting of hidden convexity, they use the standard unbiased estimator assumption. \n\nThe paper provides improved rates of convergence under all three settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My only complaint is that the paper's introduction suggests that the only assumption made is that the inner product of the true gradient with the expected gradient provided by the oracle is positive: however, this seems to hold only in Section 3. In Section 4, this is strengthened to say that the lower bound is the squared norm of the true gradient (the same assumption as in Beznosikov et al), and in Section 5, it's further strengthened to be simply an unbiased estimator. Is my understanding accurate? If so, the \"misaligned\" description used throughout the introduction applies only to Section 3, and for the other results, there already exists standard terminology for those assumptions, so assigning them a new name wouldn't be the right thing to do. \n\nMy recommendation to the authors is to please clarify all the assumptions (for each of the different settings studied) in the introduction, so as to avoid any confusion. \n\nFurther, it would be useful to have a better understanding of what specific difference in the analyses in Section 4 leads to the improved rates as compared to existing work under this assumption (see Questions).
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "-- All the convergence results are presented in expectation. I’m wondering how hard it is to obtain ``with high probability’’ performance guarantee? \n\n-- In line 113, ``note that this is equivalent to the condition that …”, this seems to require that f is differentiable. Otherwise, the gradient of f may not exist, and instead the bound holds for all subgradients." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is overall well written, with clearly presented setups, algorithms, and performance. In addition, the correlated stochastic oracle studied, as pointed out on page 2, might have broad applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies stochastic convex optimization where the stochastic gradient oracle is biased but correlated with the true gradient. The proposed algorithms achieve the following performances: for Lipschitz, convex objectives and slowly varying bias, the rate is O(N^{-1/2}); for Lipschitz, smooth convex objectives and general correlated stochastic gradient oracle, the rate is O(N^{-1/3}). 
The results are applied to problems with hidden convexity, achieving a rate of O(N^{-1/3})." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-- The paper is closely related to the stochastic optimization literature. Although the authors have cited many relevant works, the exact known results are missing in this paper. It might help the readers better appreciate the significance of the results by providing more details and/or comparisons with existing setups and known upper/lower bounds on the convergence rate.\n\n-- The assumptions of each theorem are stated at the beginning of each corresponding section (informally). It might be better to present them more formally, either as Assumption 1/2/3, or stated directly in the theorems. \n\n-- In terms of significance, it is unclear how tight the bounds are. Would it be possible to derive some lower bounds from known results for other related problems? This would greatly help the readers appreciate the significance of the results. In addition, it seems that the analysis is relatively standard. Could the authors provide more comparisons with existing proofs for stochastic convex optimization, or related problems/setups?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can the authors explain the intuition behind the update in line 6 in algorithm 2?
It seems like you just want to consider the update in the orthogonal direction but I can't quite understand why? Is it simply to reduce the norm of the update (and not considering the update along $x\\_t$ helps) or is there something fundamental that is going on there?\n\n2. Does strong-convexity help for algorithm 2 and 3? In other words, if we are given the additional information of strong convexity, how much does that help improve the error? Particularly for alg 2, if it does not help then what exactly from the analysis in Demidovich et al, 2023 does not work out in this case?\n\n3. I am not sure if this will work, but might be worth a try: If I understand correctly, the term $f(x_t) - f(x^{\\star})$ in line 413 cancels out with the term $-\\frac{\\eta\\_t \\alpha \\| g\\_t \\| }{3}$ in line 416. I think with an appropriate change of constants, we can retain a fraction of the negative term in line 416 and carry forward it to the equation in line 421. Now, if $\\|g\\_t\\| \\leq \\frac{C}{\\sqrt{t}}$ for some $C > 0$, then we are at a point with small gradient. Otherwise, it will cancel out the $\\frac{1}{\\sqrt{B_t}}$ term in the equation in line 421 with $B_t = t + 1 + k$. This might help you achieve optimal error rate. Of course this needs to be checked but it might be helpful to address the sub-optimality gap." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I like the paper overall. It uses simple but interesting additions to existing strategies for optimization to design the algorithm with biased gradient estimates which often yield optimal performance. I think the results are sufficiently novel and interesting and improve upon the best known results so far." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers the problem of first-order stochastic optimization where the learner can only access a stochastic gradient whose expectation is only guaranteed to be correlated with (but not equal to) the true gradient at the queried point. The authors consider three different settings commonly encountered in machine learning problems where the learner can only access biased gradients. For each of the three settings, they propose a new algorithm and provide its analysis." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I don't see any glaring weakness in the paper but I have some questions listed in the next section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024descent,\ntitle={Descent with Misaligned Gradients and Applications to Hidden Convexity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2L4PTJO8VQ},\nnote={under review}\n}" }, "abstract": { "value": "We consider the problem of minimizing a convex objective given access to an oracle that outputs \"misaligned\" stochastic gradients, where the expected value of the output is guaranteed to be correlated with, but not necessarily equal to the true gradient of the objective. In the case where the misalignment (or bias) of the oracle changes slowly, we obtain an optimization algorithm that achieves the optimum iteration complexity of $\\tilde O(\\epsilon^{-2})$; for the more general case where the changes need not be slow, we obtain an algorithm with $\\tilde O(\\epsilon^{-3})$ iteration complexity. As an application of our framework, we consider optimization problems with a \"hidden convexity\" property, and obtain an algorithm with $O(\\epsilon^{-3})$ iteration complexity." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "optimization", "gradient descent", "hidden convexity" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2deb0563dac383dba496d34721a09eaa47660267.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Descent with Misaligned Gradients and Applications to Hidden Convexity" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2L7KQ4qbHi
Concept forgetting via label annealing
main
Active
Concept forgetting;Privacy;Bias;Computer Vision (CV)
alignment, fairness, safety, privacy, and societal considerations
1;3;3;8
3;3;4;4
2;1;2;3
2;2;2;3
1;3;2;3
3.75
3.5
2
2.25
2.25
0.676716
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- I have doubts about whether the algorithm has achieved a good experience effect. Firstly, it is because of the lack of enough competitors. Secondly, it is about the trade-off between concept violation and accuracy: if a concept is forgotten, the network should theoretically achieve better performance on other concepts.\n- Have you considered the trade-offs between increasing the number of iterations (E) and maintaining model accuracy?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper proposes a novel and interesting problem referred to as concept forgetting. The task is set to forget a specific undesired concept without degrading the general ability. It is similar to the opposite counterpart of catastrophic forgetting but has not been well studied.\n- The coherent text and the smooth transitions strengthened the readability of this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To enhance the safety and responsibility of machine learning, this paper introduces a new task, concept forgetting. 
To achieve the goal of forgetting specific concepts while retaining the general ability of the original model, the authors develop an iterative two-stage algorithm. The core idea of the algorithm is to ensure zero concept-violation on the newly created dataset by redistribution and relabeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It’s difficult to understand the explanation of Algorithm 1, e.g., in lines 311 to 315.\n- As shown in Table 1, there is still an obvious reduction in test accuracy. I recommend more analysis of the reasons." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Here are my concerns:\n- The differences between concept forgetting and machine unlearning are mentioned at the end of section 2. The authors should clarify these differences much earlier, in the introduction.\n- Regarding definition 1: does 'c' represent a class label or a feature?\n- Regarding the LAN algorithm: Why do you need to assign pseudo-labels? How do you deal with errors in pseudo-label assignment? Why not just remove the classifier head corresponding to the removed concept?\n- The problem of concept forgetting lies not only in retraining the classifier. The knowledge associated with it is implicitly embedded into the network's weights. 
How do you remove the information related to the concept being forgotten from the network's weights? I have not seen any discussion about this. If you retrain the network with the remaining data (after extracting the concept to forget), then this solution is trivial. What if the original data (initially used to train the network) is no longer available?\n- Section 5.5: What does multi-level concept forgetting mean? Do you assume data is multi-labeled?\n- In the experimental results, compare your approach against some methods from the current state of the art." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper addresses a very relevant topic nowadays related to data privacy, which is represented by machine unlearning\n- The paper presents a novel approach for concept forgetting in deep neural networks\n- The related work covers most of the relevant papers in the field" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel approach for concept forgetting in deep neural networks. For this purpose, they introduce a two-stage iterative algorithm called Label Annealing (LAN). In the first stage, pseudo-labels are assigned to the samples by annealing or redistributing the original labels based on the current iteration’s model predictions. In the second stage, the model is fine-tuned on the dataset with pseudo-labels. They also introduce a novel metric called 'concept violation' that measures how much the model forgets a specific concept. The proposed algorithm has been validated across various models and datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- the paper is difficult to read, the clarity of both text and figures should be significantly improved\n- the experimental validation is limited and not convincing. The authors compare their approach against 3 baselines, and none of them is related with concept forgetting" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "(Clear problem definition)\nCan the author explain the purpose of the algorithm with a real-world example? I did not intuitively grasp the goal of concept forgetting. For instance, I am curious about a plausible purpose, such as removing privacy-sensitive information.\nFurthermore, the issue I mentioned in the weaknesses section, where the optimal solution for concept forgetting changes if the entire dataset changes, indicates that concepts may not be fully removed when a larger, pristine global dataset exists beyond the given dataset. I am curious about the author's assumptions regarding the entire dataset in this context.\n\n(Justification of the measure)\nAdditionally, while concept violation appears to be a reasonable measure, it does not necessarily reflect whether concept forgetting has truly been achieved. 
Cross-entropy loss is a good measure for classification tasks, but for models trained with techniques like label smoothing, the loss can increase independently of accuracy. Similarly, I believe that concept violation cannot be considered a perfect measure. Since concept violation is a measure introduced by the author, it requires thorough analysis from multiple perspectives; however, in the submitted paper, it is only used as a measure without further analysis. It seems necessary to include a qualitative analysis in the experiments demonstrating that low concept violation indeed addresses the intended purpose of concept forgetting. In addition to the analysis I suggested, any results that can further demonstrate the utility and significance of your concept violation measure would be welcome.\n\n(Representation)\nThe methods for the author’s algorithm can all be represented by figures and pseudo code. This implies that Section 4.1 is somewhat redundant. Adding insights into each step of the algorithm in the main text would be beneficial. For example, is the sorting in line 4 truly meaningful? What is the reason for selecting the next label deterministically in line 9? What is an adequate range for E? Addressing questions like these would enable a deeper understanding of the author’s algorithm.\nLastly, the author’s theoretical analysis does not provide much help in interpreting the experimental results. Is it possible to define a tighter boundary under specific conditions?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The author has proposed an intriguing problem.\nIf concept forgetting is feasible, it may also be possible to remove unwanted information from a pre-trained model." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author proposes a new issue termed concept forgetting.\nThe author argues that, to forget a concept, the label proportions should be constant regardless of the concept.\nThe author proposes an approach in which, when the label distribution varies according to a specific attribute in a pre-trained model, this is directly adjusted before further training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First, the proposed problem appears to be an ill-posed problem.\nAccording to the author’s assertion, the entire dataset must be pristine.\nIf there is a concept not included in the dataset or if certain concepts are overrepresented, the optimal model for concept forgetting will be defined differently.\nIn fact, consider the example commonly addressed in debiased classification: in the dog and cat problem, dogs are often photographed outdoors, while cats are typically photographed indoors.\nIf additional outdoor photos are included, the label for indoor cats would need to be even more frequently replaced with that of dogs in the author’s algorithm.\n\nSecondly, despite the author’s algorithm being highly intuitive and straightforward, its characteristics are not well explained.\nThe author replaces explanations of the proposed method with figures and algorithms, which does not aid intuitive understanding.\nEven concept forgetting is not well explained beyond the measure defined as concept violation.\nAt the very least, it would be essential to verify whether the author’s method is beneficial when solving zero-shot classification tasks that align concepts in the trained model.\n\nLastly, in the theoretical analysis, the gap between the two terms in the inequality is substantial.\nFor the theoretical analysis to be meaningful, this gap needs to be minimized; the current gap arises from using the maximum value of the loss.\nIn 
the case of cross-entropy loss, the bound is exceedingly large, and when multiplied by the concept violation values observed by the author in Table 1, the upper bound of the curated loss inevitably becomes significantly large.\nIn fact, it is challenging to identify a clear correlation between the concept violation values and the reduced accuracy in the experimental results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I am not sure I fully understand the experiments. Are examples in forgetting classes removed, and examples in the rest of the classes are used to train and test? \n2. I suppose the introduction example 'background' is good; I think in experiments, the authors should give the results of the example. Does the method only work with concepts that have labels? If so, this is a strong limitation to the proposed method." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The idea of the paper is good and important for research. \n2. 
The example in the introduction is also interesting that \"envision a CelebA (Liu et al., 2015) image classifier that heavily relies on background color as a distinguishing feature to classify different celebrities, limiting its ability to generalize effectively\"." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new approach to studying concept forgetting, which aims to remove some concepts from pre-trained models while preserving their performance. To achieve this goal, the authors propose an algorithm called Label ANnealing (LAN), which employs a two-stage process to align the distribution of pseudo-labels with the class distribution, as generated by the trained model's predictions. Experimental evaluations on four benchmark datasets – MNIST, CIFAR-10, miniImageNet, and CelebA – demonstrate that concept violation can be effectively mitigated." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Following the example provided in the introduction, I anticipated an improvement in performance after removing harmful features. Nevertheless, my findings contradict this expectation: despite claims of 'maintaining the model’s overall performance and generalization ability', I observed a significant drop in performance on all datasets, with a particularly notable 15% decrease on CelebA for the task 'Heavy makeup or not'. This discrepancy suggests that the authors should revisit their method to ensure it meets its stated objectives.\n2. The concept of 'concept violation' is not rigorous, as it only evaluates model outputs without considering the nuanced effects of concepts within decision-making processes. Even when results appear identical, it is uncertain whether a particular concept has been entirely eliminated or merely masked in some way.\n3. The algorithm Label ANnealing is simple." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024concept,\ntitle={Concept forgetting via label annealing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2L7KQ4qbHi},\nnote={under review}\n}" }, "abstract": { "value": "The effectiveness of current machine learning models relies on their ability to grasp diverse concepts present in datasets. However, biased and noisy data can inadvertently cause these models to be biased toward certain concepts, undermining their ability to generalize and provide utility. Consequently, modifying a trained model to forget these concepts becomes imperative for their responsible deployment. We refer to this problem as *concept forgetting*. Our goal is to develop techniques for forgetting specific undesired concepts from a pre-trained classification model's prediction. To achieve this goal, we present an algorithm called **L**abel **AN**nealing (**LAN**). This iterative algorithm employs a two-stage method for each iteration. In the first stage, pseudo-labels are assigned to the samples by annealing or redistributing the original labels based on the current iteration's model predictions of all samples in the dataset. During the second stage, the model is fine-tuned on the dataset with pseudo-labels. We illustrate the effectiveness of the proposed algorithms across various models and datasets. Our method reduces *concept violation*, a metric that measures how much the model forgets specific concepts, by about 85.35\\% on the MNIST dataset, 73.25\\% on the CIFAR-10 dataset, and 69.46\\% on the CelebA dataset while maintaining high model accuracy. 
Our implementation can be found at the following link: \\url{https://anonymous.4open.science/r/LAN-141B/}" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Concept forgetting", "Privacy", "Bias", "Computer Vision (CV)" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d7b091c0d070d43702b6fcbae45fc79db5bbe46a.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Concept forgetting via label annealing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2LHzKdb8Ao
Reducing Symmetry Mismatch Caused by Freely Placed Cameras in Robotic Learning
main
Active
Equivariance;Robotics
applications to robotics, autonomy, planning
3;3;3;5
4;4;4;4
3;3;3;3
1;1;2;2
3;3;2;3
3.5
4
3
1.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "7. Gripper Image: Does this formulation of having a gripper image generalize to a dextrous manipulation with a non-trivial gripper? Also, it would be better if the Fig 11 (from appendix) can be moved/integrated into the main paper. This is because the gripper representation is one of the crucial aspects of the proposed solution and having it in a visual form would make the methodology clearer to the reader.\n\n8. I would like to see an experiment with DrQ-v2 where image augmentation has shown significant sample-efficiency gains and am curious how that performs as compared to an explicit Equivariant policy. I believe the data-augmentations can be implemented in a straightforward manner within the SACfD codebase.\n\n9. Are the models in Fig 7(a) and 7(b) test-only models or are they trained on individual camera angles? If it's trained and tested separately -- I'm curious to see how Reproj equi or Persp. equi perform on testing on OOD camera viewpoints (i.e. train on one camera angle and test on the rest.)\n\n10. Are the class of Equivariant policies biased to the action space? Would the same set of architectures work for other action spaces that are common in robotic manipulation such as end-effector pose, joint velocities, joint angle positions etc?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This work addresses a very common problem prevailing in the robotic manipulation domain, i.e., lack of robustness of vision based policies to viewpoints.\n2. The proposed solutions are very simple (under the known camera extrinsics assumption, which is typically common in table-top robotic manipulation settings).\n3. The paper is generally well written and easy to understand in a single go." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses a key issue in equivariant neural networks for agent learning to decrease the gap between sideview camera observations, which perform sub-optimally when cameras view the scene from the side rather than directly above. The authors propose two simple preprocessing techniques to reduce this gap: \n\n1. For RGBD cameras, they reproject the image to a virtual top-down view, and \n2. For RGB cameras, they apply a perspective transformation to align the ground plane with the image plane. \n\nThrough experiments across multiple robotic manipulation tasks using both reinforcement learning and imitation learning, they demonstrate that these preprocessing steps significantly improve the performance of equivariant networks compared to using raw side-view images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "4. Related works: I believe a small discussion on point cloud models (in the context of Image reprojection) should also be included. Several works in the past few years have proposed using point clouds for RL / policy learning [1, 2] and shown robustness to viewpoints [3].\n\n5. Sample-efficiency of RGBD experiments: I don't particularly find a difference between *Point cloud equi* and *Reproj. 
equi* in Fig 5. and Table 1. What are the benefits of Reproj. Equi over point cloud equi?\n\n6. Sec 5.6 (Effects of camera angle) needs to also have the PointNet++ baseline (*point cloud equi*) for the RGB-D plots. Some works have suggested that point cloud RL policies are robust to viewpoint changes [3].\n---\n**References:**\n\n1. On the efficacy of 3d point cloud reinforcement learning, Zhan Ling et al., arXiv 2023\n2. Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning, Haoyi Zhu et al., NeurIPS D&B 2024.\n3. Point Cloud Models Improve Visual Robustness in Robotic Learners, Skand Peri et al., ICRA 2024\n\n---\n**Rationale for current rating**: Overall I believe this is a well written paper with clear contributions. However, I have particular questions regarding the baselines (points 5, 6, 8) and generalization (point 9) and based on that, I'm voting for a weak reject. However, this is *not* my final decision and I am willing to update my score based on other reviewers' comments and authors' rebuttal." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In the experimental part, the author compares many baselines (equivariant, non-equivariant, 2D, 3D), but does not clearly write out the specific structure of each baseline and the group on which their equivariance properties are defined.\n- All baseline methods in this paper are based on the same framework (SACfD). To demonstrate the effectiveness of this preprocessing approach in broader scenarios, I believe it would be beneficial to include comparisons with other state-of-the-art methods." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper defines a problem of \"symmetry mismatch\" from non-ideal camera placements in image based equivariant robotic learning. By applying reprojection and perspective transformations to side-view images, it extends the utility of equivariant learning in robotics, enabling its application in more realistic setups.\n- The authors provide a thorough and well-validated empirical analysis across diverse robotic tasks and modalities (RGB and RGBD), with clear comparisons to multiple baselines. This experimental rigor strongly supports the paper's claims about the effectiveness of the preprocessing techniques." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method to improve equivariant policy learning in robotic manipulation tasks where camera views are not ideal (e.g., side views instead of top-down). 
The authors present two preprocessing techniques:\n- Reprojection of RGBD images to approximate top-down views by generating point clouds and interpolating missing data.\n- Perspective transformation of RGB images to map the ground plane onto a top-down view.\n\nThese methods enhance performance across different learning tasks and camera angles without additional data or privileged information, making them adaptable to real-world setups. The experiments show improved policy learning outcomes in several robotic tasks by aligning image transformations with physical symmetries in the robot workspace." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The problem this paper attempts to address may not be a genuine issue. When handling tabletop robotic manipulation tasks and aiming to apply O(2)-equivariant policy learning algorithms, a fundamental assumption is the availability of top-view observations. If only side-view images are accessible, a more natural approach might be to consider non-equivariant policy learning algorithms instead.\n\n- The methods proposed in this paper lack originality. Both 3D reprojection and perspective transformation are well-established algorithms in the field of computer vision. This paper merely applies them to a specific scenario—converting side-view images of tabletop robotic manipulation scenes into top-view images—to facilitate the use of O(2)-equivariant policy networks. I view these techniques as pre-processing tricks rather than substantive innovations.\n\n- The formulation for 3D reprojection in this paper is not entirely realistic. To perform reprojection, RGBD information is required. However, if 3D data is available, it would be more straightforward to use equivariant policy networks based on 3D groups$^{[1,2]}$ (such as SO(3), SE(3), or SIM(3)). This would eliminate the need to address issues arising from mismatched camera viewpoints.\n\n[1] Yang, J., Cao, Z. 
A., Deng, C., Antonova, R., Song, S., & Bohg, J. (2024). Equibot: Sim (3)-equivariant diffusion policy for generalizable and data efficient learning. arXiv preprint arXiv:2407.01479.\n\n[2] Chen, Y., Tie, C., Wu, R., & Dong, H. (2024). EqvAfford: SE (3) Equivariance for Point-Level Affordance Learning. arXiv preprint arXiv:2408.01953." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How many equivariant methods could the proposed method benefit?\n- Would the proposed method also benefit general-purpose robot learning methods such as Diffusion Policy?\n- It seems that the compared point cloud baseline is using a single-view RGBD image. What if we have access to multi-view RGBD images? Consider the scenario in [1].\n- Would real-world experiments be conducted?\n\n\n[1] RiEMann: Near Real-Time SE (3)-Equivariant Robot Manipulation without Point Cloud Segmentation. Gao et al. CoRL'24." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The problem formulation is quite straightforward. 
The proposed method is simple, intuitive, and effective.\n- The discussion on Occluded Regions for RGBD images and Out-of-plane Distortion for RGB images makes the proposed method more practical, giving it the potential to be deployed in real-world settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitations of a certain type of equivariant policy learning in robotic manipulation tasks when using side-view camera perspectives, which cause symmetry mismatches that reduce performance. The authors propose a simple method to transform side-view images into top-down representations, enhancing the performance of equivariant methods. Its effectiveness is demonstrated on RGB and RGBD images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Though very simple and effective under the tested scenario, this paper seems more like a small pre-processing module specifically designed for a certain type of SO(2) RL and IL methods. How many equivariant methods could benefit from the proposed method? I would like the authors to discuss this question, and list as many papers as possible.\n- Lacking real-world experiments. I am concerned whether the proposed method would be effective as well in real-world settings. And since the proposed method is mainly designed to tackle the challenge when deploying cameras in the real world, I think real-world experiments are indispensable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Handling Extreme Occlusions: In the RGBD setting, how might more sophisticated inpainting or occlusion handling methods (e.g., learned inpainting) improve the performance gap with the oracle? Have the authors experimented with these techniques, and what were the results?\n\n2. Effectiveness in Real-World Scenarios: While the experiments are simulated, can the authors elaborate on the challenges and potential modifications required to apply these preprocessing steps in real-world robot learning tasks with physical cameras and hardware?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper offers practical preprocessing methods (RGBD reprojection and RGB perspective transformation) that are simple. These methods can be applied across various robotic learning tasks without additional training or modification of the robot setup.\n\n2. The proposed methods require only knowledge of the camera’s intrinsics and extrinsics, making them straightforward to implement without the need for privileged information. This makes the approach broadly applicable across robotic tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses a challenge in robotic learning, where freely placed cameras cause a mismatch between input image transformations and the inherent task symmetry in robotic manipulation environments. The authors propose two preprocessing methods: reprojection of RGBD images and perspective transformation for RGB images. 
These techniques transform side-view images into top-down views, thus aligning the image transformations with the task symmetry. This approach is shown to consistently improve performance in robotic manipulation tasks, particularly in reinforcement learning and imitation learning setups." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited Technical Contribution: The technical contribution of the paper is minimal. The methods of RGBD reprojection and RGB perspective transformation are well-established and mature techniques. The paper merely applies these existing methods to Equivariant Policy Learning without introducing any significant novel ideas. As a result, the work feels more like a technical report rather than a research paper offering new scientific insights.\n\n2. Lack of Real-World Experiments: The experiments are conducted only in six simple simulated environments, without any real-world validation. This limits the applicability and robustness of the proposed methods in practical scenarios, as real-world experiments are essential to demonstrate the effectiveness of the approach outside of controlled simulations.\n\n3. Performance Gap with Oracle: While the proposed methods reduce the performance gap with the oracle top-down view, they do not entirely close it. The occlusion of objects and grippers, especially in cluttered environments, remains an unsolved problem." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024reducing,\ntitle={Reducing Symmetry Mismatch Caused by Freely Placed Cameras in Robotic Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2LHzKdb8Ao},\nnote={under review}\n}" }, "abstract": { "value": "Equivariant policy learning has been shown to solve robotic manipulation tasks with minimal training or demonstration data. However, the effectiveness of equivariance depends on whether transformations of the scene align with simple transformations of the input data. This is true when the camera is in a top-down view, but in the common case where a camera views the robot workspace from the side, there is a symmetry mismatch, reducing model performance. We show that equivariant methods perform better when camera images are transformed to appear as top-down images. Our approach is simple to implement, works for RGB and RGBD images, and reliably improves performance across different view angles and learning algorithms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Equivariance", "Robotics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/557f8e7f27e42c5b8fa4a32df0e28d72280ab64b.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Reducing Symmetry Mismatch Caused by Freely Placed Cameras in Robotic Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2LOtSPmopq
Unsupervised Whole Object Discovery by Contextual Grouping with Repulsion
main
Active
Unsupervised Object Discovery;Unsupervised Whole Object Segmentation;Co-Segmentation;Normalized Cut;Attraction and Repulsion
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;5;6
3;4;3;2
2;2;3;3
2;2;2;3
3;3;3;3
4.75
3
2.5
2.25
3
-0.324443
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you please clearly emphasize the novelty part and distinguish this work from the existing literature?\n2. Could you please clarify how the comparisons drawn in the results are fair?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main motivation of the proposed method is to focus on distinctive parts of an object by increasing similarity between them and simultaneously focus on how dissimilar they are from their context in the image. The paper first identifies this problem in existing method and show that upon taking into account both similarity and dissimilarity, there is a possibility of performance improvement. Empirically for three different unsupervised tasks, the proposed method show improvements over existing methods in both single image setting and reference image setting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a solution of discovering and segmenting objects in unsupervised setting. 
Inspired by object feature similarity as well as feature dissimilarity, the paper proposes to utilize graph cuts that maximize similarity between object features while also maximizing dissimilarity between object and background features. Moreover, the paper shows performance gains for unsupervised object\ndiscovery, saliency detection, and unsupervised object detection" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The proposed idea of utilizing attraction and repulsion doesn’t seem to be novel. As the authors say in L194 “Given attraction A and repulsion R, we follow (Yu & Shi, 2001 ) and conduct. . . ”. The referenced paper proposes the same idea of utilizing attraction and repulsion to measure both the degree of attraction and the segregation of features. The difference seems to be the application of this on features obtained from self-supervised transformers instead of image features. Moreover, the segmentation method remains the same as before. The rest of the method clearly follows from (Wang et al, 2023).\n\nThere are also concerns regarding the reported quantitative results in Table 3. As mentioned in L257, the authors use a bilateral solver (BL) to refine the masks. However, when comparing with TokenCut (Wang et al, 2023), the results are taken without the bilateral solver; TokenCut+BL shows better performance by a significant margin when compared with CGR (the proposed method). Similarly, there is an inconsistency in Table 2: TokenCut+BL, which clearly outperforms CGR, is not reported.\n\nAnother minor concern in the paper is repetitive writing. There are multiple instances in the abstract and introduction where sentences are\nrepeated again and again, e.g. L19 and L50. Also, a few argumentative sentences in the paper are too long and complex, which hinders the information being conveyed. This should be improved for clear understanding of the paper." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My main concerns are about the training/evaluation process and parameter selection. Please refer to the weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe proposed CGR is simple and easy to understand. \n2.\tThe paper is well-written and organized, making the author's ideas easy to understand.\n3.\tThe authors validated CGR's performance on different segmentation benchmarks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the unsupervised object segmentation task. The authors proposed the Contextual Grouping with Repulsion method (CGR), which considers both the internal similarities (attraction) among different parts of an object and their common dissimilarities (repulsion) to the background. The authors formulate their pipeline using a weighted graph where nodes represent image patches and edges encode grouping cues, measured by both feature similarity (attraction) and dissimilarity (repulsion). The proposed approach extends TokenCut, which solely relies on internal similarities between different object parts for segmentation. 
The proposed method demonstrates superior performance across multiple unsupervised segmentation benchmarks, including unsupervised object discovery, saliency detection, and video object segmentation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThis paper lacks sufficient details about the training and evaluation process. Specifically, it does not explain how the train/validation/test sets were divided and which data subset was used in the training, hyperparameter selection, and final model evaluation.\n2.\tRegarding the repulsion weight, Figure 9 shows that when $\\omega$ fluctuates in the range of 0~0.25, the performance difference is not significant, which raises doubts about the effectiveness of the proposed method. Additionally, the author only conducted an ablation study of $\\omega$ on the ECSSD dataset for unsupervised saliency detection and then applied this parameter to all tasks and datasets. I suppose this pattern is not convincing enough. I'm not suggesting that the authors should conduct ablation studies for all tasks to determine the repulsion weight. Rather, I think it's tricky to set this parameter as a fixed value and apply it to different tasks and datasets. The authors should discuss whether this parameter could adapt automatically when facing different tasks and datasets.\n3.\tStill for the repulsion weight, comparing the experimental results, it appears that the authors used the same data subset for both hyperparameter selection (Figure 9) and results reporting (Table 3). In other words, the authors did not strictly distinguish between the validation set and test set in the experiments, which suggests that their proposed method might be overfitting to the target dataset." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The strengths are as follows:\n1. This formulate the idea \"an object of distinctive parts pops out as a whole, due not only to how similar they are to each other\" for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. \n2. This paper seek the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. \n3. 
This paper investigates this idea not only within a single image, but also across related images in a co-segmentation setting, where contextual grouping with repulsion between images brings additional power for discovering whole objects together" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "It is challenging to discover and segment whole objects from unlabeled images, as features unsupervisedly learned on images tend to focus on distinctive appearances (e.g., the face rather than the torso), and grouping by feature similarity could reveal only these representative parts, not the whole objects (e.g., the entire human body). The key insight of this paper is that an object of distinctive parts pops out as a whole, due not only to how similar they are to each other, but also to how different they are from their contexts within an image or across related images. The latter could be crucial for binding different parts into a coherent whole without preconception of\nobjects. This paper formulates this idea for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. This paper seeks the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. The simple method consistently outperforms the state-of-the-art on unsupervised object discovery, figure/ground saliency detection, and unsupervised video object segmentation benchmarks. In particular, it excels at discovering whole objects instead of salient parts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper presents a method for unsupervised segmentation/saliency detection/co-segmentation. The weaknesses are as follows:\n1. 
The time cost and memory consumption for the proposed method are not presented. This is quite necessary as the method uses a large model like ViT. \n2. What does the Self-Supervised Transformer indicate in Figure 5? How about the segmentation head? Does it use a pretrained segmentation model? It looks like it uses a mask from a segmentation model as gt to compute the loss, right?\n3. In Figure 8, the paper tries to compare the results with SAM2, but only a few visual results are provided; are there more systematic comparison results?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The method relies on the initial segmentation provided by the graph cut method. How does it recover from large-scale errors in this prior segmentation?\n2. How would this method be extended to multi-object segmentation?\n3. How does it work with self-similar objects, e.g., multiple instances of the same object in the image?\n4. How well does this method work on more complex datasets like YoutubeVIS etc.?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easy to follow. \nThe proposed approach is an extension of the prior TokenCut approach, which utilized spectral graph partitioning with an attraction cue. 
Here, the method is extended by incorporating both attraction and repulsion cues in the graph structure, as proposed in Yu and Shi, 2001. \nMoreover, the paper adapts the framework to video data by introducing multi-frame objectives." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method to perform unsupervised object discovery and segmentation in videos. It utilizes spectral graph partitioning with both feature similarity and dissimilarity cues to capture whole objects from unlabeled images. A graph segmentation model is trained using cross-entropy loss and contrastive loss. The quality of segmentation on image and video datasets appears to improve compared to previous approaches." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, given the main contributions, the paper appears to be incremental, with limited innovation beyond extending existing methods. \nThe method focuses on extracting a single dominant object in the scene, which wouldn't apply to complex scenes with many objects. \nThe results are primarily demonstrated on older datasets like VOC, COCO and DAVIS, and the comparisons focus largely on previous approaches, lacking evaluation against more recent research in the field, such as VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation (CVPR, 2024)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024unsupervised,\ntitle={Unsupervised Whole Object Discovery by Contextual Grouping with Repulsion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2LOtSPmopq},\nnote={under review}\n}" }, "abstract": { "value": "It is challenging to discover and segment whole objects from unlabeled images, as features unsupervisedly learned on images tend to focus on distinctive appearances (e.g., the face rather than the torso), and grouping by feature similarity could reveal only these representative parts, not the whole objects (e.g., the entire human body). Our key insight is that, an object of distinctive parts pops out as a whole, due not only to how similar they are to each other, but also to it how different they are from their contexts within an image or across related images. The latter could be crucial for binding different parts into a coherent whole without preconception of objects. We formulate our idea for unsupervised object segmentation in a spectral graph partitioning framework, where nodes are patches and edges are grouping cues between patches, measured by feature similarity for attraction, and by feature dissimilarity for repulsion. We seek the graph cuts that maximize within-group attraction and figure-ground repulsion while minimizing figure/ground attraction and within-group repulsion. Our simple method consistently outperforms the state-of-the-art on unsupervised object discovery, figure/ground saliency detection, and unsupervised video object segmentation benchmarks. In particular, it excels at discovering whole objects instead of salient parts." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Unsupervised Object Discovery", "Unsupervised Whole Object Segmentation", "Co-Segmentation", "Normalized Cut", "Attraction and Repulsion" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/48fdb0ab740cd8c21056e9963fea66483aca2322.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Unsupervised Whole Object Discovery by Contextual Grouping with Repulsion" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2MLvV7fvAz
Spectro-Riemannian Graph Neural Networks
main
Active
Graph representation learning;Spectral graph theory;Riemannian geometry;Non-Euclidean graph neural networks;Geometric deep learning
learning on graphs and other geometries & topologies
3;5;6
4;4;3
2;2;3
2;3;3
1;1;3
4.666667
3.666667
2.333333
2.666667
1.666667
-0.755929
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) What makes GPR special and why is this used as a backbone? I believe it is equivalent to any polynomial-filter spectral GNN. Why not e.g. use ChebNet, etc.?\n\n2) The (total) dimensions of the product manifolds of e.g. Table 5 and Table 2 seem to always be $d=48$. Yet in Table 13 of Appendix 6.6.4 it is indicated that $d = 64$ is the selected dimension of the product Manifold. Could the authors comment? Also, while I have seen Algorithm 1 in Appendix 7.6.5 (which seems to be used as an initial heuristic), it is still not clear to me how individual dimensions of product manifolds and respective factors are found/optimized. Is a grid search performed? If so what are the respective allowed values during this grid search? Could the authors clarify (again?)?\n\n3) In the ablation study on the impact of the CUSP Laplacian, performance using the CUSP Laplacian is compared to performance using the adjacency matrix instead. Could the authors repeat this ablation study comparing with the usual normalized and unnormalized graph Laplacians? This would shed light on whether the performance increase comes from using a Laplacian-type matrix vs. 
an adjacency type matrix, or indeed from the specific properties of the Cusp Laplacian.\n\n4) Is it possible to include and discuss some basic spectral characteristics of the CUSP Laplacian (beyond self-adjointness and positivity)? As is, this new matrix is introduced without too much intuition beyond the heat-flow heuristic. Can something e.g. be said about the maximal eigenvalue or first-non-trivial eigenvalue (in a Cheeger-type fashion) for example? I realize the present paper is not mainly theoretical but introduces an architecture. However, some additional theoretical foundation would indeed be nice." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well structured and mostly (please see next section) well written. A positive example is the explicit description of the limitations of previous work that are addressed in this paper (i.e. L1-L3 in the introduction).\n\n Including the notation table in Appendix 7.1 helps to keep track of the various mathematical concepts.\n\nThe idea underlying the introduced CUSP Laplacian of including curvature information into the Laplacian-matrix describing the graph is neat. \n\nFurthermore, the idea of taking into account locally varying curvature structures by allowing for factors of varying curvature in the product manifold is nice. \n\nThe performance of the proposed architecture in both the node-classification and link-prediction tasks on the considered datasets is solid. \n\nThe ablation study on the impact of the signature of the product manifold structure (c.f. Section 5.1) as well as the surrounding discussion is illuminating." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Spectro-Riemannian Graph Neural Networks (CUSP), a novel approach to graph representation learning that combines spectral and curvature information into a spectral graph neural network operating on a manifold arising as a product of Euclidean as well as hyperbolic and spherical spaces. Traditional graph neural networks (GNNs) often struggle with diverse graph topologies and varying graph geometries/curvatures. CUSP addresses these challenges by enabling to integrate (geometric) features from both negatively (hyperbolic) and positively (spherical) parts of a given graph. This allows for the creation of more natural node embeddings that better align with the underlying structure of real-world graphs.\nKey components of CUSP include (1) The Cusp Laplacian, which integrates Ollivier-Ricci curvature into the definition of a Laplacian-type matrix on the graph. (2) Cusp Filtering which allows for (high- and low-pass) filtering operatoions on a product manifold where each factor has constant positive, zero, or negative curvature. (3) Cusp Pooling A hierarchical attention mechanism that evaluates the importance of substructures with different curvature characteristics.\nIn the empirical evaluation CUSP's performance is investigated across eight datasets. Here CUSP achieves a good performance (in node classification and link prediction tasks), with sometimes substantial gains over baselines.\nThe research seems to be novel and highlights the potential that combining geometric and spectral information harbours for the design of GNNs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do have _some_ concerns regarding readability. An easy fix is the image quality in Figure 1. Here the axis labeling of the histograms is not readable if the paper is printed. 
Could the authors please fix this by including higher resolution images and/or using a larger font size for axis labeling. \n\nRegarding the paper itself, aside from some typos and grammatical errors that do not impede the flow of the paper too much, I had trouble understanding Section 4.3; especially from line 327 onward: The curvature kernel is defined twice: once using an inner product in an ambient space, and once as a translation invariant entity. I believe what the authors want to convey is that the former definition as utilized in the paper already leads to a translation invariant kernel. Is this correct?\n\nAlso, the significance of the Bochner-Minlos theorem is not immediately apparent to me. I only gained some intuition about this after reading the proof of Theorem 2 and Theorem 3 in the Appendix. Could the authors comment more explicitly on the significance of Bochner's theorem here? \n\nIt might also be good to explain (to some extent) and motivate the k-stereographic model in the main text. \nWhile I have some background in differential geometry, I had only ever come across standard stereographic projections.\nEspecially for readers from a pure CS/EE background, more details here might be useful, even if the model might be central to Riemannian GNNs. \n\nIn the same direction, it would also be good to explain a bit more the respective operations in Table 8 in Appendix 7.2.4 and how they are natural.\n\n\nFinally, in the experimental sections, the datasets that are being used are somewhat old and fairly small. I strongly encourage the authors to also consider newer and much larger datasets (e.g. using OGBN) and compare their approach with baselines there." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- The authors keep using the term “curvature signal” throughout the paper. What does this term mathematically mean?\n- In κ-right-matrix-multiplication, why do the authors choose to work with the projection between the manifold and tangent space at the origin? How does this choice affect the empirical results?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces a new GNN model that considers spectral information and the curvature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CUSP: integrating mixed-curvature representation learning with GPRGNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method is incremental. It’s hard to separate the new components in the paper and how they depend on the previous work. Also, there is no theoretical justification for integrating the spectral information in the frequency domain and the curvature in the spatial domain.\n- More details are needed for the heat diffusion equation and heat flow in Section 4.1. 
For example, as the cooling depends on the direction (from x to y), what is the role of direction in the heat diffusion equation? What is the definition of heat flow? What is the ORC distribution? What is the diffusion rate? How the Wasserstein-1 distance be interpreted as the resistance? Does the resistance depend on the direction?\n- It’s unclear what “$x \\sim y$ denotes adjacency between nodes x and y” means in Proposition 1 in the main text. Also, more details and motivation needed for how $\\\\bar{w}\\_{xy} = e^{\\\\frac{-1}{1-\\\\tilde{\\kappa}(x, y)}}$ is designed in main texts. Does this design have favorable properties? How does Cusp Laplacian operator act differently based on $\\\\bar{w}\\_{xy}$?\nThe explanation is only in the Appendix, but an overview or high-level explanation in the main texts can help in understanding the reason behind the proposed component.\n- The font size in Figure 3 is too small. It makes it very difficult to follow complicated figures.\n- More explanation is needed for how GPRGNN jointly optimizes node features and topological information extraction.\n- It’s unclear what does curvature domain $\\mathbb{K}_{\\mathbb{P}}$ represent in line 321-322. In addition, it’s unclear why in line 251, the product space is $\\\\mathbb{P}^{d\\_{\\\\mathcal{M}}}$ but in line 322 the authors are interested in the $d\\_{\\\\mathcal{C}}$-dimensional product space $\\mathbb{P}^{d\\_{\\\\mathcal{C}}}$.\n- Based on Eq. (4), $\\widetilde{\\kappa}\\in\\mathbb{K}$. However, it’s unclear what $\\widetilde{\\kappa}(x)$ represents in Eq.(5).\n- It’s unclear what the Riemannian projector is in line 347.\n- It’s unclear what M2 is in line 319. There is no M2 in the paper.\n- There is no theory in Theorem 2. 
It’s confusing to call it a theorem when the claims are only definitions.\n- It’s unclear why translation invariant is a desirable property in the functional curvature encoding in the proposed method.\n- It’s unclear how functional curvature encoding gives more attention to differently curved substructures.\n- It’s unclear which part of the implementation is adopted from Ni et al. (2019) in line 412.\n- It’s unclear what do the authors mean by they “heuristically determine the signature of our manifold $\\mathbb{P}$ (i.e. component manifolds) using the discrete ORC curvature of the input graph”. It’s unclear how many hyperbolic spaces and spherical spaces are considered.\n- It’s unclear what the hyperparameter L represents.\n- It’s unclear how many layers are considered in the experiments.\n- It’s unclear what the experimental configurations and hyperparameters of the competing methods are.\n- The spacing between the tables and the main texts in Section 5 is very tight and narrow.\n- The paper is missing important spectral GNNs: OptBasisGNN, ChebNetII, CayleyNet, APPNP, JacobiConv, and Specformer.\n- It’s unclear how L3 is resolved using the proposed method.\n- The paper needs more work on proofreading. For example:\n - the English style is not consistent. Sometimes the authors use “normalised”, but sometimes they use “normalized”.\n - The word “Laplacian” sometimes has the uppercase letter L, but sometimes it is lowercase l.\n - The tangent space notation is not consistent. \n - There is a comma in Eq.(15), but there is no sentence afterward.\n - The exponential map and logarithmic map are defined using boldface letters, but these notations are not consistent in the appendix\n - In line 366, the sentence is not finished. The punctuation in Eq. (6), Eq. (7), and Eq. 
(8) is missing.\n - The notation of Wasserstein-1 distance is not consistent in the main paper and appendix\n - The reference pointer is missing in line 405\n\n## Minor\n- Unclear what is $\\mathbf{W}$ in line 141\n- In line 160, missing $\\times \\ldots \\times $ in $\\mathbb{P}$\n- A formal definition of Wasserstein-1 distance is missing\n- Unclear what is $\\psi_{xy}$ in Appendix 7.3\n- In line 996, it’s unclear what is the element-wise product between a matrix $\\mathbf{L}$ and a scalar $e^{\\frac{-1}{1-\\tilde{\\\\kappa}(x, y)}}$\n- In Eq.(25), $\\mathbf{X}_{i:}$ is not defined\n- $\\delta$ in line 172 and $\\delta_{vs}$ in line 181 are not defined\n- Unclear why $\\omega_{d_f}$ in line 341, there is no $d_f$ in Eq. (4)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Do you use the same neural networks $f_\\theta$ in Line 263 for all component spaces?\n2. How is the Riemannian projector $g_\\theta$ in Line 347 defined?\n3. Could you clarify how Theorem 1 applies to CUSP? What insight does Theorem 1 provide?\n4. How does the runtime of your pipeline compare to other baselines?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(+++) **Novelty and Relevance**: The proposed method is new and of interest to the geometric machine learning community.\n\n(+++) **Strong Empirical Evaluation and Performance**: The method is thoroughly evaluated, demonstrating improved performance on downstream tasks over the considered baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces CUSP, a graph representation learning model that integrates graph discrete curvature with a geometric extension of generalized PageRank. The method is comprehensively evaluated, demonstrating strong performance against several baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several issues with the presentation that detract from the strengths. These concerns should be straightforward to address.\n\n(----) **Presentation**: The paper’s presentation is dense and, at times, unclear. Examples include:\n* **Figure 3**: The figure is very busy. While each individual component is informative and of high quality, combining them without clear visual separators makes the overall figure difficult to interpret.\n* **Section 4**: This section is similarly dense, as it combines mathematical background, theoretical motivations, and the presentation of the CUSP architecture. Perhaps this section could focus on the method and architecture, with theoretical discussions (e.g., of heat diffusion) moved to Preliminaries or a new dedicated section.\n* **Notation**:\n * The notation in Equations 2, 3, 5, 6, 7, and 8 is dense and may be difficult to parse for readers unfamiliar with geometric machine learning or the gyrovectorspace approach. 
(Also refer to \"Relation to Existing Literature\" below.) Adding equation annotations or using plain English function names (e.g., Enc for encoding) could improve readability.\n * \"ORC for nodes\" is defined in line 176 without introducing the notation $\\tilde{\\kappa}(x)$ which is then used, e.g., in Equation 5. (There is a notation table in the appendix, but it does not cross reference the definition.)\n* **Baseline Taxonomy**: The classification of baselines in Section 5 into \"spatial\" and \"Riemannian\" is inaccurate, as the Riemannian baselines are also spatial. \"Spatial-Euclidean\" and \"Spatial-Riemannian\" could be more accurate.\n\n(---) **Mathematical Motivation**: The justification for the Cusp Laplacian (Proposition 1) and Functional Curvature Encoding (Theorem 2) are more of rationales or motivations than rigorous proofs. For example, Proposition 1 motivates the Cusp Laplacian by introducing a modified resistance term in a heat flow equation. This would perhaps become clearer if presented as a definition, framed as, “If one assumes a resistance of the form …,” which would help the reader recognize the principles from which the Cusp Laplacian is derived.\n\n(--) **Relation to Existing Literature**: Generalizing pipelines from Euclidean to Riemannian spaces by replacing Euclidean transformations with Moebius operations is a well-established pattern in geometric machine learning. Portions of this work follow this pattern, such as adapting PageRank GNN to product manifolds (Section 4.2) and using Moebius operations in the functional curvature encoding (Section 4.3) and cusp pooling (Section 4.4). Early works in hyperbolic graph neural networks such as [1] introduced these operations with clear motivation, via, e.g., illustrations of log and exp maps between manifolds and their tangent space. 
Since then, these operations have also been more broadly understood and interpreted within the framework of gyrovector spaces, aligning with Ungar's original work cited in the paper. See, e.g., [2-8]. In this work, however, the geometric components of the model are not similarly well-motivated. Perhaps a brief motivation for the operations would be helpful.\n\n\n[1] Chami, I., Ying, Z., Ré, C., & Leskovec, J. (2019). Hyperbolic graph convolutional neural networks. NeurIPS.\n\n[2] Hatori, O. (2017). Examples and Applications of Generalized Gyrovector Spaces. Results in Mathematics.\n\n[3] Kim, S. (2016). Gyrovector Spaces on the Open Convex Cone of Positive Definite Matrices. Mathematics Interdisciplinary Research.\n\n[4] López, F., Pozzetti, B., Trettel, S., Strube, M., & Wienhard, A. (2021). Vector-valued distance and gyrocalculus on SPD matrices. NeurIPS.\n\n[5] Nguyen, X. S. (2022). The Gyro-structure of some matrix manifolds. NeurIPS.\n\n[6] Nguyen, X. S., & Yang, S. (2023). Building neural networks on matrix manifolds: A Gyrovector space approach. ICML.\n\n[7] Nguyen, X. S., Yang, S., & Histace, A. (2024). Matrix Manifold Neural Networks++. ICLR.\n\n[8] Zhao, W., Lopez, F., Riestenberg, J. M., Strube, M., Taha, D., & Trettel, S. (2023). Modeling graphs beyond hyperbolic: GNNs in SPD matrices. ECML PKDD." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel mixed-curvature spectral GNN that unifies both curvature (geometric) and spectral insights for learning graph representations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024spectroriemannian,\ntitle={Spectro-Riemannian Graph Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2MLvV7fvAz},\nnote={under review}\n}" }, "abstract": { "value": "Can integrating spectral and curvature signals unlock new potential in graph representation learning? 
Non-Euclidean geometries, particularly Riemannian manifolds such as hyperbolic (negative curvature) and spherical (positive curvature), offer powerful inductive biases for embedding complex graph structures like scale-free, hierarchical, and cyclic patterns. Meanwhile, spectral filtering excels at processing signal variations across graphs, making it effective in homophilic and heterophilic settings. Leveraging both can significantly enhance the learned representations. To this end, we propose Spectro-Riemannian Graph Neural Networks (CUSP) - the first graph representation learning paradigm that unifies both CUrvature (geometric) and SPectral insights. CUSP is a mixed-curvature spectral GNN that learns spectral filters to optimize node embeddings in products of constant curvature manifolds (hyperbolic, spherical, and Euclidean). Specifically, CUSP introduces three novel components: (a) Cusp Laplacian, an extension of the traditional graph Laplacian based on Ollivier-Ricci curvature, designed to capture the curvature signals better; (b) Cusp Filtering, which employs multiple Riemannian graph filters to obtain cues from various bands in the eigenspectrum; and (c) Cusp Pooling, a hierarchical attention mechanism combined with a curvature-based positional encoding to assess the relative importance of differently curved substructures in our graph. Empirical evaluation across eight homophilic and heterophilic datasets demonstrates the superiority of CUSP in node classification and link prediction tasks, with a gain of up to 5.3\\% over state-of-the-art models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph representation learning", "Spectral graph theory", "Riemannian geometry", "Non-Euclidean graph neural networks", "Geometric deep learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/069700cd732a93632116f1a7ea983ff79b4d0eca.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/80badced0324eb21f3c71288422e72dfb8bb2c91.zip" }, "title": { "value": "Spectro-Riemannian Graph Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2MqyCIxLSi
TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks
main
Active
Topological Deep Learning;Graph Neural Network;Graph Expansion;Combinatorial Complex;Cellular Complex
learning on graphs and other geometries & topologies
3;5;6;8
3;4;3;3
2;3;3;4
2;2;3;2
2;3;3;4
5.5
3.25
3
2.25
3
-0.160128
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could the authors use larger node-level datasets for experiments?\n2. What is the time complexity of the proposed GCCNs compared with CCNNs?\n3. The GNN models perform very different results in Figure 5. More analysis is needed." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper proposes a new method to generalize any neural network to TDL architectures. \n2. The proposed GCCNs formally generalize CCNNs and have the same expressiveness as CCNNs. \n3. A new toolkit, TopoTune, has been developed to make it easy to design and implement GCCNs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on the topological deep learning (TDL) models in particular CCNNs and proposes a new powerful graph-based methodology for new TDL architectures, named GCCNs. The paper proves that GCCNs generalize and subsume CCNNs. The paper conducts extensive experiments and shows that the GCCN architectures achieve comparable performance with CCNNs. An efficient toolkit, TopoTune, is also introduced to accelerate the development of TDL models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
For node-level tasks, the paper only considers three very small datasets, which might limit the application of the method. \n2. The complexity analysis of the method is missing and the paper does not report any training time in the experiment. \n3. The experiment of \"performance versus size\" is not well analyzed especially for the graph-level datasets (i.e., PROTEINS, ZINC)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. In Figure 5, it did not become entirely clear to me why the parameter size is reduced by changing the neighborhoods. I would expect that the total number of parameters of the GNN modules are independent of the specific types of neighborhood used. However, as shown in Figure 5 this does not appear to be the case. Can you elaborate on what exactly you mean by parameter size and how it relates the the choice of neighborhoods?\n2. It is not clear to me how exactly the GCCN models are parameterized in the different experiments. In particular, which intra- and inter-neighborhood aggregators were used for the different experiments?\n3. In the conclusion, you state that you hope that TopoTune might help \"bridge the gap with other machine learning fields\". Apart from the connection GNNs (and possibly Transformer models), are there any specific fields you envision that might profit from such a connection?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "First, the proposed GCCN architecture (while fairly straight-forward) provides a useful framework for describing a large variety of TDL methods and it enlarges the design space for such methods.\nThe experiments illustrate how this simplifies the optimization of TDL models and improving upon the state-of-the-art.\nAdditionally, the authors show that GCCN can match or even outperform previously proposed approaches while requiring fewer parameters to do so.\n\nSecond, the provided TopoTune implementation of GCCN integrates with existing GNN and TDL libraries.\nThis simplifies the exploration of novel TDL architectures and, as stated by the authors, could help accelerate research on TDL.\nHowever, since I am not deeply familiar with the current literature on TDL and open problems, I can not confidently assess the relevance of this contribution.\n\nLast, I want to highlight the presentation. The paper is well structured and written. The figures are of high quality and helpful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a general topological deep learning (TDL) architecture called Generalized Combinatorial Complex Network (GCCN). It aims to unify prior work on TDL under a common mathematical framework.\nAdditionally, the authors provide the TopoTune library, a reusable software implementation of the proposed GCCN method.\nThe experiments show that the flexibility of the GCCN framework allows it to match or outperform previously proposed TDL methods while, oftentimes, requiring fewer model parameters to do so." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In Section 4 the authors show a number of theoretical properties of their proposed GCCN framework.\nWhile certainly desirable, the value of those properties is limited. \nAs stated by the authors themselves in the proofs in the supplement, those properties are, for the most part, fairly straight-forward.\nAs far as I can tell, the GCCN framework is an intuitive generalization of prior work which only provides relatively small theoretical insights.\nThe overall value of the contribution therefore seems to depend on the relevance of the previously described strengths of the paper, in particular, on the relevance of the provided TopoTune implementation.\nHowever, as mentioned, I cannot fully assess this aspect.\nThus, one potential general concern might be the overall relevance of the paper.\n\nApart from this point I have only minor suggestions for improvement:\n1. I would have found a (brief) explanation of the evaluated types of combinatorial complexes (cellular vs simplicial) to be helpful.\n2. There seem to be two small errors in the formal definitions in Section 2:\n\t- p. 3 (127): At $\\mathcal{P}(S) \\setminus \\{\\emptyset\\}$ it should probably read $\\mathcal{V}$ instead of $S$.\n\t- p. 3, eq. 2 (146): $\\mathrm{rk}(\\tau)$ after $\\exists\\ \\delta$ should be probably $\\mathrm{rk}(\\delta)$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In the introduction you say “However, constrained by the pairwise nature of graphs, GNNs are limited in their ability to capture and model higher-order interactions […]”. I would expect that higher-order GNNs (https://arxiv.org/abs/1905.11136, https://arxiv.org/abs/1810.02244, https://arxiv.org/abs/1905.11136) are able to capture higher-order interactions. Could you elaborate on how TDL differs from higher-order GNNs?\n- Related to the first question, in L88-L93 you mention the work of Jogl et al. (https://openreview.net/forum?id=HKUxAE-J6lq) on Cell Encodings, which is equivalent to using the standard Weisfeiler-Leman test on a transformed graph, but your argument for the shortcomings of this approach is not clear to me. In particular, you state that “However, although these architectures over the resulting graph-expanded representations are as expressive as their TDL counterparts […] the former are neither formally equivalent to nor a generalization of the latter”. What is “the former”? What is “the latter”? Assuming the former are Cell Encodings and the latter topological GNNs, why is it important that they are formally equivalent or one being a generalization of the other? Are they different in their runtime or memory requirements? Do we expect better learning behavior from TDL methods?\n- As outlined in the weaknesses, in Table 1, only on two datasets GCCNs outperform the best CCNN from TopoBenchmarkX. 
Can you further elaborate on the benefits of TopoTune in this context?\n- Related to the third question, can the authors provide an overview over the runtime and memory complexity of the compared CCNNs, as well as GCCNs, possibly in relation to the complexity the underlying GNN submodules?\n- Am I correctly assuming that the ZINC dataset used in this work is the full ZINC dataset with 250K graphs, rather than the ZINC (12K) version frequently benchmarked in graph learning?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The main strength of this work is that the authors are able to subsume TDL architectures under a single framework.\n- The empirical results indicate to me that the framework matches existing works, thus validating the claim that the framework is indeed general.\n- The framework allows the use of GNNs, which should bring the two fields closer together and have TDL research benefit from progress in GNNs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this work, the authors propose a generalization of Combinatorial Complex Neural Networks (CCNNs) called GCCNs and an accompanying software library called TopoTune, to generalize works on CCNNs into one computational framework and streamline the training and tuning of TDL architectures. Both theoretical and empirical results indicate that the proposed framework is indeed a useful generalization of previous efforts in TDL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- L458: The authors state that “GCCNs outperform CCNNs”. Out of the 8 presented datasets, I can only find two instances (NCI1, ZINC) where GCCNs actually perform better than the best CCNN baseline (accounting for one standard deviation). 
I could be convinced that the benefit of TopoTune is that one must only sweep over the GNN sub-modules to obtain an (at least) on-par model. However, this would still require some effort to find the best sub-module; see question 3 for more on this.\n- In L468 and Figure 5, the authors discuss performance vs. number of parameters. However, I do not find this comparison convincing as a smaller number of parameters may not necessarily be more cost-efficient. Instead, I would like to see a comparison in terms of runtime and memory usage of the different models.\n- Since the authors argue their approach to be superior to works on higher-order GNNs, a comparison of GCCNs and higher-order GNNs would be very useful. For example, PPGN++ (https://arxiv.org/abs/1905.11136), a higher-order GNN, performs much more on par with the best GCCN on ZINC than most CCNN baselines presented in the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Can the authors expand on whether CCNNs can capture GCCNs? Are there functions expressed by GCCNs that cannot be expressed by CCNNs? 
If the two classes are equivalent, can the authors discuss more in detail what is the effective advantage of considering their proposed GCCN class?\n- Can the authors better explain what were the research questions addressed in their experimental section and how their results contribute to answer them?\n- Can the authors better discuss how TopoTune goes beyond merely a hyper-parameter search tool?\n\nPlease also see weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The submission tackles an interesting research topic in a timely manner.\n- The implemented TopoTune module can be helpful to practitioners and researchers outside of the specific field of TDL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors tackle the challenge of systematically defining new Topological Deep Learning (TDL) architectures and to enlarge the accessibility of the latter to the broader community. The way they approach this endeavour is by (i) proposing a new class of TDL architectures that generalises previously proposed ones, and by (ii) implementing a software module that encapsulates architectural search over this class.\n\nAs for (i), the authors build upon the concepts of “strictly augmented Hasse Graphs” and “Per-rank neighborhoods”. The former ones are employed to model the structure of a combinatorial complex via an ensemble of augmented Hasse graphs, one for each neighbourhood. The latter ones prescribe defining a specific set of neighbourhoods for each rank. 
The authors propose GCCN as architectures which process ensembles of strictly augmented Hasse graphs with per-rank neighbourhoods with specific neural models and “synchronisation” components.\n\nAs for (ii), the module is called TopoTune and is a configuration-oriented component integrated with other TDL frameworks.\n\nExperiments are conducted on graph datasets, lifted to either simplicial or cellular complexes. Results show that GCCNs can outperform standard architectures with a smaller number of parameters or lower computational cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- From the perspective of the framework generality, it is not clear how GCCNs would unlock new interesting operations or computational patterns.\n - Eq. 3 and 8 look particularly alike, and it is not evident what kind of advantage the latter brings. In particular, in Eq. 3, the message function $\\psi$ can be specific to a particular neighbourhood (and rank), similarly to the neighbourhood message function $\\omega$ in Eq. 8 — which, incidentally, is not rank specific.\n - Specific information about ranks and neighbourhoods could be specified by features akin to ”marks” over nodes and edges of an augmented Hasse graph, and a general enough neural architecture could then make use of these for neighbourhood and rank specific updates.\n- Proposition 3 appears to be quite trivial given Proposition 1. What is it telling us in addition to that?\n- It is not clear how the proposed contributions would help “democratising” TDL, as the authors claim. The proposed approach appears to significantly enlarge the hyper-parameter space by considering a plethora of possible architectural designs arising from the combination of neighbourhood and rank specific neural modules. 
Although TopoTune lowers the practical effort of searching over these spaces, these large parameter searches may still require large computational capabilities to be satisfactorily performed in a reasonable time frame.\n- The value and/or interest of some experimental questions and emerging results is not clear.\n - “GCCNs outperform CCNNs”: It is not clear what the outperformance is due to when comparing to “standard” CCNNs, which could have, potentially, neighbourhood and rank-specific message functions. What is the take-home message for readers?\n - “GCCNs are smaller than CCNNs”: the authors do not explain why this is the case, and it is seemingly the first time this concept emerges in the manuscript\n - “GCCNs improve over existing CCNNs”: the results seem to be merely a matter of additional hyper-parameter search?\n - “Performance-cost tradeoff”: The authors highlight the reduced number of parameters of GCCN models, but they do not expand into how this actually translates into lower computational cost (e.g. because run-time experiments are not discussed in this section).\n- Generally speaking, the manuscript would benefit from a clearer and more punctual presentation in regards to the motivations behind the proposed contribution and how these precisely address the research questions put forward by the authors." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "TopoTune generalizes any architecture (Graph Neural Network, Transformer, etc.) into a Topological Neural Network that can process higher order structures on relational data." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024topotune,\ntitle={TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2MqyCIxLSi},\nnote={under review}\n}" }, "abstract": { "value": "Graph Neural Networks (GNNs) excel in extracting knowledge from relational datasets, processing node and edge features in a way that preserves the symmetries of the graph domain. However, many complex systems ---such as molecular analysis, or social networks--- involve $n$-body interactions that are more naturally represented by higher-order entities like faces and volumes. Topological Deep Learning (TDL) models, and in particular Combinatorial Complex Neural Networks (CCNNs), accommodate these higher-order structures, thus benefiting from enhanced expressive power over GNNs. However, this emerging field lacks a principled strategy for defining new TDL architectures, restricting its accessibility and applicability. To address this, we introduce a simple yet powerful graph-based methodology capable of systematically transforming any neural network into a novel TDL architecture, which we call \\textit{Generalized CCNN} (GCCN). We prove GCCNs generalize and subsume CCNNs, while extensive experiments on a diverse class of GCCNs show that these architectures consistently match or outperform them, often with less model complexity. In an effort to accelerate and democratize TDL, we introduce TopoTune, a lightweight software allowing practitioners to define, build, and train these most general TDL models with unprecedented flexibility and ease." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Topological Deep Learning", "Graph Neural Network", "Graph Expansion", "Combinatorial Complex", "Cellular Complex" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3ed8d695bf2f9046adb4760771c8ab05c9c83eb5.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/7b5981c7d83a47b2c6eea7ea9f5d4e2d80afe5a4.pdf" }, "title": { "value": "TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2NqrA1wYi6
Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation
main
Active
memory-based RL;memory;pomdp
reinforcement learning
3;5;5
4;3;4
2;3;3
2;2;3
2;2;3
4.333333
3.666667
2.666667
2.333333
2.333333
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In this paper, is memory meant to be a problem or a solution?\n\n2. Is there some kind of Marr-Poggio levels of analysis story that could be used to clarify the overall structure of the argument here?\n\n3. Doesn't this taxonomy seem to bundle together too many things that really are not similar? Replay buffers used just for training and dynamically accessed external memories, used at test time, are quite different algorithmically, and used in quite different ways. Why does this classification scheme seem to drop these in the same bucket? (There is text in the appendix suggesting this). Also, I don't see why it's even true according to the definition in the main text. I would think that definition would separate these approaches. And that would be kind of the whole point of separating declarative and procedural memory. Why doesn't it separate them in the appendix anymore?\n\n4. The paper includes the following sentence in the results section “This ambiguity arose because the first experiment did not follow our proposed methodology”, well this certainly doesn’t inspire any confidence. Why talk about an experiment that doesn’t fit the proposed methodology? It's likely I've misunderstood this paragraph. I find it very hard to follow this part.\n\n5. How would you think about an RL model that remembers and reproduces a time interval? E.g. 
[Deverett, B., et al. (2019). Interval timing in deep reinforcement learning agents. NeurIPS.] That paper showed that purely feedforward agents can sometimes solve what appear to be memory tasks. Does that matter? How would your definitions classify the feedforward agent in that paper?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I'm generally favorably disposed toward conceptual papers like this. And I do agree with the authors that their target, memory in RL, is a worthy target for such an effort.\n\nThe main distinction is between declarative and procedural first, and then short-term versus long-term second. The latter is defined with respect to a context length parameter. I do think it's a good idea to highlight somehow the difference between associations within the context window and outside of it. This is a very relevant difference with many algorithmic implications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This is a sort of conceptual paper; its main concern is to taxonomize the concept of memory in reinforcement learning. Given the taxonomy, it aims to demonstrate why paying attention to the categories it suggests is important for interpreting the results of experiments on RL agents involving memory." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This is trying to be a conceptual paper aimed squarely in the intersection between AI and cognitive / neuro science. However, judged in that way, I don’t think it really makes the grade. The problem is that it doesn’t really connect clearly into the conversation on the cognitive / neuro science side. There are very few references to these disciplines for one, or less than I would expect anyway. 
And critical references for multiple memory systems are missing (there are so many, but I like some of Squire's old papers on the topic). And there is basically no context in the paper connecting the work to the ways that researchers in these other fields have thought about memory. For a paper like this which purports to offer a formalization of what is meant by ‘memory’, it's clearly important to relate the new formalization to old ones and discuss how they are similar and different, and to try to sustain an argument for why the present one is an advance on the old.\n\n- I’m not buying the claim that RL is capturing declarative memory. I would tend to say that a defining feature of declarative memory as opposed to procedural memory is the declarative memory’s arbitrariness. The classic prototype example of a declarative memory is a person’s name. And most definitions you find look something like \"declarative memory is defined as the type of memory that involves consciously recalling facts and events\". It’s all very language-like. But RL memories aren’t usually like that. In some cases they may be, but it’s not too common for memory in RL outside of language data. I would probably have been more forgiving of this claim to capture declarative memory with RL had it come a few years ago, but now that we have LLMs, what’s the point in trying to get all bent out of shape to capture declarative memory in RL? The prototype of declarative memory is arbitrary information conveyed by language e.g. “My teacher’s name is Bob” (episodic flavor) or “Paris is the capital of France” (semantic flavor). So it seems very reasonable to expect a model of declarative memory to use the kinds of AI systems that work for that kind of data, now that they exist and are so widespread. And of course there are plenty of ways to combine RL and LLMs. I understand though that that would take this too far afield for the present work. And this isn’t really a computational neuroscience modeling paper. 
So this isn’t really a weakness of the paper here. No need to reply to this bullet point since I don’t think it really matters for your paper. But I’m just leaving it here as a way to convey a bit more my mindset with regard to this paper. At the very least, the arbitrariness of the associations seems like a critical part of declarative memory.\n\n- Right after positing a difference between declarative and procedural memory in terms of the algorithms that implement them, in the very next paragraph it then acts like this distinction is established and ready to support further claims when it says “many studies fail to differentiate between agents with declarative and procedural memory”. But that’s not a strong argument given that it just followed right after defining these terms. Why should other papers have tried to probe things according to the arbitrary categories you just defined? Especially since I suspect many researchers would not necessarily agree with the classification. At any rate, the paper merely asserts that one set of tasks are declarative and another are procedural, but it offers no evidence that this distinction corresponds to what others mean by those terms.\n\n- Definition 3 says one should call RL problems involving a “single environment” problems of declarative memory and RL problems featuring multiple environments problems of procedural memory. This definition would be impossible to apply in practice. It would appear to suggest that all memory is declarative since one can always compose “multiple environments” together into a single meta-environment. The difference between one environment or many environments is not in the task itself, it’s just a purely formal aspect of the modeling language. One generally is not supposed to predicate a general definition on such a purely formal property since it would make your classifications float around following specific and contingent task parameterization properties. 
Note: all the same comments also apply to the episode concept." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "This is where double blind reviewing sucks, as I would strongly recommend that the authors produce the review paper that, with additional contributions, this paper could be, and I would happily serve as a reviewer for said paper." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I really like the paper and the topic it presents. I think it is good to have a clearer definition of what exactly is meant by memory and what its contribution can be in reinforcement learning. I like the approach the authors came up with and the clarity with which they presented it. I think there is a need for a paper like this, and I like how the authors looked at the current state of affairs in reinforcement learning research and its treatment of a new and important branch in the field that deals with memory in a bunch of different contexts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper attempts to create clarity in the use of the term \"memory\" in a reinforcement learning context. 
As well as suggesting definitions for different kinds of memory and different memory-related tasks, the authors present a more rigorous way for testing memory capabilities of reinforcement learning techniques and show possible pitfalls of violating the proposed methodology." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, the topic seems difficult to deal with in a conference paper. When reading the introduction and the goal of the paper as set out, I was expecting a broader overview of current use of the memory term and the different ways it is used and abused in reinforcement learning literature and research. I think the topic is very interesting, but a paper doing a deep dive into a topic such as this has to build a clear foundation for its contributions by taking the body of existing work into account (To be clear, I am not suggesting that the authors don't do this.) and illustrating this by giving a broad overview of said existing work in the paper. \n\nIn my view, this contribution wants to be presented in a review paper, with an overview of recent existing work laying a strong foundation for the contributions made by the authors, namely, bringing clarity to the current mismatch in use of the term \"memory\" in the field. \n\nCurrently, the paper includes a very brief section on POMDPs, which are important, but don't represent all ways in which the term memory is used. However, since this is section 2, I think this is a bit misleading, as it seems to set the context in full. The related works section is very brief, and much related work is relegated to the appendix, where most of it is only referenced, but not placed in context of the suggested structure and definitions. Section 4 lays some foundation from cognitive science and RL, and talks about the credit assignment problem in relation to memory handling, but it feels rushed and the role or importance it plays isn't obvious. 
All of this should be given more room to be elaborated on. I understand that this is impossible in a conference paper, but now it feels rushed, and I don't feel the work will get the attention it deserves or reach the audience that it should." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Could the framework be extended to evaluate procedural memory in Meta-RL settings? Are there specific experiments that could be added to address skill transfer across tasks?\n* In my interpretation, declarative and procedural memory are intended as distinct concepts; however, the definitions in Equation 2 of Definition 3 imply that declarative memory could be included within procedural memory because the “or” condition and “≥” allow for overlap. Could the authors clarify whether declarative memory is meant to be a subset of procedural memory or fully distinct? How does this impact the proposed distinction between Memory DM and Meta-RL in the framework?\n* How does this framework compare to existing memory evaluation approaches in RL? What are the specific advantages of using cognitive science-inspired definitions over more traditional RL memory metrics?\n* What motivates the specific classification of memory types (declarative vs. procedural, STM vs. LTM), and how does it improve memory assessment in RL over a general approach?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper provides neuroscience-based definitions of memory types, clarifying RL memory research, which enables more accurate agent comparisons and tailored evaluation methods for each type​. The cognitive science-inspired approach has interdisciplinary appeal, likely to attract interest from both RL and cognitive science researchers, fostering potential collaboration and cross-disciplinary insights.\n* The paper’s methodology is grounded in theoretical rigour, offering a scientifically robust framework that enhances the validity and reliability of memory evaluation in RL studies.\n* It introduces a standardised methodology for assessing memory capabilities, promoting reproducibility and consistency across RL studies by providing clear criteria for experimental setups." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces an approach inspired by human cognitive abilities to formalise memory types in reinforcement learning (RL), providing precise definitions for short-term memory (STM) and long-term memory (LTM). STM is defined as the agent's reliance on recent interactions, while LTM involves recalling information over longer time intervals outside of the immediate context. The authors differentiate between Meta-Reinforcement Learning (Meta-RL), which focuses on cross-task skill transfer (procedural memory), and Memory Decision-Making (Memory DM), where agents use historical data within a single environment (declarative memory).\n\nIn the Memory DM setting, the authors develop a rigorous evaluation methodology to assess memory capabilities in RL agents. 
This approach is validated in memory-intensive environments, such as the Passive T-Maze and Minigrid-Memory, by varying critical parameters—context length (the memory span an agent can handle) and correlation horizon (the temporal dependency between events). The experiments demonstrate the Memory DM framework’s ability to reliably assess STM and LTM in RL agents." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The framework has been validated in simple environments, which may not capture the challenges of more sophisticated settings or real-world scenarios, potentially limiting its practical applicability.\n* The paper discusses procedural memory as part of its classification scheme but does not provide or suggest an evaluation methodology related to it, focusing solely on declarative memory. This results in an incomplete validation and leaves open questions about the classification’s practical application to skill-transfer scenarios.\n* The methodology section is dense and complex; additional visual aids or examples could clarify the experimental design and enhance comprehension for a broader audience." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A formal description of the memory types of RL agents and a methodology for conducting an experiment to test the memory." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unraveling,\ntitle={Unraveling the Complexity of Memory in {RL} Agents: an Approach for Classification and Evaluation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2NqrA1wYi6},\nnote={under review}\n}" }, "abstract": { "value": "The incorporation of memory into agents is essential for numerous tasks within the domain of Reinforcement Learning (RL). 
In particular, memory is paramount for tasks that require the utilization of past information, adaptation to novel environments, and improved sampling efficiency. However, the term ``memory'' encompasses a wide range of concepts, which, coupled with the lack of a unified methodology for validating an agent's memory, leads to erroneous judgments about agents' memory capabilities and prevents objective comparison with other memory-enhanced agents. This paper aims to streamline the concept of memory by providing precise definitions of agent memory types, such as long-term versus short-term memory and declarative versus procedural memory, inspired by cognitive science. \nUsing these definitions, we categorize different classes of agent memory, propose a robust experimental methodology for evaluating the memory capabilities of RL agents, and standardize evaluations. Furthermore, we empirically demonstrate the importance of adhering to the proposed methodology when evaluating different types of agent memory by conducting experiments with different RL agents and showing what its violation leads to." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "memory-based RL", "memory", "pomdp" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1be77c98b5bae06eb089009aa121e12c6af0aed8.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/e538ac6900e85ad22a14cab0f05bc0bf988877d8.zip" }, "title": { "value": "Unraveling the Complexity of Memory in RL Agents: an Approach for Classification and Evaluation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2NqssmiXLu
Automated Proof Generation for Rust Code via Self-Evolution
main
Active
Large Language Models;Program Verification
applications to computer vision, audio, language, and other modalities
5;6;6;8
4;4;4;4
2;3;3;4
4;3;3;4
2;3;2;4
6.25
4
3
3.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please address the points raised in the “Weakness” section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper studies automating proof generation in formal program verification with LLMs, an important direction with great potential for practical applications. The focus is on Rust, a relatively new language that is gaining widespread adoption. Although synthetic data generation for fine-tuning LLMs is not a completely novel idea, the paper introduces a few interesting techniques for the domain of proof generation for Rust. I particularly like the metric for filtering high-quality specifications. The evaluation is thorough, demonstrating the benefits of SAFE over baselines and the effectiveness of its individual components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes SAFE, a data generation and fine-tuning procedure for improving LLMs in generating proofs for the correctness of Rust code. SAFE consists of three stages: (i) verus-compatible code generation, (ii) self-evolving specification synthesis, and (iii) self-evolving proof synthesis. 
During stage (ii), SAFE leverages a symbolic and quantitative measure based on the correctness and completeness of the specification. For stage (iii), SAFE fine-tunes both proof generation and repair models. The experiments demonstrate the advantages of SAFE: it significantly improves performance compared to both the base model and GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper only focuses on small programs in the style of MBPP and CodeNet. Although I understand this is partly due to the limitation of the Verus tool, I do believe that the paper should present some case studies or discussion on how to scale the approach to real-world software projects.\n\nApart from proof generation, a major part of formal verification is writing the specifications. The paper covers mechanisms to fine-tune “good” specification generation. It would strengthen the paper if more evaluation could be done on the specification generation task and how it can be combined with proof generation to automate end-to-end verification.\n\nThe paper lacks a study on the choice of the correctness and completeness thresholds for the specification metric.\n\nThe paper writing can be improved. Below are some issues I found or some recommendations:\n- The text in Section 3 is sometimes ad-hoc and contains low-level details (e.g., choice of parameters). It would be helpful to revise the text to be more formal and move the details to later sections.\n- Line 289: The paper says “much previous work relies on running many test cases” without providing any references.\n- Line 519: Table 2 should be Table 3.\n- Table 3: The split of model names across multiple lines is confusing. I thought one line of text corresponds to one single baseline. The $\Delta$ rows look redundant as well." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. A clarifying question about the self-evolving data: The data collected through GPT-4o (round 0) is used to fine-tune the first specification/proof generation model. What's the data input used to let the generation model generate data for the next round? \n* Are these data the same programs as those used in generating the round 0 data? If this is the case, would the training data in each round be kind of repetitive and lack diversity?\n* Or do the authors use some strategies to leave some unique programs for each round, so that the fine-tuning data for each round contains different programs?\n2. Self-debugging is quite effective and improves the accuracy; how does the model obtain the ability of self-debugging? Does the fine-tuning procedure contain self-debugging training data?\n3. Why are the baseline models prompted with 4 examples instead of more examples?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes a novel approach that uses self-evolving to iteratively improve an LLM's ability to generate Rust specifications and proofs. 
The fact that this approach does not rely on larger LLMs such as GPT-4o in the following iterations (except the first round) makes it more generalizable and scalable.\n2. The proposed approach shows great effectiveness: with three rounds of self-evolving, the fine-tuned LLM shows about 40% higher accuracy@1 compared to the prompting approach.\n3. Comprehensive analysis and experiments, showing that each of rounds 1, 2 and 3 brings some improvement to the fine-tuned LLM (although the round 2 model is better than the round 3 model under some settings), and showing that high-quality specifications are important to improve the model's accuracy during self-evolving." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces SAFE, an innovative framework designed to address the challenges of automated proof generation for Rust code. SAFE overcomes the significant data scarcity issue (i.e., there is far less proof data than code data for training language models) by using a self-evolving approach to synthesize specification/proof data and fine-tune models iteratively. SAFE operates through a feedback loop where synthesized proofs are continuously verified by a symbolic verifier, distinguishing correct from incorrect proofs. Incorrect proofs are used to train the model's self-debugging ability, while the correct proofs are used to improve the model for the next round. The design of the approach is smart and builds on the insights that (1) a quantitative metric can be used to select high-quality specifications for fine-tuning; (2) reasonably good, rather than perfect, specifications suffice for fine-tuning in the next step; and (3) Verus can quickly tell correct proofs from incorrect ones, which enables the collection and filtering of a large amount of data.\n\nSAFE achieves a substantial improvement, attaining a 70.50% accuracy on a benchmark set crafted by human experts, a notable advancement over GPT-4o's performance of 24.46%. 
SAFE also obtains self-debugging ability using the incorrect proofs collected during the data collection step. Experiments show that each round of self-evolving improves the accuracy of SAFE, and prove the importance of using high-quality training data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The self-debugging ability is shown to be effective only the first time; what could be a potential approach for improving the self-debugging ability in the following rounds?\n2. I am wondering if this self-evolving approach can improve smaller LLMs' ability. For instance, if the backbone is DeepSeekCoder-1.3B, how effective is the self-evolving approach?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please provide a short statement or clarification to the points raised above." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is well-written and nicely structured. The figures and tables are well formatted and legible.\n- The story is engaging and the tackled topic highly relevant.\n- The results are clearly presented and provide interesting insights." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The presented paper proposes a method to bootstrap training data for generating proofs of Rust code using LLMs. The pipeline starts from a small set of curated programs and gradually evolves using the verifier as signal for dataset curation. Finally they evaluate the resulting fine-tuned LLM and show state of the art results on a difficult dataset of low-resource correctness proofs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In Table 1 the Accuracy of GPT-4o on VerusBench @100 is unfortunately missing (likely due to high inference cost?). Similarly the result of DeepSeekCoder RAW @100 is missing. If the authors could provide these values, the tables would provide a much more complete picture.\n- In Table 2, Round 3 appears to severely degrade performance of the resulting model on the Tutorial dataset. Does this constitute some first signs of overfitting or collapse or could the authors provide some more insight on what is happening here? It might be interesting to provide some basis on deciding where to stop the iterative process.\n- There is no discussion of Limitations. While the provided method is clearly powerful some discussion on potential limitations would be highly appreciated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In table 2 you report a \"total\" column; I noticed the numbers don't add up if you just take the mean of the other columns, so I presume what you're actually doing is taking the mean over all of the samples in the entire dataset? (I think that's what you want to do, I just want to make sure I understood correctly).\n- When constructing training data for the repair task, are you doing any filtering to make sure that the \"incorrect program\" is actually similar to the \"correct program\", or could they be completely different?\n- What is the difference between SAFE and expert iteration, other than your synthetic data generation for the specifications?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "I think the core ideas in this paper are very interesting, and that there are several good contributions here:\n- Filtering the specifications based on a symbolic, deterministic score computed from the tests seems like the right thing to do, and I appreciate the brief ablation study of the impact of this step (section 4.2.4).\n- The experiment in 466-474 provide further evidence for previous findings in the code generation literature about \"sampling+greedy\" self-debugging outperforming \"greedy+sampling\" (I recommend that the authors consider explicitly comparing these results to e.g. 
[1, 2]).\n- Perhaps most importantly, verifying Rust code is not only potentially impactful but also (as far as I know) a completely novel task; kudos to the authors for going through the effort to collect all the data.\n\n\n[1] \nTeaching Large Language Models to Self-Debug\nXinyun Chen, Maxwell Lin, Nathanael Schärli, Denny Zhou.\nInternational Conference on Learning Representations (ICLR), 2024.\n\n[2] \nIs Self-Repair a Silver Bullet for Code Generation?\nTheo X. Olausson, Jeevana P. Inala, Chenglong Wang, Jianfeng Gao, Armando Solar-Lezama.\nInternational Conference on Learning Representations (ICLR), 2024." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper seeks to finetune a code-generating LLM to generate verification annotations for code.\nSpecifically, the authors target Verus, which is an SMT-backed automated-theorem-proving-style verifier for Rust code.\nThe key technical challenge that the authors thus need to overcome is that this is a very low-resource language,\nso simple techniques such as finetuning aren't directly applicable; and even API-backed models do so poorly\non this task that naively distilling wouldn't help, either.\nInstead, the authors basically bootstrap the finetuning process as follows:\n- First, they generate a set of proof *specifications* for some Rust programs using GPT-4o. They then filter out specs which are \"low quality\", e.g. 
those which are always true.\n- Then, they use these specifications to generate (again using GPT-4, with some expert-crafted task-specific prompts) proof *annotations* for a small subset of these specifications.\n- Finally, they bootstrap a finetuning process from these initial annotations, training in each round the open-source model on the correct proofs it generated in the last round.\nThere are some additional bells and whistles, such as also training on incorrect proofs by framing it as an auxiliary repair task, but I believe this summarizes the core idea.\n\nIn terms of the experiments, the authors share results both for a small, human-written benchmark and for GPT-transpiled versions of MBPP and SV.\nAt first glance I was a bit worried about the scale of this data, but given the novelty of the task and the relative lack of Rust datasets in the literature I actually commend the authors on their effort to collect as much data to evaluate on as possible." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are a few rather major flaws that make me hesitant to recommend the paper for acceptance in its current form.\n\nOne is that I find that the paper tries a little bit too hard to sell the novelty of this \"SAFE\" approach.\nBootstrapping finetuning of LLMs by interleaving it with search has been done before; most people call it \"expert iteration\" [3] (authors: please correct me if you think there is a *significant* difference between your method and this).\nEspecially what you call \"SAFE_0\" feels a bit rich: unless I am mistaken, you are literally just doing synthetic data generation with GPT-4o and filtering the results based on some measure of quality.\nAlso, on line 359 you say \"21,398 [programs] have been successfully transformed [...] 
by SAFE\"; unless I'm mistaken you mean \"by GPT-4\" here, because at this point you haven't done anything other than asking GPT to first transpile the code to Rust and to then transpile the Rust code into the subset of the language that is supported by Verus.\n\nI would encourage the authors to tone down the language a bit more and focus on the actually novel parts of this paper, which I believe to be: the task target; finetuning on repair tasks to improve generation performance; and the metric used to filter the specification samples.\n\nA more important issue is that I do not think the comparison to the baselines is fair in its current form for the \"SAFE+\" method.\nThe authors themselves point out that in this variation (i.e., when you do a round of self-debugging if the initial generation does not succeed), they generate `k * k` repair samples - how can you then compare against pass@1? You have actually drawn `k + k*k` samples from the model, so you should at least compare against a baseline of `pass@(k + k*k)`.\nThis is an issue that has come up again and again the self-debugging/refinement literature, and I once again encourage the authors to engage with that literature.\nYou still have good results here - for example, the SAFE+ pass@10 is substantially higher than the SAFE pass@100 on VerusBench - but the way you're currently presenting them overstates their significance.\n\nFinally, the writing could use some more proof reading, especially the abstract and the introduction (but this is a minor complaint).\n\n\n[3] \n@misc{polu2022formalmathematicsstatementcurriculum,\n title={Formal Mathematics Statement Curriculum Learning}, \n author={Stanislas Polu and Jesse Michael Han and Kunhao Zheng and Mantas Baksys and Igor Babuschkin and Ilya Sutskever},\n year={2022},\n eprint={2202.01344},\n archivePrefix={arXiv},\n primaryClass={cs.LG},\n url={https://arxiv.org/abs/2202.01344}, \n}" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": 
"@inproceedings{\nanonymous2024automated,\ntitle={Automated Proof Generation for Rust Code via Self-Evolution},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2NqssmiXLu},\nnote={under review}\n}" }, "abstract": { "value": "Ensuring correctness is crucial for code generation. Formal verification offers a definitive assurance of correctness, but demands substantial human effort in proof construction and hence raises a pressing need for automation. The primary obstacle lies in the severe lack of data — there is much less proof than code for LLMs to train upon. In this paper, we introduce SAFE, a novel framework that\novercomes the lack of human-written proof to enable automated proof generation of Rust code. SAFE establishes a self-evolving cycle where data synthesis and fine-tuning collaborate to enhance the model capability, leveraging the definitive power of a symbolic verifier in telling correct proof from incorrect ones. SAFE also re-purposes the large number of synthesized incorrect proofs to train the self-\ndebugging capability of the fine-tuned models, empowering them to fix incorrect proofs based on the verifier’s feedback. SAFE demonstrates superior efficiency and precision compared to GPT-4o. Through tens of thousands of synthesized\nproofs and the self-debugging mechanism, we improve the capability of open-source models, initially unacquainted with formal verification, to automatically write proof for Rust code. This advancement leads to a significant improvement in performance, achieving a 70.50% accuracy rate in a benchmark crafted by human experts, a significant leap over GPT-4o’s performance of 24.46%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Program Verification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3842ee20a21942369ac8aed6575799ee7b1b1bde.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Automated Proof Generation for Rust Code via Self-Evolution" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2OANNtX3T5
EXPLORING RESPONSE UNCERTAINTY IN MLLMS: AN EMPIRICAL EVALUATION UNDER MISLEADING SCENARIOS
main
Active
UNCERTAINTY;MLLMs;Misleading
alignment, fairness, safety, privacy, and societal considerations
3;5;5;5
4;4;3;3
2;2;1;2
2;3;3;3
4;3;3;2
4.5
3.5
1.75
2.75
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why best-of-5 sampling, and why only for \"Implicit\"?\n\nCan you provide several random samples from the \"Implicit\" setting?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper tackles an interesting problem, and the idea of adding misleading evidence to a prompt is a nice way to test for robustness. I also thought it was interesting that consistency decreases." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a dataset of misleading instructions for multimodal language models. This is done in two ways: through a template (telling the model that the answer is \"X\", where X is wrong), and through a language model (for instance by adding evidence or reasoning that contradicts the true answer). It is shown that models have lower consistency on instructions that are successfully misleading, and that fine-tuning can improve this." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I'm not sure I buy the overall motivation -- of course if you tell the model the answer is wrong, it will flip some fraction of the time. But is this going to affect real users in any way? 
Arguably this is even an intended feature to avoid getting into fights with users.\n\nAs a result of this, I don't think the \"Explicit\" misleading prompts are really a meaningful benchmark. The \"Implicit\" ones are more interesting, but need more detailed treatment: for instance, at least give several examples and enough information to assess data quality. The evaluation of the \"Implicit\" setting is also strange -- using best-of-5 sampling (even though \"Explicit\" is best-of-1) which inflates the success rate.\n\nA separate issue is that, throughout, there is not enough information to understand the data or experimental setup in detail. The paper says that they \"fine-tuned on separate data\", but there are not many details that would let a reader verify or reproduce the experiments. (This is also part of the problem with the \"Implicit\" setting -- not enough details to fully understand.)\n\nI think the authors are tackling an interesting problem, and have made a good start on it, but in my opinion the experiments and writing should be tightened up before it's ready to be accepted to ICLR." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness section." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- I think the effort of proposing novel and more efficient metrics to is very relevant and helpful. Previous metrics, such as self-consistency rate are widely used but in practice I found it to be very unreliable on big models.\n- The experimental evaluation is comprehensive, covering most of the commonly used closed and open-sourced models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies uncertainty measurement for responses from MLLMs. The main novelty is a novel uncertainty measurement based on how MLLMs' response shifts after injecting misleading instructions. Empirically, the paper developed Multimodal Uncertainty Benchmark (MUB), and systematically evaluates most major MLLMs’s uncertainty; The result suggests a dominant issue of uncertainty in MLLMs, with an average misleading rate exceeding 86%. The author experimented with fine-tuning MLLMs with targeted misleading data, which notably improves robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Soundness of measurement (Major): While I very much appreciate the effort on better and efficient measurements for uncertainty, currently I still have some doubts about whether adding misleading information can measure the uncertainty in model’s response. I’ll explain my concerns and perhaps the authors can clarify:\n - The measurement might be dependent to the misleading information themselves: the content, position, length, etc might all influence this metric. Moreover, since the implicit misinformation is generated by GPT4o, which is also evaluated on the benchmark, will it incur evaluator bias?\n - Implicit scenarios seem better defined; But for explicit scenarios (e.g. 
telling the model the true answer), the model behavior might be inherently undefined: i.e. shall the model follow the user’s “instruction” (e.g. “the true answer”), or answer the question and ignore the user instruction?\n- Task (Minor): The study is confined to multiple-choice questions. I am curious about how the definitions, measurements, and findings would generalize to open-ended questions. But I don’t think this is a major point, because most current VLM benchmarks are multiple-choice only." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- The misleading information was only added to the textual questions; why not consider altering the image to inject misleading information?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The attempt of addressing the response uncertainty in MLLMs is an interesting and important task. The proposed method seems largely sound to me in addressing at least parts of the problem. The paper is written in a structured and easy-to-understand manner -- quite straightforward. The MUB benchmark could be useful and the benchmarking results are generally informative. 
The effort of finetuning the MLLMs with misled data adds some more insights into how the problem could be mitigated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper dives into how MLLMs often fail to perform well when faced with misleading prompts. To tackle this, the authors set up a benchmark called the Multimodal Uncertainty Benchmark (MUB), which first gathers standard responses and then throws in misleading inputs to see how often the models get tripped up. They then fine-tuned open-source models with a mix of straightforward and subtle misleading data, cutting down the rate of being misled, while keeping the models’ overall accuracy intact." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main concerns are\n\n- This work only evaluates/tackles VLLM instead of MLLM as claimed multiple times in title and throughout paper, though I could maybe see the way to extend to other modalities.\n\n- Having the implicit misleading information generated by GPT-4o seems like a \"fighting fire with fire\" approach -- I think it is better to have at least a subset of implicit ones written by human annotators so that we can see whether there is any difference between the human-generated ones and GPT-4o generated ones.\n\n- During finetuning, a random set of explicit and implicit misled samples are used for finetuning, yet I am afraid the explicit misleading info has a too obvious and unique pattern due to how it's designed, hence too easy to pick them up, making the improvement after finetuning not too surprising.\n\n- Instead of finetuning, I would recommend the authors to simply systematically prompt the MLLMs, such as \"The questions might contain misleading information, you should try to answer the question correctly despite of those misleading information ...\"; another version could even give it two examples (one explicit and one implicit). 
I would guess/assume, simply doing this extra prompting will make the results much better.\n\n- The questions only include multi-choice and T/F styles, which certainly makes the metrics calculation easier (reflected in equation 1 and 2), yet probably losing the delicacy in the type of Q/A addressed?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to Weakness. The analysis of utility and calibration is important for such work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strengths of this paper include:\n\n- This work focuses on an important issue: the robustness of Multimodal Large Language Models (MLLMs) when faced with misleading instructions. This is a compelling research topic that addresses a gap in the current field. \n\n- The paper is well-structured, with a clear framework, and the authors present three research questions that are thoroughly examined through extensive experiments involving 12 models. \n\n- This work contributes to the community by introducing the Multimodal Uncertainty Benchmark and providing a fine-tuning dataset, demonstrating improved model robustness against misleading instructions." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the issue of response uncertainty in Multimodal Large Language Models (MLLMs), which can be problematic when models encounter misleading information. To tackle this, the authors propose a two-stage pipeline: first, gathering MLLM responses without misleading information, followed by collecting responses influenced by specific misleading instructions. They effectively evaluate model uncertainty by measuring the misleading rate and tracking shifts between correct and incorrect responses. They introduce the Multimodal Uncertainty Benchmark (MUB), which uses explicit and implicit misleading instructions to assess MLLM vulnerability across various domains. To improve robustness, they fine-tune open-source MLLMs using misleading data, substantially reducing misleading rates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of this work include:\n- The paper lacks a discussion on whether the models are calibrated. It does not address whether more consistent (more certain) model outputs correspond to more accurate answers. The results in the paper are primarily based on misleading rate (MR) and average consistency rate (ACR), without showing metrics like model accuracy. There is a lack of analysis on utility.\n- The authors fail to analyze the impact of instruction tuning on model usability. It is unclear how much the model’s performance on different tasks changes before and after fine-tuning. This lack of explanation limits the understanding of the benchmark’s functionality. Researchers are left uncertain whether fine-tuning causes models to generate consistent but incorrect responses.\n\nSuggestions:\nThe images in the paper are quite blurry, especially Figure 2. The authors should check the image quality. There are also some typos, such as mixed-use of InternVL-Chat-V1-5 and Internvl-chat-v1.5." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024exploring,\ntitle={{EXPLORING} {RESPONSE} {UNCERTAINTY} {IN} {MLLMS}: {AN} {EMPIRICAL} {EVALUATION} {UNDER} {MISLEADING} {SCENARIOS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2OANNtX3T5},\nnote={under review}\n}" }, "abstract": { "value": "Ensuring that Multimodal Large Language Models (MLLMs) maintain consistency in their responses is essential for developing trustworthy multimodal intelligence. However, existing benchmarks include many samples where all MLLMs exhibit high response uncertainty when encountering misleading information, requiring even 5-15 response attempts per sample to effectively assess uncertainty. Therefore, we propose a two-stage pipeline: first, we collect MLLMs’ responses without misleading information, and then gather misleading ones via specific misleading instructions. By calculating the misleading rate, and capturing both correct-to-incorrect and incorrect-to-correct shifts between the two sets of responses, we can effectively metric the model’s response uncertainty. Eventually, we establish a Multimodal Uncertainty Benchmark (MUB) that employs both explicit and implicit misleading instructions to comprehensively assess the vulnerability of MLLMs across diverse domains. Our experiments reveal that all open-source and close-source MLLMs are highly susceptible to misleading instructions, with an average misleading rate exceeding 86%. To enhance the robustness of MLLMs, we further fine-tune all open-source MLLMs by incorporating explicit and implicit misleading data, which demonstrates a significant reduction in misleading rates" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "UNCERTAINTY", "MLLMs", "Misleading" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/286c5462648a2ec5e13ee796e7a873e535238d88.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/43477468f1ef9e4c987955daf29f9b19b3fb60f9.zip" }, "title": { "value": "EXPLORING RESPONSE UNCERTAINTY IN MLLMS: AN EMPIRICAL EVALUATION UNDER MISLEADING SCENARIOS" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2OMyAFjiJJ
Flow matching achieves almost minimax optimal convergence
main
Active
flow matching;generative model;convergence rate;optimality
generative models
5;6;6;6
5;3;4;3
3;3;2;4
2;3;2;3
2;3;2;3
5.75
3.75
3
2.5
2.5
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do estimates change if you take a distribution other than the Gaussian distribution as the initial distribution $P_{[0]}$?\n\n- Can the obtained estimates be easily extended to the case of estimation error in the total variation (TV) distance?\n\n- Would your estimates change if you use different heuristics for Flow Matching, such as OT-minibatch?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper is the first to present estimates for the Flow Matching framework, showing that almost minimax optimal convergence rates are achieved under several assumptions.\n- The paper is well written" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides estimates for the 2-Wasserstein distance for the sample-based distribution obtained in the Flow-Matching framework relative to the exact distribution. 
These estimates depend on the number of samples used in training, the smoothness of the true distribution as an element of the Besov space, and the asymptotic growth of the conditional map at the initial time instant.\nThe paper considers the early stopping mode of the ODE, when the solution stops at time $T_0<1$, and estimates of $T_0$ are also given." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper contains many points in common with the paper cited therein [1]. In particular, using Besov space for target density, B-splines for its approximation, etc. Many estimates are based on those from [1]; see, for example, Appendix A.4--A.5 of the presented paper, where the citations of [1] are explicit. In paper [1] diffusion models are considered, but as shown in paper [2], the Flow Matching approach includes, under certain conditions, the diffusion models approach. Thus, generalization or obtaining similar results for Flow Matching is rather straightforward. Namely, in essence, the difference is to use the Alekseev-Gröbner Theorem (Lemma 16 about the error of a perturbed solution of an ODE) instead of Girsanov’s Theorem (Proposition D.1 of [1] for the error of a perturbed solution of an SDE).\nOne of the main differences is the presence in the estimates of the degree of growth of the parameter $\\sigma_t$ at 1, but the authors come to the well-known (empirical) conclusion that the optimal asymptotics is $\\sqrt t$. Does this provide the first theoretical justification for this empirically observed optimal scaling? How can one intuitively realize that the degree of $\\sigma_t$ growth near the time point $t=1$ is important if the ODE solution is considered on the interval $[0, T_0]$, where $T_0<1$?\n\n\n[1] Kazusato Oko, Shunta Akiyama, and Taiji Suzuki. Diffusion models are minimax optimal distribution estimators. volume 202, pages 26517–26582. 
PMLR, 4 2023\n\n[2] Yaron Lipman, Ricky T Q Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In The Eleventh International Conference on Learning Representations, 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) In Theorem 1, what is a good bound for R_0?\n2) Still in Theorem 1, why do you choose $\\sigma_{[\\tau]}$ in that form? In what applications does it appear that way?\n3) Can you test the sharpness of the bounds of Theorem 9 for some famous FM use cases? \n\nat line 310 \"in general we generally consider\" may be rephrased" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "It's interesting to know that FM has the same standard guarantees as DMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper applies the same framework as in Oko et al. (a paper on convergence rates of diffusion models as timesteps and/or sample size goes to infinity) to Flow Matching. Due to the application to a different model, some of the proofs are different, but the results are of the same strength." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Mathematically, the paper does not shine as to novelty, it mostly chains known estimates, and applies them to FM. \nLike in similar papers for other models, some of the setups look like toy models, this may be because the mathematical theory is unavailable in general." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My comments are minor (these are mostly typos I found, but this is far from exhaustive)\n\n1. Line 188 \"for probability *density* estimation\"?\n2. L193 (this might happen in many places): I believe grammatically it makes sense to say \"i.id. sample*s*\" instead of a single sample\n3. L222: *reverse* not revserve\n4. L254: \"respectivly\" is misspelled\n5. This is a question: is there a clean way to track the dependence on the diameter of the set of the support? The assumptions assume the support of the density is in the unit cube. What is the dependence if the radius was arbitrary? This might fall outside the scope of the paper, but I'm curious if the authors have the answer\n6. 
L516: \"diffrence\" is misspelled" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper provides (to my understanding) the first estimation rates for flow matching in the context of classical statistical estimation rates. This paper is similar in spirit to the FM paper of Lipman et al (2023), where they use different combinations of mean-variance parameters to define their path. While this work leverages many ideas from Oko et al. (2023), what is especially interesting is this idea that \"optimal parameter choices\" lead to minimax convergence rates, whereas other choices do not enjoy the same statistical rates." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper provides near-minimax convergence guarantees for the flow matching (FM) algorithm for $p$-Wasserstein distances. Distinct from diffusion models, FM use ordinary differential equations (ODEs) at inference time instead of stochastic differential equations (SDEs). Their estimator is based on time-partitioned estimators, similar to the analysis of Oko et al (2023). They adopt the estimator for more general parameters (specifically the mean and covariance parameters) to provide an estimator for flow matching." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper could be more clearly written, and the main text is very technical. This paper would benefit substantially from a small figure explaining the construction of the estimator at a high level, and also explaining the reason why the full minimax estimation is not possible. These are overall minor points, but I do believe the paper would benefit greatly from these modifications overall." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "L223: Kappa = 1/2 corresponds to diffusion-FM, correct? So the result actually says that that only diffusion-FM is optimal within the family of FMs you consider? If so, L230 'FM is as strong as diffusion models' seems somewhat misleading -- it seems more like diffusion is actually stronger than FM, except when FM is equivalent to diffusion. Please correct me if I'm wrong; otherwise might want to state this differently.\n\nL225: Can you elaborate on the ways in which your proof technique differs significantly from Oko's? (Since Oko's result is a special case of yours for diffusion-FM and 1-Wasserstein.)\n\nL133: Can anything be said in the more general non-Gaussian case of FMs?\n\nL177: Notation not super clear here. What is P_[1] vs p_[1]?\n\nTheorem 1: It seems like you are further restricting sigma to have the form (1-\\tau)^\\kappa?\n\nL222: Typo 'revserse'\n\nL224: Diffusion can be expressed as an ODE so I am not sure what you mean here?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Provides new theoretical results for a family of flow models, showing almost minimax optimality (with 'almost' depending on a specific parameter). 
The paper is rigorous, clearly written, and clearly places the results in the context of prior work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proves an almost minimax optimality result for a class of flow models. Previously, Oko had shown that diffusion models are minimax optimal under the 1-Wasserstein distance. This paper builds on Oko to show that a class of FMs with terminal Gaussian distribution and paths of the form x_t = \\sigma_t x_0 + m_t x_1 (which includes diffusion as a special case) are almost minimax optimal, with a parameter kappa determining the non-optimality. They show this under the p-Wasserstein distance for 1 <= p <= 2." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please see Questions." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We establish that FM can achieve an almost minimax optimal convergence rate in terms of 2-Wasserstein distance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024flow,\ntitle={Flow matching achieves almost minimax optimal convergence},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2OMyAFjiJJ},\nnote={under review}\n}" }, "abstract": { "value": "Flow matching (FM) has gained significant attention as a simulation-free generative model. Unlike diffusion models, which are based on stochastic differential equations, FM employs a simpler approach by solving an ordinary differential equation with an initial condition from a normal distribution, thus streamlining the sample generation process. This paper discusses the convergence properties of FM in terms of the $p$-Wasserstein distance, a measure of distributional discrepancy. 
We establish that FM can achieve an almost minimax optimal convergence rate for $1 \\leq p \\leq 2$, presenting the first theoretical evidence that FM can reach convergence rates comparable to those of diffusion models. Our analysis extends existing frameworks by examining a broader class of mean and variance functions for the vector fields and identifies specific conditions necessary to attain these optimal rates." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "flow matching", "generative model", "convergence rate", "optimality" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f12a968546cfc82dffae091c3f5ca5337aefbfe7.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Flow matching achieves almost minimax optimal convergence" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2OegVbwvY2
ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models
main
Active
vision-language models;prompt-tuning;black-box optimization;zeroth-order optimization
applications to computer vision, audio, language, and other modalities
5;6;6;6
2;3;4;3
2;3;3;3
3;3;3;3
3;3;4;4
5.75
3
2.75
3
3.5
0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- ZIP is well-motivated.\n- The paper is well-organized.\n- Empirical analyses of the proposed method are sufficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ZIP for efficient zeroth-order prompt-tuning of black-box vision-language models. ZIP addresses the challenge of excessive query requirements in existing black-box prompt-tuning methods by reducing problem dimensionality and gradient estimate variance through feature sharing and intrinsic-dimensional gradient clipping. ZIP demonstrates significant improvements in few-shot accuracy and query efficiency over other existing methods. Various experiments on image classification show the effectiveness of ZIP." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I'm not familiar with this research field, i.e. black box prompt tuning. Therefore, it's hard for me to accurately judge the novelty of the proposed method compared with existing works. \n\nFrom my perspective, one major weakness is that I find the competitors in the experiments are slightly old, e.g. 
BLACKVIP is published at CVPR'23 and BPTVLM is published at IJCAI'23. There are some more recent works like [a][b] in this field. I think the authors should better discuss the differences between ZIP and more recent works like [a][b], and provide fair experimental comparisons as well. \n\n[a] Language Models as Black-Box Optimizers for Vision-Language Models, CVPR 2024, https://openaccess.thecvf.com/content/CVPR2024/html/Liu_Language_Models_as_Black-Box_Optimizers_for_Vision-Language_Models_CVPR_2024_paper.html\n[b] Connecting the Dots: Collaborative Fine-tuning for\nBlack-Box Vision-Language Models, ICML 2024, https://arxiv.org/abs/2402.04050" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* It is not until the background section that I understood what zeroth-order intrinsic-dimensional prompt-tuning means. I suggest to improve the introduction to make it clearer from early on.\n* In figure 2, it would be good to add a baseline of accuracy when no soft prompts are optimized (i.e. m=0).\n* Where are the learned soft prompts injected? Are they concatenated to text embeddings and fed to CLIP's text encoder?\n* In table 3, the average accuracies for CDT between ZIP and the second-best method seem very close. Did authors run a significance test?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Good motivation to reduce the number of learnable parameters in ZO optimization (section 3) and clever idea to reduce the intrinsic dimensionality while maintaining the number of tokens (and the extrinsic dimensionality, which is a requirement from the model being optimized).\n* Several techniques (diagonal matrix, parameter sharing) are applied to preserve performance while reducing the number of learnable parameters.\n* The proposed method not only improves few-shot performance wrt existing ZO methods but also reduces considerably the number of function evaluations required to reach a certain level of performance (section 5.3).\n* All the design choices for the soft prompt reparameterization are thoroughly ablated in section 6.\n* The paper is clearly written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method to optimize black-box models without the need for computing gradients (zeroth-order). The key observation is that increasing the number of learnable parameters in soft prompts hurts the performance and training speed of zeroth-order optimization, while this trend is reversed for SGD-based prompt tuning (first-order). To overcome this, authors propose to reparameterize soft prompts in order to reduce the effective number of learnable parameters while maintaining the extrinsic embedding dimensionality. The proposed reparameterization involves projecting parameters into a diagonal matrix, feature sharing and gradient clipping. In addition, reducing the number of learnable parameters results in increased query efficiency (reduced number of forward passes through the model). 
The proposed method is applied to black-box prompt-tuning of a CLIP model, and evaluated on a suite of standard vision-language benchmarks, achieving improvements of 6% in few-shot accuracy and 48% in query efficiency compared to the best performing existing methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Authors motivate fine-tuning black-box models with the use case of improving proprietary LLMs (e.g. GPT-4, Gemini) which are only accessible through API. However, this interface only accepts text and images as input, not soft prompts or embeddings, so the proposed method would not be directly applicable to API-based models.\n* To verify the method's robustness and generality, it should be evaluated on other model families such as multimodal LLMs.\n* Figures 2, 4, 6 and 7a should report validation accuracy since there could be overfitting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "(1) In Section 4.2, the paper introduces feature sharing to enhance expressiveness. Could the authors clarify whether this feature sharing technique affects the generalization ability on unseen datasets, and if so, how?\n\n(2) ZIP has demonstrated strong results across vision-language tasks, but could the authors provide more insights into its potential for domain generalization? 
Specifically, how well does ZIP adapt to unseen domains or datasets outside the evaluated benchmarks, and would any adjustments be necessary to improve its robustness in such scenarios, such as the settings used in CoOp and CoCoOp? \n\n(3) Could the authors elaborate on the sensitivity of ZIP to the choice of intrinsic dimensionality and low-rank approximation parameters? How do these choices impact both performance and query efficiency?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) The paper is well-organized and accessible, with clear visuals and structured explanations that effectively communicate the method's strengths.\n\n(2) ZIP innovatively enhances zeroth-order prompt tuning through intrinsic-dimensional gradient clipping and low-rank parameterization, making it highly efficient.\n\n(3) Comprehensive evaluations demonstrate ZIP's superior accuracy and query efficiency across 13+ tasks, proving its practical value under query constraints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces ZIP, a zeroth-order prompt tuning method designed for efficient prompt optimization in black-box vision-language models, particularly under limited query budgets. ZIP achieves high efficiency by using low-rank representations and intrinsic-dimensional gradient clipping, which reduces query usage while maintaining robust performance. Evaluations on multiple benchmarks show that ZIP not only outperforms state-of-the-art methods in accuracy but also greatly enhances query efficiency." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) While ZIP outperforms existing BBPT methods, comparisons with additional baseline methods in zeroth-order optimization could strengthen claims of superiority.\n\n(2) While ZIP shows strong performance on various tasks, its results on ImageNet in Table 1 are comparatively modest, suggesting limitations in scalability to complex datasets. An in-depth analysis of ZIP's performance on larger, diverse datasets would clarify its robustness and potential for broader application." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethical concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Suggestions: \n\nThe caption for Figure 1 should include citations for the baseline methods (BAR, BlackVIP, BPT-VLM) to provide appropriate references and context for these comparisons. This would enhance clarity for readers unfamiliar with these specific methods." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.The paper presents a novel black-box prompt-tuning method, effectively addressing the issue in zeroth-order methods where an increase in trainable parameters adversely impacts accuracy. 
By reducing the number of parameters and query requirements, the proposed approach is well-suited for practical applications with limited query budgets.\n\n2.The paper demonstrates strong performance across three extensive and diverse experimental settings, which effectively validate the method’s efficacy. The ablation studies further support the approach, particularly highlighting that the feature-sharing technique helps preserve the model’s expressive capacity. \n\n3.The intrinsic-dimensional clipping mechanism in ZIP requires no manual hyperparameter tuning, making it highly practical and user-friendly. \n\n4.The paper is well-written, with clear explanations and logical organization that make the proposed method and its contributions easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ZIP, a zeroth-order intrinsic-dimensional prompt-tuning method designed to efficiently optimize black-box vision-language models. By leveraging low-rank approximation, feature sharing, and intrinsic-dimensional gradient clipping, ZIP achieves faster training speeds and superior generalization performance while significantly reducing query requirements. Extensive experiments on diverse tasks demonstrate ZIP's robustness and query efficiency, outperforming existing BBPT methods and establishing it as a practical approach for resource-constrained scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Although the paper performs ablation studies on individual modules such as low-rank approximation with a diagonal matrix and feature sharing, it lacks ablation experiments on different combinations of these modules. Without evaluating different combinations, it is challenging to fully understand the synergistic effects and the relative contributions of each module to the overall performance. 
\n\n\n\n2.The paper lacks an ablation study to isolate the effect of low-rank approximation alone, making it unclear if improvements are mainly due to the diagonal matrix. This analysis would clarify the diagonal matrix's contribution." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Zeroth-order Intrinsic-dimensional Prompt-tuning (ZIP), a method that reduces query demands in black-box prompt-tuning by optimizing in a lower-dimensional space with a robust clipping mechanism." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024zip,\ntitle={{ZIP}: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2OegVbwvY2},\nnote={under review}\n}" }, "abstract": { "value": "Recent research has introduced various approaches for prompt-tuning black-box vision-language models, referred to as black-box prompt-tuning (BBPT). While BBPT has demonstrated considerable potential, it is often found that many existing methods require an excessive number of queries (i.e., function evaluations), which poses a significant challenge in real-world scenarios where the number of allowed queries is limited. To tackle this issue, we propose Zeroth-order Intrinsic-dimensional Prompt-tuning (ZIP), a novel approach that enables efficient and robust prompt optimization in a purely black-box setting. The key idea of ZIP is to reduce the problem dimensionality and the variance of zeroth-order gradient estimates, such that the training is done fast with far less queries. We achieve this by re-parameterizing prompts in low-rank representations and designing intrinsic-dimensional clipping of gradients. 
We evaluate ZIP on 13+ vision-language tasks in standard benchmarks and show that it achieves an average improvement of approximately 6% in few-shot accuracy and 48% in query efficiency compared to the best-performing alternative BBPT methods, establishing a new state of the art. Our ablation analysis further shows that the proposed clipping mechanism is robust and nearly optimal, without the need to manually select the clipping threshold, matching the result of expensive hyperparameter search." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vision-language models", "prompt-tuning", "black-box optimization", "zeroth-order optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/b9341894acc4463fed56747e96393dc08974f6b3.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ZIP: An Efficient Zeroth-order Prompt Tuning for Black-box Vision-Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Oh2EOcFSO
Can a Bayesian oracle prevent harm from an agent?
main
Active
AI safety;probabilistic guarantees;guardrails;safe-by-design AI;Bayesian inference;posterior convergence
alignment, fairness, safety, privacy, and societal considerations
5;5;5;6
3;3;4;2
4;3;3;3
2;3;2;2
2;2;3;3
5.25
3
3.25
2.25
2.5
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This is a very well written paper, and it is easy to follow. \n\nThe proposed approach represents a promising initial step toward designing AI systems that ensure safety through built-in probabilistic guarantees, rather than relying solely on external safety mechanisms.\n\nThe authors also outline several open problems for future work." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the problem of designing AI systems that satisfy probabilistic safety guarantees. Within a Bayesian framework and given the safety specifications (as a probability), the authors provide risk bounds for potentially harmful decisions, showing that the probability of harm can be upper-bounded by a probability that can be estimated by approximating the Bayesian posterior over theories given the observed data. They study two settings, the i.i.d. case and the non-i.i.d. case, and provide a simple experiment to evaluate the performance of safety guardrails."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors present an upper bound on the harm probability, though it appears to be highly conservative. It would be valuable if they could offer a convergence rate or practical guarantees to make the framework more usable. Additionally, it is unclear how this approach compares to other conservative methods for preventing harm. \n\nSince the theoretical results lack practical assurances, I would have appreciated more experimental validation, especially in complex and realistic settings. \n\nObtaining a Bayesian oracle could be very challenging (posterior distribution). \n\nOverall, while the paper introduces a promising method for designing safer AI systems, it would greatly benefit from additional components (both theoretical and experimental) before publication." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1. How do you envision these guardrails to be applied in realistic scenarios? For example, consider the situation of a language model trying to obtain your passwords, or an autonomous car trying to crash with another vehicle. Could this notion of harm be applied efficiently to these realistic scenarios?\n\nQ2. How sensitive are the results to the choice of priors in the Bayesian framework? 
Can the authors discuss the robustness of the proposed approach under different prior choices?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1. The topic of AI safety is timely and relevant for ICLR.\n\nS2. The theoretical results (as far as I could check) are sound.\n\nS3. The experimental evaluation serves to showcase how these bounds could be used in a realistic scenario." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of bounding the probability of some event in the context of an unknown but consistent distribution and a Bayesian setting. \n\nThe paper is motivated by the prevention of harm by AI agents. In short, harm is inherently unavoidable since in real applications we have no direct access to the distribution governing the environment. However, if we assume a fixed distribution, and a prior assumption of that distribution, we can get better and better approximations when data is presented to us, by using the data to update our prior knowledge of the distribution. With this, we can theoretically bound the probability of doing harm. In deployment, actions whose probability of harm is larger than some threshold can be blocked.\n\nThe paper explores two cases: incoming data as iid and non iid, and obtains bounds on the probability of harm in both cases.\n\nThe paper presents an experimental evaluation on a multi-armed bandits example, blocking actions that are considered unsafe according to the different bounds obtained as well as a baseline (with an unrealistic assumption of the underlying model). The paper ends with a discussion of the open problems still to be solved to be able to use this method as a reliable guardrails for AI agents." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. I understand the appeal to frame this work in the context of harm by an AI agent, and I think it is an interesting point. However, there is nothing inherent to \"harm\" in the concept presented. The concept of \"harm\" could be substituted by \"reward at a state\" and we could be discussing the same results in a different light. I think the paper may benefit from a more general motivation.\n\nW2. While the experimental evaluation is welcome, it is a very simple example, and one wonders if these theoretical bounds would find applicability in problems that are more complex and close to the real applications of guardrails.\n\nW3. The concept of guardrails presented here, as an algorithm that blocks an action if it shows an expected harm larger than some threshold, is very similar to the concept of probabilistic shielding in MDPs [1] (which is essentially the \"cheating\" guardrail in Sec. 5), and this can be extended to partially observable MDPs to eliminate the (unrealistic) assumption of having full knowledge of the ground truth [2]. The paper would benefit from comparing to these methods, especially with [2].\n\nW4. The paper does not engage with some recent work on defining harm in similar scenarios, see for example [3] or [4]. It could be useful to understand, in light of different definitions of harm, whether the results are specific to harm prevention, or can be framed in a more general understanding of bounds over rewards.\n\n\n\nOTHER (MINOR) REMARKS\n\nR1. The paper is mathematically dense and difficult to follow in parts. I'm not sure whether this is a weakness on its own, but I have the feeling that the ideas conveyed are simpler than the dense mathematical presentation seems to suggest. \n\n\nREFERENCES\n\n[1] N. Jansen et al. Safe Reinforcement Learning Using Probabilistic Shields. CONCUR 2020.\n\n[2] S. Carr et al. 
Safe Reinforcement Learning via Shielding under Partial Observability. AAAI 2024.\n\n[3] S. Beckers et al. Quantifying Harm. IJCAI 2023.\n\n[4] J. G. Richens. Counterfactual harm. NeurIPS 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How does the upper bound in Prop. 3.4 apply if the prior distribution $P$ is misspecified?\n2. How does this work compare to the existing literature on concentration bounds in the Bayesian setting? For instance, these methods could include analysis of Bayesian regret in RL, and PAC Bayes." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-organized and clearly written. All the theoretical assumptions have been stated.\n- The proposed concentration results seem reasonable. The derivations seem technically sound.\n- Training large AI systems to satisfy certain safety criteria (i.e., with guardrails) is an exciting problem. This paper formulates this problem as a hypothesis-testing problem and presents non-trivial algorithms to perform the test. This problem formulation could be inspiring for other AI researchers across domains." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the problem of evaluating an unknown world model from observed data to determine whether it satisfies a certain safety metric. The safety metric, or guardrail, is a binary variable $H$, taking other variables in the world model as input. The authors utilize a Bayesian approach. It assumes access to the actual prior distribution over the ground-truth world model. The authors first prove that under certain parametric assumptions, the posterior distribution over candidate models will uniquely converge to the ground-truth model at the limit of large samples. Building on these concentration results, the authors derive an upper bound over the posterior probability of the harmful event $H = 1$ conditioning on the observed data. This concentration bound is then extended to non-i.i.d. settings where observed samples are correlated. Finally, simulations were performed, and results supported the proposed theory." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The concentration result in Prop. 3.1 assumes \"all theories in $M$ are distinct as probability measures.\" This assumption does not seem to hold for many common probabilistic models. For instance, in the linear component analysis, the number of independent components is generally not uniquely discernible (i.e., not identifiable) with non-linear mixing functions. Also, the number of latent components in Gaussian mixtures is generally not identifiable from the observed data. This seems to suggest that the application of the proposed concentration results might be limited.\n- The proposed concentration results also assume access to the actual prior distribution generating the ground-truth world model. 
It is unclear whether the upper bound could still hold when the prior distribution is unknown and misspecified.\n- Other concentration bounds exist over the target estimates using Bayesian methods. Generally, one should be able to translate empirical concentration bounds to the Bayesian settings. For instance, (Osband & Van Roy, ICML'17) translates the concentration bounds for online reinforcement learning to Bayesian regret. How does the proposed method compare to other related work? This paper should include a section discussing related work in large deviation theory and how this paper is situated in the existing literature.\n\n- Reference: _\"Osband, Ian, and Benjamin Van Roy. \"Why is posterior sampling better than optimism for reinforcement learning?.\" International conference on machine learning. PMLR, 2017.\"_" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can you detail how you can utilize the CLT to obtain convergence rates (line 238ff)? If you applied this to the example in section 5, would it yield practical bounds?\n\n2. Are the results in Propositions 4.4 to 4.6 tight (in a similar vein as Remark 4.3 shows for Proposition 4.2)?\n\n3. How do you motivate the definition of $\\mathcal{I}^{\\alpha}_{1\\colon t}$ and have you considered different approaches?\n\n4. Can you provide some heuristics on choosing a safe, yet effective $\\alpha$ a priori? 
Which information might be helpful for this from e.g. which model parameters have the biggest impact on $\\alpha$ and what information from a domain expert could be incorporated?\n\n5. Can you make any predictions on how your proposed guardrails perform on larger, more complex models? In particular, how do you expect the overestimation of harm (see Fig. 2) to be affected?\n\n6. Are there any existing works in which your framework fits, i.e. for which you can give (probabilistic) guarantees where they were previously unavailable?\nIf not, are there certain settings in which you can make reasonable a priori assumptions such that your framework is applicable and concrete guarantees can be derived for a given data set?\n\nminor comments:\n- line 96: explain what $q$ is\n- line 193: introduce delta as Dirac notation beforehand\n- the axis and legends in the figures in section 5 are barely readable" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The main strength of the paper is the introduction of a (as far as I am aware) novel view of safe-by-design in AI at runtime and opening the possibilities for future Bayesian methods to utilize the safety guarantees shown in the paper. Allowing for safety guarantees and steering future work in a direction that emphasizes those is a significant problem in AI.\nI especially appreciate the discussion on open problems of the approach in the conclusion.\n\nThe theory being developed is also quite general and spans over a wide range of possible systems/problems.\n\nThe paper is well motivated and generally well structured, introducing formal concepts as needed in the respective sections. A small experimental evaluation is performed and well discussed. Proofs are provided in the appendix and I could not find any mistakes."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is tackling the problem of safety in AI. The authors take the view of defining safety as avoiding certain undesirable states in specific contexts.\nThey introduce a framework based on Bayesian inference from which an agent can derive safe policies that come with (probabilistic) guarantees of preventing harm.\nThe approach is safe-by-design, i.e. able to prevent undesired outcomes even if no concrete example of harmful states was ever observed in the system." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main critique points of the paper are the lack of technical novelty (or at least it is not clarified enough if there are new results) and questions on applicability.\n\nFor the former, essentially all Propositions and Lemmata are either adaptations of well known results (Prop 3.1), taken from previous literature (Lemma 4.1, Prop 4.2), or rather simple corollaries derived from them (Lemma 3.3, Props. 3.4, 4.4, 4.5, 4.6). For Prop. 4.2 it is shown that the result is tight (Remark 4.3). To me it did not become clear whether this is a known result or a new contribution. It is also not clear whether the derived results (Props. 4.4, 4.5, 4.6) are also tight as a consequence or whether there is room for improvement.\nFor Prop. 4.5 and 4.6 in particular, restricting the possible world model indices to $\\mathcal{I}^{\\alpha}_{1\\colon t}$ is essential, however, the choice of definition of $\\mathcal{I}^{\\alpha}$ is not really motivated. At the same time, Fig. 2(a) shows a substantial gap between applying Prop 4.6 in practice, and the theoretical optimum. This begs the question whether a different definition of $\\mathcal{I}^{\\alpha}$ (e.g. a simple cutoff, or requiring $\\mathcal{I}^{\\alpha}$ to have a certain probability mass) has potential to yield tighter bounds. 
However, as the definition of $\\mathcal{I}^{\\alpha}$ is not motivated, these questions remain unaddressed.\n\nFor the applicability, my core concern is that the main problem in AI is not providing safety guarantees under certain assumptions, but rather designing a Bayesian agent that actually works well for a given problem while satisfying these assumptions. To go into detail, section 3 only provides \"law of large numbers\"-style guarantees which are not useful in practice. A small paragraph on the rate of convergence (which would be very helpful to know) is included but essentially is very problem-dependent and thus not discussed in detail in this more general framework. In the experimental evaluation, where Prop. 3.4 is utilized, it is not even clear whether $t$ is large enough for the guarantee statement of Prop. 3.4 to hold (on top of Prop 3.4 not being applicable due to non-i.i.d. as the authors mention themselves). Section 4 then relaxes to probabilistic guarantees, which is a more practical approach. However, to apply the results of section 4 in practice it ultimately relies on defining a hyperparameter alpha. On the theoretical side, the guarantees in section 4 only hold if alpha is chosen small enough (which is impossible to know without knowing the system in the first place) and on the practical side, the evaluation in section 5 shows that choosing alpha too large can have catastrophic consequences, even for the simple bandit system considered in section 5. In summary, I do not see any immediate way to take advantage of the theoretical results the paper provides. 
This is also amplified by the fact that the main body essentially does not discuss related work, and how existing approaches can be embedded into the framework.\n\n*These weaknesses make the paper feel like more of a statement paper with some additional mathematical background, rather than a fully fledged research paper.*\n\nAs a minor comment, from a reader's POV, the paper can be hard to follow at times, especially in the formal sections. Many paragraphs are written in a very technical way, assuming a deep mathematical background. While this surely can be expected from an audience like ICLR, I feel like many sections disrupt the flow of the paper, e.g. the two paragraphs \"Setting\" (l.155ff and l. 268ff). While these are definitely important to make the paper rigorous, they are not strictly required to convey the main ideas of the paper. In the interest of readability, it might be advantageous to instead outsource the technical definitions to a separate section." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We explore Bayesian posterior convergence results that provide probabilistic safety guarantees by estimating context-dependent bounds on safety violations to reject dangerous actions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024can,\ntitle={Can a Bayesian oracle prevent harm from an agent?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Oh2EOcFSO},\nnote={under review}\n}" }, "abstract": { "value": "Is there a way to design powerful AI systems based on machine learning methods that would satisfy probabilistic safety guarantees? With the long-term goal of obtaining a probabilistic guarantee that would apply in every context, we consider estimating a context-dependent bound on the probability of violating a given safety specification. 
Such a risk evaluation would need to be performed at run-time to provide a guardrail against dangerous actions of an AI. Noting that different plausible hypotheses about the world could produce very different outcomes, and because we do not know which one is right, we derive bounds on the safety violation probability predicted under the true but unknown hypothesis. Such bounds could be used to reject potentially dangerous actions. Our main results involve searching for cautious but plausible hypotheses, obtained by a maximization that involves Bayesian posteriors over hypotheses. We consider two forms of this result, in the i.i.d. case and in the non-i.i.d. case, and conclude with open problems towards turning such theoretical results into practical AI guardrails." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "AI safety", "probabilistic guarantees", "guardrails", "safe-by-design AI", "Bayesian inference", "posterior convergence" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dc11915699469301df72fbbd591ef4f4bd86e387.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Can a Bayesian oracle prevent harm from an agent?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2P4p4RxUxT
Conformal confidence sets for biomedical image segmentation
main
Active
Deep learning;neural networks;uncertainty quantification;confidence sets
applications to computer vision, audio, language, and other modalities
3;5;6;8
4;3;2;4
2;2;3;4
1;2;2;3
2;2;3;3
5.5
3.25
2.75
2
2.5
-0.083624
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weakness part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The topic of this work is quite interesting. By proposing the concept of conformal confidence sets, this work could provide spatial uncertainty guarantees for the outputs of image segmentation models.\n2. Theoretical proofs are well formulated to serve as a strong proof for this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Authors develop confidence sets providing spatial uncertainty guarantees for outputs of a black-box machine learning model designed for image segmentation. Specifically, this paper adapts conformal inference to the imaging setting, obtaining thresholds on a calibration dataset based on the distribution of the maximum of the transformed logit scores within and outside of the ground truth masks. Qualitative evaluations are implemented on a polyp tumor dataset to demonstrate the effectiveness of this approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. An obvious typo, “polpys”, appears many times, even in the abstract. It should be “polyps”.\n2. 
It would be more convincing if the authors could provide quantitative results for the segmentation performance of polyp segmentation. The evaluation metrics include Dice, Precision, Recall, etc. For comparable baseline models, authors could choose PraNet, SANet, etc.\n3. Since the concept of conformal confidence sets can be generalized to other medical image segmentation tasks, maybe more public datasets are applicable to this work, such as vertebrae or tooth segmentation.\n4. Some technical terms need to be further explained for a better understanding, such as FWER/FDR/FDP in the introduction part." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "none" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* You are testing on public data. Has your pretrained polyp segmentation algorithm been trained on the same public data? \n* Are there any subsequent video frames in the dataset, or images of the same polyp / patient? If there are, did you stratify your training / testing set accordingly? \n* Please remove the reference to tumors throughout the paper. Polyps may be precursors to tumors, but they aren't tumors themselves. \n* You are using a dataset from different centers; there may be systematic differences in how the polyp areas are annotated - some annotators being more inclusive with respect to surrounding tissue, others being less. How does this variability impact your measure? 
\n* I might have missed it but what is the accuracy of your underlying segmentation algorithm? I would be under the impression that it is a well-performing algorithm on a rather easy segmentation task? How does your approach relate to extrema in algorithmic performance, i.e., perfect segmentations or complete misses? \n* You are stating \"In order to make efficient use of the data available, the learning dataset can in fact contain some or all of the data used to train the image segmentor.\" Your training data may be fairly overfitted, impacting your logit score and, hence, your choice of margin (logit/distance, thresholds). Wouldn't it be a safer approach to generate cross-validated logit functions and use them in the comparison?\n* I understand that the primary contribution of this study is the theory offered. Still, you are stressing that your algorithm is a very lightweight addition to any pretrained segmentation algorithm. And there are a lot of standard computer vision / biomedical image data sets for segmentation available, as well as pretrained algorithms. Would you be able to generate segmentation maps for predefined certainty levels, and compare these levels with the testing performances across a larger set of applications? It would be quite convincing, if e.g., your 90% certainty map of the outer margin would indeed include 90% of the pixels of a test set or lead to a sufficiently large overlap (that has previously been defined) in 90% of all test cases." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The authors present the problem in a formal manner, relating it to existing work. \n* The overall problem addressed is relevant."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors formally present an approach that aims at inferring uncertainty margins to segmentations. They propose to either take the logit score of a CNN and threshold it to obtain this margin, or to threshold at a certain distance to the predicted segmentation. The threshold and type of margin (logit score / distance) are to be identified experimentally for a given dataset. Experiments on one public dataset are shown (containing still images from minimally invasive surgery)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The motivation for the score functions (logit, distance, ...) is weak. The necessity to choose the type and to even mix them gives the overall approach a bit of a heuristic touch. (While I do understand that you would consider your contribution here to be in the formal derivation of underlying theory, i.e., very much the opposite of a heuristic.)\n* The experiments only provide insights into one very narrow application. They are merely fulfilling the purpose of an illustration of the problem, but not a validation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do the results generalize to other datasets and segmentation of multiple structures?\n- How does the uncertainty quantified by the proposed method relate to the real uncertainty (assuming it can be measured by the disagreement between multiple experts)?\n- How can one use the proposed method in a practical application? Can we get samples of plausible segmentations within the margin outputted by the algorithm?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The idea of using transformed max logit scores is a simple but quite effective strategy to produce conformal segmentation sets.\n- The presented experiments show the effectiveness of the method compared to using non-transformed logits." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a conformal prediction based method to quantify the uncertainty for medical image segmentation. The proposed method is particularly designed for pre-trained segmentation models which notoriously make overconfident and wrong predictions. The proposed method learns thresholds using the maximum logit scores from a calibration set for the inside and outside of the ground truth masks and applies them to the logit scores of the test image to return a conformalized segmentation prediction which is guaranteed to include the ground truth segmentation. The paper shows that naively learning the outside thresholds on max logits is not optimal and proposes to transform the scores using a distance to make sure that far away pixels have lower scores.
The method is validated on a single dataset for polyp segmentation and the results show that the proposed method produces conformal sets with narrower boundaries compared to using scores which are not transformed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1- Although I found the proposed idea of transforming max logit scores interesting, I don't think that the paper presents enough contribution to be presented in ICLR. The idea of applying conformal prediction to max logits for inside and outside of the boundaries is a direct extension of initial conformal prediction methods developed for segmentation, and applying transformations based on distance is an intuitive choice to refine predicted boundaries.\n\n2- The paper does not present any comparisons with the existing conformal prediction works for image segmentation.\n\n[1] Mossina et al. Conformal Semantic Image Segmentation: Post-hoc Quantification of Predictive Uncertainty, CVPR Workshops, 2024,\n\n3- The method is evaluated on only a single dataset. Multiple datasets should be included to make sure that the performance generalizes across datasets.\n\n4- In many segmentation tasks, we are interested in segmenting multiple structures. The paper only focuses on binary segmentation. I think the method should be validated on multi-class setting to make sure that it is also applicable in that setting.\n\n5- The explanation of how the method is applied at test time could also be clearer. As I understand it, during testing, the method applies the inner threshold on max logits to find inner boundaries, then applies a distance transformation based on each pixel’s distance from these inner boundaries, and finally applies an outer boundary threshold. 
However, the exact steps of the algorithm during test time need more clarification.\n\n6- In conventional uncertainty quantification algorithms for segmentation such as [2, 3] the uncertainty is quantified by the variance of the segmentation samples generated from the posterior distribution. How can the quantification be done in this case? Is it the margin between the inner and outer boundaries? Does the uncertainty quantified by the algorithm correlate with the uncertainty in the input image? For example, does the method output larger margins when there is greater disagreement between the segmentations of different experts? \n\n[2] Kohl et al. A Probabilistic U-Net for Segmentation of Ambiguous Images\n[3] Erdil et al. MCMC Shape Sampling for Image Segmentation with Nonparametric Shape Priors\n\n7- The margin between the inner and outer boundaries appears quite large and there can be many implausible segmentations within this area. For practical applications, an uncertainty quantification method should ideally produce a set of plausible segmentation samples within this margin, rather than simply indicating a large margin that may or may not include the ground truth segmentation. How could one obtain a plausible segmentation sample from this margin?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Couldn't a related/similar smooth distance be defined using kernels?\n- What is called \"original scores\", is this when you use the identity score transformation?\n- What are the dashed lines in Figures 4 and 5?\n\nMajor comments:\n- Add labels and/or legends to the rows and columns of the figures.\n\nMinor comments:\n- The word \"polyp\" is misspelled in different ways in almost every instance. Do check this.\n- It says \"... the set a side [num] images ...\", or something similar, a few times. Check the grammar there." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is well-written and clear, although it took a second read-through to fully understand. The proposed method seems to work very well, and the presented experiments are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a conformal prediction method that computes confidence sets with spatial uncertainty guarantees in image segmentation from any machine learning model. They illustrate the usefulness of the proposed method on medical images." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am missing more quantitative results. For instance, aggregated coverage scores (e.g., mean; or other metrics, e.g., evaluate Equations 1 and 2) for the different versions on more than one dataset. 
This comparison should then also include some existing methods, to illustrate the relative strengths of different methods.\n\nAs just mentioned, for the results to be more convincing, I would also like to see examples on more than just one dataset.\n\nAlso, there must be other score transformation functions that could also be evaluated. Testing a couple more could strengthen the results and make them more convincing." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Conformal uncertainty quantification for the output of black-box image segmentation models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024conformal,\ntitle={Conformal confidence sets for biomedical image segmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2P4p4RxUxT},\nnote={under review}\n}" }, "abstract": { "value": "We develop confidence sets which provide spatial uncertainty guarantees for the output of a black-box machine learning model designed for image segmentation. To do so we adapt conformal inference to the imaging setting, obtaining thresholds on a calibration dataset based on the distribution of the maximum of the transformed logit scores within and outside of the ground truth masks. We prove that these confidence sets, when applied to new predictions of the model, are guaranteed to contain the true unknown segmented mask with desired probability. We show that learning appropriate score transformations on a learning dataset before performing calibration is crucial for optimizing performance. We illustrate and validate our approach on a polyp tumor dataset.
To do so we obtain the logit scores from a deep neural network trained for polyp segmentation and show that using distance-transformed scores to obtain outer confidence sets and the original scores for inner confidence sets enables tight bounds on tumor location whilst controlling the false coverage rate." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Deep learning", "neural networks", "uncertainty quantification", "confidence sets" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7ed6f416e4606e9e482b998099e052cb8271d195.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f4741fc08a0ec78bc990084d612de685f11ab037.zip" }, "title": { "value": "Conformal confidence sets for biomedical image segmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2PKLRmU7ne
In-context learning and Occam's razor
main
Active
generalization;complexity;compression;in-context learning;meta-learning
transfer learning, meta learning, and lifelong learning
5;5;5;5;8
2;4;4;3;4
1;3;2;3;3
2;3;2;3;3
1;3;3;3;3
5.6
3.4
2.4
2.6
2.6
0.375
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Figure 2b: Why does the Transformer without a bottleneck perform worse than the one with a bottleneck? Intuitively, one would expect that a Transformer with a bottleneck would lose essential information necessary for predicting the query’s label, making this result seem suspicious.\n2. Regarding experimental details: I found that the Transformer without a bottleneck and the Transformer with a bottleneck were presented with different input formats—one with (x,y) concatenated and the other without. Why is this the case? This setup does not provide a fair comparison between the two models.\n3. In the setting where the Transformer is trained with train-risk ICL: Given a total sequence length k (following the notation in line 265), do you break the sequence into k subsequences of length j, where $j\\in [k]$, or pass it as a whole sequence, relying on causal attention to prevent future information leakage? If it’s the latter, how do you select the query x? If x is not x_i in the sequence, then it’s not guaranteed that the query x is included in the context x_{1:j}. If it is x_1, would this allow the model to learn a shortcut solution, potentially biasing its predictions?\n4. Following the previous question: If a sequence of length k is always broken into k subsequences, why use a decoder-only Transformer? 
If my understanding is correct, there should be no causality requirement in the context.\n5. Regarding the gap observed in performance: Why is the performance gap smaller for linear regression but larger for sinusoid regression and Mastermind? The authors attribute this to task difficulty, but the explanation feels vague. Fixing the function class and varying the problem dimension (as a more concrete indicator of task difficulty) might clarify this point, rather than relying on a vague explanation.\n6. Why does the MLP baseline generalize worse than the Transformer? Was model complexity minimized through regularization techniques, such as weight decay, in the MLP? This baseline offers limited insight into the results and seems to introduce some ambiguity. Additionally, what would be the Bayesian optimal model’s generalization error?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written, the theory is interesting, and the experiments serve the points well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper first provides a theoretical understanding that the success of ICL lies in its implicit optimization for both data fit and model simplicity through the lens of data compression. It then examines the case where the training objective is changed to minimize training error alone instead of the prequential code length, and finds that this exhibits worse generalization performance compared to the standard next-token prediction error." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some of the experiments are not rigorous enough. Please see questions below."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How is it ensured that $T_\phi$ in section 2.3 does not memorize? “To forbid $T_\phi$ from memorizing a single dataset, ..”\n2. Could the authors clarify what would change if the data were not iid? Do any of the results hold? In general ICL properties arise by simply training the next-token prediction loss without iid data. Could any of the results be generalized? \n3. It seems that the current setting assumes that the model is updated each time a token is predicted, but isn’t it the case that when training a model autoregressively, the model is updated with a gradient step over the cumulative loss over all the tokens of the sequence? And so, the loss is not the objective of the prequential code (eq. 4). Could the authors elaborate on why these two are equivalent? \n4. What is standard SGD vs ICL? Do the authors mean that they simply use an MLP in which the examples are concatenated and given as input to the MLP, rather than having them in the sequence length? I am not sure I understand this distinction, since the minimization of the cumulative loss over the next-token prediction also requires training a model with SGD. Could the authors clarify this setting further? \n5.
In section 3.2 the authors state: “For instance, when $T_\phi$ is a Transformer, the expressivity of the model it implicitly fits to the context scales with the number of activations in the network ($N$), whereas the expressivity of a DNN trained through SGD scales with the number of weights ($N^2$).” A Transformer in the attention layer has the multiplication of two $d\times n$ matrices, while it also has $d^2$ parameters for each weight matrix. Could the authors elaborate on how they deduce that the expressivity of a Transformer scales with the number of activations ($N$), and why for a DNN with the number of weights ($N^2$)? \n6. Could the authors think of some other setting which would not require altering the architecture for training with the target of only minimizing the training loss?\n7. In figure 3b, is the x-axis the data points seen during training or the context length? On the y-axis, is it the prequential code lengths or the error measured over some batch on the next token only? If the x-axis is the context length, how exactly is the generalization error measured? \n8. I think the paper would be improved by focusing more on the last setting of experiments, in which the theory does not provide any guarantees, to understand whether similar results would hold in the case of non-iid data." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed connection between the minimization of the next-token prediction loss and the prequential coding algorithm is interesting. Intuitively, as an observation, it makes sense that as the model is trained, it should learn to represent new data better, if there is any overlapping information between the different data points. In the specific setting, the data are iid and so the model should get better with each training point.
It is also interesting that this loss can be connected with minimizing the complexity of the model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper draws a connection between the objective used to minimize the next-token prediction loss, when training with iid data, and a compression algorithm called prequential coding. They show that, if the model is updated after predicting each token, then the minimization of the cumulative loss corresponds to the minimization of the objective of prequential coding, which serves as an upper bound for jointly minimizing the compression of the data plus the complexity of the model used. The authors also provide a set of experiments to corroborate their theoretical observations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In general, the ICL properties of models arise when training the next-token prediction loss, without iid data. The current results do not cover the next-token prediction in general. \n2. It seems that the current setting assumes that the model is updated each time a token is predicted, but isn’t it the case that when training a model autoregressively, the model is updated with a gradient step over the cumulative loss over all the tokens of the sequence?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Some small suggestions/questions:\n\n- Section 2.4 stops short of actually writing down the next-token prediction loss and doing the simple calculation that connects it to the prequential code length. Since this is claimed in the summary as one of the key contributions, it seems worthwhile to make this explicit.\n- Section 3.4 has a reference to Figure 2a (line 392) that should be 3a\n- Perhaps I’m confused but is line 1245 backwards? Isn’t your proposal that models trained with maximal length contexts should lead to worse generalisation? Perhaps I am misunderstanding what “need less tokens to arrive at simple models” means.\n\nIn conclusion, while I believe prequential coding is a promising direction to understand ICL, I cannot agree with the authors that their theoretical arguments succeed in linking the next-token prediction objective to Occam’s razor (line 502), in their current form.\n\nThings that might change my views:\n\n- A more detailed explanation of why I should believe (4) is an approximate equality (either theoretical or empirical)\n- A stronger link between the empirical work in Section 3 and the theory, explaining exactly how the experiments are predicted (as it stands, it reads to me as a few somewhat confirmatory pieces of evidence, but not strongly so)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- I find the prequential code-length perspective on the pre-training objective of transformers useful, it is relatively novel, and I think it is a promising route to understanding ICL. 
I did not think any of these things before reading this paper, which introduced me to the idea." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper the authors propose to re-examine the next-token prediction loss used to train sequence models such as transformers from the perspective of compression, and in particular prequential coding. This is an attractive idea that has been the subject of several recent works, including Delétang et al “Language modeling is compression” published in ICLR 2024, and it holds significant promise as a theoretical and empirical means to understand in-context learning (ICL) and more generally the generalisation behaviour of large language models.\n\nThe paper has three main components:\n\n(1) The observation that the next-token prediction loss is related to prequential code length\n\n(2) The relation between this code length and Kolmogorov complexity, which begets the claim that training transformers “explicitly” optimises an objective jointly minimising training error and model complexity (line 248), and\n\n(3) Experiments that aim to validate these theoretical claims, and suggest potential improvements to the design of transformers which will incentivise better ICL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I find the perspective adopted by the paper intriguing, however in its current form I do not think it has achieved its stated aims in any of three main components identified above:\n\n- The relation between next-token prediction loss and prequential code length appears not to be novel, as it is explained clearly in Delétang et al Section 2, and I think this is not sufficiently emphasised in the paper (their work is cited on line 250 for other reasons).\n- While Kolmogorov complexity is presented as playing a significant role in the framing of the theoretical contributions, I am not convinced of this in its current form. 
The inequality in (4) is of course true, but the major claims (about e.g. transformer training “explicitly” optimising some objective involving model complexity) seem to rely on this inequality being an approximate equality. This is justified in passing, very briefly, around line 162 and a reference is made to Blier-Ollivier (presumably to the discussion in Section 2.4) but I do not understand how this amounts to a strong justification of the approximate equality.\n- The experimental results seem a bit scattered, and I am unsure of how strongly they corroborate the theoretical claims. Taking Section 3.1 as an example, I think too much is left implicit about how this connects to the theoretical claims. I do not understand how these findings “follow directly from our theory” (line 312). I do not know how to judge whether or not a gap in performance between prequential and train-risk ICL below 10 datapoints in-context is actually significant." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A. See weakness." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Novel Perspective. 
Instead of studying the algorithmic aspect or mechanistic aspect of how LLMs perform in-context learning, this paper proposes a different yet novel perspective --- contrasting ICL with most current approaches in machine learning, and concluding, via Occam's razor principle, that ICL generalizes well across tasks without overfitting.\n\n2. Innovative Use of Prequential Coding for Complexity and Generalization. By framing prequential coding, the paper introduces a novel approach to balance model complexity and training accuracy. This insight offers a practical metric for understanding model simplicity.\n\n3. Comprehensive Empirical Validation. The paper validates its theoretical claims through a variety of experiments across different tasks, architectures, and training conditions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines in-context learning (ICL) through the lens of Occam’s razor, which suggests that the simplest model that explains the data is most likely to be the true one. This paper proposes that ICL's next-token prediction loss functions similarly to prequential coding. The authors argue that by training models on this principle, ICL can produce models that generalize well across tasks without overfitting, especially in low-data scenarios. \n\nThis paper shows that ICL aligns with Occam’s razor more closely than traditional methods that only focus on training error. They also reveal limitations of current ICL methods, which may underfit in large-data regimes and have varying performance based on model architecture. This paper suggests refining architectures and data distribution controls to improve ICL’s generalization and task adaptability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited Generalization to Non-IID and Complex Real-World Tasks.
While the paper effectively applies its theory to IID data, the assumptions might limit its relevance to more complex, non-IID real-world tasks, such as language data or continuously evolving data streams. \n\n2. Underexplored Architectural Dependencies. Although the paper observes that model architecture significantly influences the effectiveness of ICL, especially in low-data and high-complexity settings, it does not thoroughly explore or analyze which architectural features are most beneficial. A deeper investigation could be interesting.\n\nNonetheless, I don't think the two weaknesses here are significant. They are more of good-to-haves or future research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Figure 1.a: in the description of the prequential coding algorithm, there is a line D+= decode(d_next_encoded,p). I do not see where that decode function is defined. Can you add more details?\n2. Equation (4): the first equality says the code length is the sum of bits for all the data based on the learner. Do we also need extra bits to represent the learner itself? Maybe I missed something here. Please feel free to comment.\n3. Can you also provide some details on the approximation in (6)? Why is it an approximation, and what have we missed here? Thanks. \n\n\nFinally something minor: The first sentence in the abstract is a bold claim.
Even though I agree generalization is a key to machine learning, I would be cautious claiming that the (only) goal of machine learning is generalization." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Overall I like the perspective of this paper. Kolmogorov complexity nicely poses the learning and generalization problem as a compression of data and model. It is surprising to see nowadays that modern LLMs, trained simply on next-token prediction, generalize so well in downstream tasks with or without some fine-tuning. Any effort connecting the two is always welcome." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses an interesting topic, connecting Kolmogorov complexity and prequential coding with in-context learning. The authors first show the prequential code is a “good” algorithm to compress both data and model. And through meta-learning, the prequential code length could be minimized. In the setting of ICL, the meta learner and inner model for each task are unified as the sequence model. And the next-token prediction is equivalent to the prequential coding algorithm. Thus, through next-token prediction, the training error and model complexity are jointly minimized." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Unfortunately the draft, to me, lacks a sense of rigor. The connection stated in the draft looks like a good story, but there is not much guarantee. How well prequential code length approximates the Kolmogorov complexity is always a question mark. I feel it is a very loose bound. In the prequential coding algorithm, it is assumed that as the learner T sees more data, it will generalize better on the new data.
However there is no quantitative analysis on how this is measured. Any assumptions on the distribution of data? What is the sample complexity here?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The next-datapoint prediction error objective used in models that exhibit in-context learning can be seen as a meta-objective that optimizes for learners that not only explain their training data, but do so using the simplest model." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024incontext,\ntitle={In-context learning and Occam's razor},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2PKLRmU7ne},\nnote={under review}\n}" }, "abstract": { "value": "The goal of machine learning is generalization. While the No Free Lunch Theorem states that we cannot obtain theoretical guarantees for generalization without further assumptions, in practice we observe that simple models which explain the training data generalize best—a principle called Occam's razor. Despite the need for simple models, most current approaches in machine learning only minimize the training error, and at best indirectly promote simplicity through regularization or architecture design. Here, we draw a connection between Occam's razor and in-context learning—an emergent ability of certain sequence models like Transformers to learn at inference time from past observations in a sequence. In particular, we show that the next-token prediction loss used to train in-context learners is directly equivalent to a data compression technique called prequential coding, and that minimizing this loss amounts to jointly minimizing both the training error and the complexity of the model that was implicitly learned from context. 
Our theory and the empirical experiments we use to support it not only provide a normative account of in-context learning, but also elucidate the shortcomings of current in-context learning methods, suggesting ways in which they can be improved." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generalization", "complexity", "compression", "in-context learning", "meta-learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/16f49deac62703f0965e4efde8003db1b2360ea8.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "In-context learning and Occam's razor" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2PRpcmJecX
Global Convergence of Policy Gradient in Average Reward MDPs
main
Active
Policy Gradient;Reinforcement Learning;Average Reward MDPs
reinforcement learning
5;6;6;8
3;3;3;4
3;3;3;4
2;2;3;3
2;2;2;3
6.25
3.25
3.25
2.5
2.25
0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It is briefly touched upon in *Notes on Limitations and Future Work* that the approach can be generalized to \"parametric classes of policies\". I wonder if the authors have any rough ideas on how this could be done, and further, if it is also doable to extend the tabular MDP setting to generic MDPs with infinite state-action spaces (probably with function approximation, like linear/low-rank MDPs).\n2. The relationship with discounted-reward MDPs is discussed in Section 3.2, where it's written that \"the constants can be derived through an *analogous* process\". Is it possible to (at least) sketch how the final results should look like in the appendix?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is overall well-written, and the flow is friendly to first-time readers. \n2. The research problem is of theoretical interest and importance, which is sufficiently motivated and justified by a thorough review of literature.\n3. The technical contributions are solid, rigorous, and clearly articulated (as summarized in Section 1.2). The proofs are checked to be correct and are largely self-contained.\n4. 
Table 1 is especially appreciated since it gives a high-level yet clear idea of the instance-related constants involved in the bound.\n5. I like the discussion presented in Section 3.2 that relates the new results to existing results in the classical discounted-reward setting, as well as a brief hint on the reason why instance-specific bounds may be tighter and thus more useful in applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a comprehensive global convergence analysis for policy gradient in infinite-horizon average-reward MDPs. It proposes a novel proof framework for the smoothness of the average reward objective, which settles the intrinsic challenge of divergence faced by the standard analysis technique that regards the average-reward setting as a limiting case of the discounted-reward setting (as $\gamma \to 1$). Based on the smoothness results, it further analyzes the convergence properties of policy gradient in the average-reward setting, and concludes with an instance-specific convergence bound. Simulation results are presented to justify the analysis and reveal the influence of instance-related constants." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The simulation results do help to promote the understanding of the instance-related constants, but they could be improved to include more direct and more convincing evidence under the principle of controlled variables. E.g., exemplary MDP families might be explicitly constructed with certain constant(s) varying and all the others fixed, so that the curves clearly reflect how the performance depends on the varying constant(s).\n2. There are a few typesetting issues: (a) Use $\verb|\citep|$ and $\verb|\citet|$ correctly for the author-year format, and avoid using $\verb|\cite|$ — specifically, only use $\verb|\citet|$ when it's a part of the sentence. 
(b) On line 223 and below, use $\verb|\ll|$ ($\ll$) instead of $<<$. (c) There are a few typos and grammatical issues (e.g., the inconsistency of tenses used in the literature review, where I would recommend the use of present tenses only)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Policy Gradient (PG) and its variants are among the most interesting and important algorithms in RL. Their convergence properties for the class of discounted MDPs are very well-studied and by now well-understood. However, their counterparts for average-reward MDPs are less explored, especially when the interest lies in a globally optimal solution. This is mostly due to the challenges involved in the average-reward setting, rather than the interest in the problem. \n\nOne strength of the approach taken in the paper is to depart from the classical approach of using a discounted MDP as a proxy, which further leads to sub-optimal bounds. This way the authors eliminate the smoothness assumption that is typically made in the convergence analysis of PG in the context of discounted MDPs. \n\nThe paper is well organized. 
Its technical part is written mostly clearly and precisely, apart from some inconsistent or undefined notations (see comments below). However, there are some inconsistencies in the presentation and advertisement of the results between the introductory part and the main technical part; further on this below. The writing quality is overall fine, but some parts could still benefit from more careful polishing. \n\nAs a positive aspect, the paper delivers a good and accurate review of related literature, to my best knowledge. Yet another positive aspect is reporting numerical results, albeit on toy problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies convergence of Policy Gradient (PG) in average-reward MDPs and presents non-asymptotic bounds on the global convergence of an error function defined in terms of gains of the optimal policy and the output policy by PG. For the class of unichain MDPs (cf. Assumption 1), the authors present a convergence rate to the globally optimal solution (of the reward maximization problem in the long run), but without any assumption on the smoothness of the value functions involved. Such smoothness assumptions were key in the analysis in discounted MDPs. The presented convergence rates decay as $O(1/k)$ where the involved constants depend on MDP-dependent quantities. These results also lead to improved convergence analysis of discounted MDPs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Key Comments and Questions:\n-\n- The opening of the paper (Abstract and Introduction) talks about regret bounds for PG (scaling as $O(\log(T))$). Figuratively speaking, these are cumulative measures of error incurred by the algorithm. But they are not defined anywhere – or do I miss something? – and the core part of the paper only deals with per-step error measures. Please clarify. 
\n- Despite some interesting results, one key limitation of the paper is the restriction to the class of unichain MDPs (cf. Assumption 1). They are far easier to deal with and are much less relevant in modeling practical RL tasks when compared to the more interesting class of communicating MDPs. Without this assumption, one will not get a closed-form value function in Lemma 1, which is key to establish the results. In other words, it renders unlikely, in my opinion, that the technical tools developed or promoted here could be used beyond the restricted class of MDPs satisfying Assumption 1. \n- A key question is how bad the MDP-dependent constant $C_{PL}$ could be. Even though a convergence rate of $O(1/k)$ is superior to those decaying as $O(1/k^p)$ for some $p<1$, the involved MDP-constants (e.g., in Theorem 1) could be prohibitively large in some MDPs (that are not necessarily pathological). More precisely, I expect it could be exponentially large in the size of state-space $|\\mathcal S|$.\n- In the first paragraph of Section 1, you discuss approaches for determining the optimal policy (i.e., planning algorithms) for average-reward MDPs. Yet you mostly cite papers dealing with the learning problem. Could you clarify, or correct if relevant? \n\nMinor Comments:\n-\n- In line 50, you use $\\pi_k$ but it is not defined yet. \n- Regarding refs: Please check formatting guidelines. In many places you must use \\citep or \\citet instead of \\cite so that you get (A & B, year) instead of A & B (year); for instance, in the first paragraph of Section 1. But they are correctly used in Section 1.1. This issue renders rather distracting when reading the paper. \n- The work (Lin Xiao, 2022) is cited twice. Is there any difference between them? \n- Line 133 (and elsewhere): Using $\\Delta(\\mathcal A)$ instead of $\\Delta \\mathcal A$ could make things more readable. \n- Inconsistent notations: In Eq. 
(8) you used $d_\\mu(\\pi^*)$ whereas later you used $d_{\\mu,\\gamma}^{\\pi^*}$ to denote essentially the same thing. \n- Unless I am missing something, the textbook (Boyd and Vandenberghe, 2004) does not include definition of $L$-smoothness, etc. \n- Table 1: Make precise the norms used for $C_p$ and $C_m$.\n\nTypos:\n-\n- Line 82: is , Bai et al. ==> remove “,”\n- Line 198: … relationBertsekas …. ==> … relation (Bertsekas, …)\n- Line 251: Further is the function is ==> Further if … \n- Line 269: euclidean norm ==> Euclidean norm ---- to be consistent with an earlier use of this term. \n- Line 346: in the Lemma below ==> … lemma …\n- Line 384 and elsewhere in Section 3.2: To be consistent with notations used elsewhere, use $|\\mathcal S|$ instead of $S$ since the latter is not defined. \n- Line 398: By $L$, did you mean $L_2^{\\Pi}$?\n- Line 388: a verb might be missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- When presenting the convergence rates of the related works, why was the dependence of $\\epsilon$ omitted?\n- Could the remark of Theorem 1 be clarified. Why is the bound $$\\frac{\\sigma}{k^p}$$ less meaningful for the inital $k$? Isn't $k$ the number of iterations? Also note that for softmax policies, there exists faster convergence rates shown in [1] compared to [2].\n- Is it possible to show that the $O(\\frac{1}{\\epsilon})$ bound is tight? 
\n\n\n[1] Liu, J., Li, W., & Wei, K. (2024). Elementary analysis of policy gradient methods.\n[2] Mei, J., Xiao, C., Szepesvari, C., & Schuurmans, D. (2020, November). On the global convergence rates of softmax policy gradient methods." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- First proof of global convergence of Projected Policy Gradient for average reward MDPs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors show that Projected Policy Gradient ascent for average reward MDPs can achieve an $O(\frac{1}{\epsilon})$ rate to the optimal policy. To attain this rate, the authors prove the smoothness property of the objective. Additional experiments are conducted to validate the proposed rates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Missing comparison to [1]. This work improves the convergence rate of [2] and shows that the rate of Policy Mirror Descent is linear. Projected Policy Gradient is an instance of Policy Mirror Descent when the squared Euclidean distance is used as the mirror map.\n- The clarity of the writing could be improved:\n - The precise definition of $d^\pi(s)$ should be given\n - It's not clear what the step-size used in Theorem 1 is\n- A reference / proof for Eq. 8 should be given. \n- Formatting errors: 155: Bellman equation equation 3, 181: discount factorBertsekas (2007), 202: \textit{equation 8}\n\n\n[1] Johnson, E., Pike-Burke, C., & Rebeschini, P. (2023). Optimal convergence rate for exact policy mirror descent in discounted markov decision processes.\n[2] Xiao, L. (2022). On the convergence rates of policy gradient methods."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Since a linear convergence rate is already available in the discounted setup (Xiao 2022b), is it possible to achieve the same in the average reward setup? What are the fundamental challenges to obtain it?\n\n2. Please mention in Table 1 that the constants $C_e$ and $\\lambda$ are taken from Assumption 1. It will help the reader.\n\n3. Is the smoothness result only valid for ergodic MDPs or is it possible to extend it to a larger class?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. New state-of-the-art convergence rate of $\\mathcal{O}(1/T)$ for projected gradient descent algorithm for average reward MDPs.\n2. New smoothness result of the value function for the same setting.\n3. Despite some weaknesses stated below, the paper is overall nicely written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents the convergence rate analysis of the projected policy gradient algorithm for tabular average reward Markov decision processes (MDPs). Assuming access to the exact gradient, the authors proved a convergence rate of $\\mathcal{O}(1/T)$ where $T$ is the number of iterations. 
To prove the result, they established the smoothness property of the value function for ergodic MDPs, which is of separate interest." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors should rewrite the related works and put their work in context. First, they should separate the related works into two groups: ones that use exact gradients (and hence, are more of a planning problem), and others that use gradient estimates (and therefore, are more of a learning problem). Authors should note that some papers explicitly fall into the second group while many others discuss problems of both kinds. The work of the authors falls into the first group. This should be highlighted both in the abstract as well as in the introduction.\n\n2. While mentioning the convergence rate established by earlier works, the authors only focused on the $1-\\gamma$ factors while completely ignoring the $\\epsilon$ related factor. For example, equation (1) does not show any dependence on $\\epsilon$. Is there any specific reason for that? I think it makes the comparison quite confusing.\n\n3. Although one of the results of (Xiao 2022b) proves a convergence rate of $\\mathcal{O}\\left((1-\\gamma)^{-5}\\epsilon^{-1}\\right)$, in the same paper, they also provide a better result. Specifically, using policy mirror descent, which can be thought of as a generalization of the policy gradient, they establish a linear convergence rate of $\\mathcal{O}\\left((1-\\gamma)^{-1}\\log\\left((1-\\gamma)^{-1}\\epsilon^{-1}\\right)\\right)$. I am surprised that the authors failed to mention the linear convergence rate.\n\n4. Some of the state-of-the-art results mentioned are outdated. For example, (Bai et. al. 2023) is no longer the only work that establishes a regret bound for average reward MDP. A recent paper [1] supersedes their result.\n\n5. 
To my understanding, the concept of regret makes sense only for a learning problem, not for a planning problem. In my opinion, the author should solely stick to the convergence rate result.\n\n[1] Ganesh, S. and Aggarwal, V., 2024. An accelerated multi-level Monte Carlo approach for average reward reinforcement learning with general policy parametrization. arXiv preprint arXiv:2407.18878." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024global,\ntitle={Global Convergence of Policy Gradient in Average Reward {MDP}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2PRpcmJecX},\nnote={under review}\n}" }, "abstract": { "value": "We present the first comprehensive finite-time global convergence analysis of policy gradient for infinite horizon average reward Markov decision processes (MDPs). Specifically, we focus on ergodic tabular MDPs with finite state and action spaces. Our analysis shows that the policy gradient iterates converge to the optimal policy at a sublinear rate of $O({\\frac{1}{T}}),$ which translates to $O({\\log(T)})$ regret, where $T$ represents the number of iterations. Performance bounds for discounted reward MDPs cannot be easily extended to average reward MDPs as the bounds grow proportional to the fifth power of the effective horizon. Recent work on such extensions make a smoothness assumption that has not been verified. Thus, our primary contribution is in providing the first complete proof that the policy gradient algorithm converges globally for average-reward MDPs, without such an assumption. We also obtain the corresponding finite-time performance guarantees. In contrast to the existing discounted reward performance bounds, our performance bounds have an explicit dependence on constants that capture the complexity of the underlying MDP. 
Motivated by this observation, we reexamine and improve the existing performance bounds for discounted reward MDPs. We also present simulations which empirically validate the result." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Policy Gradient", "Reinforcement Learning", "Average Reward MDPs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2d4f1b29dd9585dd4c2ee8bb8b35925166d923fe.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/08c2b514cc0784fd2633e5d0b96a2a9e0ce612fa.pdf" }, "title": { "value": "Global Convergence of Policy Gradient in Average Reward MDPs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2PzozgigiA
CollabEdit: Towards Non-destructive Collaborative Knowledge Editing
main
Active
Collaborative Learning;Knowledge Editing
alignment, fairness, safety, privacy, and societal considerations
5;6;6
3;4;3
2;3;3
3;3;3
3;3;3
5.666667
3.333333
2.666667
3
3
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "COLLABEDIT allows for non-destructive knowledge editing, which prevents significant performance drops that are common in traditional methods\n\nThe framework is versatile and can integrate existing knowledge editing methods, providing a comprehensive solution to collaborative KE challenges\n\nEmpirical results show that COLLABEDIT outperforms existing destructive baselines, demonstrating superior editing performance even with a large number of edits" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the collaborative knowledge editing (KE) for large language models (LLMs). It identifies three primary challenges in this domain: knowledge overlap, knowledge conflict, and knowledge forgetting. The authors propose a framework called COLLABEDIT, which utilizes a non-destructive model merging mechanism to aggregate knowledge edits from multiple parties while maintaining performance and privacy.\n\nThe framework aims to mimic the optimal global editing behavior without the significant performance drops associated with existing destructive methods. 
Through extensive experiments on canonical datasets, the authors demonstrate that COLLABEDIT outperforms traditional approaches, addressing the identified challenges effectively." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The non-destructive merging mechanism may introduce additional complexity in implementation compared to simpler, traditional methods.\n\nIts scalability in large collaborative environments or with numerous clients may need further exploration.\n\nMore experiments on different LLMs could benefit the demonstration of the effectiveness of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to my summary of my weaknesses." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The paper tackles an important problem of generalizing knowledge editing to collaborative learning settings where privacy is a critical concern.\n+ The authors provide a compelling theoretical analysis of the limitations of naive weight sharing and introduce the concept of sharing $KK^{T}$, which is proved to be difficult to attack in the traditional privacy-aware setting.\n+ The experiments conducted seem to effectively demonstrate the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the generalization of knowledge editing within the collaborative learning setting, with a focus on ensuring privacy while modifying the knowledge of large language models (LLMs). The authors propose a novel approach by sharing $KK^{T}$, an intermediate weight associated with the keys of edited knowledge, instead of naively sharing and averaging weights, which is theoretically proven to be resistant to attacks. The experiments conducted demonstrate the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is not surprising to see the destructive performance of direct fed-average for knowledge editing, as edits individual client are naturally diluted when models are averaged, although I appreciate the formal mathematical treatment of the issue.\n- While knowledge conflict is identified as a key challenge, the paper addresses it in a rather ad hoc manner compared to other challenges, which are supported by theoretical analysis.\n- My biggest concern is on the privacy part of the model. 
Although the authors propose to share $K^{T}K$ and provide a theoretical proof of its resistance to attacks, the paper does not fully address the new privacy challenges faced by LLMs. If the edit is successful, the new knowledge can be easily prompted out from the LLMs by simply asking questions. This is especially convenient given that most knowledge editing tasks involve only the editing of factual knowledge. Therefore, the traditional privacy methods may not suffice in the LLM case, and further exploration in preserving privacy for knowledge editing is needed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why was the setup of editing 10 models with 500 requests (Table 1 and 2) per model not applied consistently in Table 3?\n* Could you clarify why the MCF dataset was not included in experiments in Table 3? This dataset would likely provide a valuable benchmark for evaluating the framework’s robustness in handling knowledge conflicts.\n* In the knowledge overlap experiments, the focus was on the R value’s $\ell_2$-norm rather than directly showing the editing method’s performance. How does COLLABEDIT perform when subjected to repeated editing requests for the same knowledge items?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper identifies and addresses a novel problem of knowledge editing in federated learning for LLMs, a new setting within model editing research.\n* The authors propose a straightforward yet effective method—COLLABEDIT—that enables privacy-preserving collaborative editing, which is an essential consideration in multi-party learning scenarios.\n* Experiments on GPT-J and GPT2-XL show that COLLABEDIT can substantially improve performance over methods like MEMIT in federated settings, highlighting its practical effectiveness in this new problem space." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces COLLABEDIT, a framework designed for collaborative knowledge editing (KE) in LLMs within federated learning scenarios. COLLABEDIT allows multiple parties to collaboratively edit the knowledge in LLMs while preserving data privacy, a novel scenario within knowledge editing and federated learning. It addresses three main challenges—knowledge overlap, knowledge conflict, and knowledge forgetting—by implementing a non-destructive model merging technique that aims to achieve performance close to direct global model editing without degrading results. Extensive experiments on GPT-J and GPT2-XL demonstrate the effectiveness of COLLABEDIT, showing improvements over existing approaches in federated scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The need for collaborative knowledge editing within federated LLM may be limited, as large-scale federated LLM scenarios are currently uncommon. 
This reduces the perceived applicability and impact of the problem being solved.\n* The experiments are conducted on older models like GPT-J and GPT2-XL. More recent models such as LLaMA-2, LLaMA-3, or Gemma would provide stronger validation of the proposed method’s efficacy.\n* The paper’s structure could benefit from refinement, as some figures and tables (e.g., Figure 3 and Table 4) are misaligned, affecting readability and presentation quality." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024collabedit,\ntitle={CollabEdit: Towards Non-destructive Collaborative Knowledge Editing},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2PzozgigiA},\nnote={under review}\n}" }, "abstract": { "value": "Collaborative learning of large language models (LLMs) has emerged as a\nnew paradigm for utilizing private data from different parties to guarantee\nefficiency and privacy. Meanwhile, Knowledge Editing (KE) for LLMs has also\ngarnered increased attention due to its ability to manipulate the behaviors of\nLLMs explicitly, yet leaves the collaborative KE case—in which knowledge\nedits of multiple parties are aggregated in a privacy-preserving and continual\nmanner—unexamined. To this end, this manuscript dives into the first investigation\n of collaborative KE, in which we start by carefully identifying the unique\nthree challenges therein, including knowledge overlap, knowledge conflict, and\nknowledge forgetting. 
We then propose a non-destructive collaborative KE\nframework, COLLABEDIT, which employs a novel model merging mechanism\nto mimic the global KE behavior while preventing the severe performance drop.\nExtensive experiments on two canonical datasets demonstrate the superiority of\nCOLLABEDIT compared to other destructive baselines, and results shed light on\naddressing three collaborative KE challenges and future applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Collaborative Learning", "Knowledge Editing" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/09c547216da05b78cd274f8b53e7b4408ea34531.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "CollabEdit: Towards Non-destructive Collaborative Knowledge Editing" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Q8gTck8Uq
Gradient correlation is needed to accelerate SGD with momentum
main
Active
optimization;convex;nesterov momentum;sgd;neural network
optimization
5;5;5;6
4;4;3;3
3;3;3;3
2;3;3;3
2;2;3;4
5.25
3.5
3
2.75
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* While RACOGA has been demonstrated to facilitate the acceleration of SNAG over SGD in convex and strongly convex functions, how does RACOGA perform in non-convex optimization scenarios, such as those commonly found in deep neural network training? Can RACOGA be effectively applied to these more complex models, or are there additional considerations needed to achieve similar acceleration benefits?\n* The paper highlights that large RACOGA values enable the acceleration of SGD with momentum. However, what practical methods or criteria can be used to identify or achieve large RACOGA values in real-world applications?\n* How robust is SNAG's performance to variations in RACOGA across different types of datasets and optimization problems?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Originality:\n- Proposes the hypothesis that Stochastic Nesterov Accelerated Gradient (SNAG) can accelerate over Stochastic Gradient Descent (SGD) and proves that this hypothesis is valid when SNAG is under a Strong Growth Condition. \n- Provides new asymptotic almost sure convergence results for SNAG. 
\n- Gives the new characterization of the SGC constant by using the correlation between gradients.\n- Introduces a new condition named Relaxed Averaged COrrelated Gradient Assumption (RACOGA).\n2. Quality and clarity:\n- Clearly shows that when $f$ is convex and $\\mu$-strongly convex, the possibility of acceleration of SNAG over SGD is highly dependent on the SGC constant $\\rho_K$, where $\\rho_K < \\sqrt{\\frac{L^2_{(K)}}{\\mu L}}$.\n- Provides clear and explicit steps for proofs.\n- The numerical results are readable and show a clear difference in convergence speed among different algorithms.\n3. Significance:\n- People can get faster and better results by applying the condition proposed in this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the possibility of obtaining accelerated convergence of the Stochastic Nesterov Accelerated Gradient (SNAG) method. The authors provide a clear proof that the average correlation between gradients allows to verify the strong growth condition, which is essential for achieving accelerated convergence in convex optimization settings. Furthermore, the paper includes comprehensive numerical experiments in both linear regression and deep neural network optimization, empirically validating the theoretical findings. The experimental results are clear and concise. These contributions advance the understanding of momentum-based stochastic optimization techniques and demonstrate the practical effectiveness of SNAG in enhancing convergence rates." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The text and formulas are a bit dense; the author can add a table to compare the convergence speed of SGD and SNAG under different conditions.\n* The graphs look good.
However, it would be better if the author gave more detail explaining the graph, for example, what the \"small values\" of RACOGA mean on the graph.\n* The colors in the right graph for Figure 1(a) are similar; the author could use more contrasting colors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Was the Algorithm 2, in its given form, first introduced by Nesterov (2012) \"Efficiency of coordinate descent methods on huge-scale optimization problems.\"? If yes, the authors should cite that paper. I appreciate Proposition 4 in the appendix showing that the more common two parameter NAG algorithm (Algorithm 8, with $\\tau=0$) can be obtained as a special case of this algorithm with a reparametrization.\n\n2. Proposition 2 suggests that RACOGA holding with $c>-0.5$ is sufficient to verify the SGC. But in Figure 1(a), SNAG does not accelerate over SGD despite the RACOGA values being greater than -0.12. Is there an explanation for this apparent discrepancy?\n\n3. Just to confirm, in the experiments, were GD and NAG used with the full batch gradient at each step (e.g. were all of 50k images used for the CIFAR-10 experiment at each training step)? If yes, this might be worth specifying explicitly since most of the time in machine learning experiments, NAG refers to Algorithm 8, even when it is used with mini-batch gradients."
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Previous works have shown that stochastic versions of NAG converge at the same accelerated rates when the gradient estimates satisfy the strong growth condition (SGC). While they provide heuristics that suggest that SGC is a reasonable assumption in the context of overparametrized deep learning, it is not always clear when the condition is actually satisfied. This work addresses that gap in the literature. The authors show that for functions of the form $f=\\sum_{i=1}^N f_i$, positive gradient correlation (i.e. $\\langle f_i, f_j\\rangle \\geq 0$ for all $i,j$) is sufficient to guarantee the strong growth condition for the gradients. This result also gives a bound for the strong growth parameter ($\\rho$) in terms of the batch size, which is important for choosing optimal parameters for the SNAG algorithms. The main contribution of the paper is a gradient correlation condition (RACOGA) which implies SGC for functions with a finite sum structure. This further implies that SNAG converges at an accelerated rate in those settings. I think this is a useful contribution and a step in the direction of better understanding why momentum-based stochastic gradient algorithms perform well in practice. The authors provide numerical experiments to back their claims, which I found interesting and insightful as well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies stochastic versions of Nesterov's accelerated gradient descent (NAG). These algorithms have been previously shown to converge at the same accelerated rates as NAG when the stochastic gradient estimates satisfy the so-called strong growth condition (SGC). 
Specifically for functions satisfying a finite sum structure, this paper finds a sufficient condition (RACOGA) in terms of gradient correlation that implies the strong growth condition, consequently implying that SNAG converges at an accelerated rate in those settings. Numerical experiments are provided to verify the implications of the RACOGA condition on accelerated convergence of stochastic algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Line 70: \"However, even the question of the possibility to accelerate with SNAG in the convex setting is not solved yet.\"\nThis is either unclear or inaccurate or both. There are several works which address the convergence of accelerated methods in the stochastic setting, both under SGC and with classical Robbins-Monro bounds, at least for smooth objectives. For a rigorous statement, the authors should specify the geometric assumptions, smoothness assumptions, and assumptions on the gradient oracle. Since the authors are aware of previous works on acceleration in convex optimization under SGC noise, it is unclear what meaning is intended.\n\n2. More concerningly, the next sentence says \"Finally, note that our core results (Propositions 1-2) do not assume convexity, and thus they could be used in nonconvex settings.\" Juxtaposed with the previous sentence, it gives the reader the impression that the authors have addressed the question of acceleration for non-convex functions, which is not true. It is true that the conditions PosCorr and RACOGA studied in Propositions 1 and 2 imply SGC even for non-convex functions. But SGC/PosCorr/RACOGA alone is not sufficient for any of the accelerated convergence results provided here or in previous works, some form of convexity is still required. 
The current phrasing is misleading since, again, it conflates conditions on the noise in the gradient estimates and on the geometry of the objective function, which are in general independent. If there is a relation in the setting the authors consider, they need to explain and emphasize this. I do not see the implication.\n\n3. The authors claim one of their main contributions is \"new almost sure convergence results (Theorem 4)\". However, almost sure convergence is already covered by corollary 5 in Gupta et al. \"Achieving acceleration despite very noisy gradients\" arXiv:2302.05515. That paper studies a stochastic version of NAG under a condition similar to SGC. The authors should highlight the differences in their results.\n\n4. The theorem 4 statement suggests that the authors recover a rate almost surely, but in the current presentation, it is unclear what precisely is meant. Even for $O(n^{-2})$: Is there a random variable C such that $f(x_n) - f(x^*) \\leq C/n^2$ simultaneously for all $n$ (and almost surely in probability), or does the random constant $C$ depend on $n$? And, what is meant by $o(n^{-2})$? For a machine learning venue, they should state a non-asymptotic quantitative bound. Almost sure convergence is a notion of convergence which is *not* induced by a metric on a space of random variables. As such, there is no immediate way of making sense of the notion that $f(x_n)$ and $f(x^*)$ are $o(n^{-2})$-close in a specific sense. More explanation is needed. The same concern applies to Theorem 2.\n\n5. The title of the paper is \"Gradient correlation is **needed** to accelerate SGD with momentum\", which makes it sound like gradient correlation is a necessary condition (i.e. if it is not satisfied then SGD with momentum does not converge at an accelerated rate). But I did not see a result proving that in the paper. The results actually claim that it is a sufficient condition. The title does not accurately reflect the main results.\n\n6. 
$L_{(K)}$ acts as an \"effective\" Lipschitz-continuity parameter of the gradient, depending on the batch size $K$. The results for SGD (Theorems 1 and 2) are provided in terms of $L_{(K)}$ without assuming SGC but the results for SNAG (Theorems 3 and 4) are provided in terms of the SGC parameter $\\rho_K$. Then these two results, derived under different conditions, are compared to conclude that SNAG does not accelerate over SGD unless $\\rho_k<\\frac{L_{(K)}}{\\sqrt{L}}\\cdot C$ (where $C$ is a constant that differs in the convex and strongly convex cases). This seems like an unfair and misleading comparison to me. Both, $L_{(K)}$ and $\\rho_K$, measure the stochasticity of the gradient estimates but in different ways. The authors demonstrate in Appendix E.2 an example where $L_{(K)}$ is a tighter estimate of the effective Lipschitz constant than $L\\rho_k$. That does show that if the smoothness parameter $L_i$ of each summand $f_i$ is known, then using $L_{(K)}$ would allow us to choose a larger step-size for SGD than the one provided by $1/L\\rho_k$. However, a fair comparison between SGD and SNAG can only be made if the same assumptions and information are used to calculate the step size, but there are no convergence results available for SNAG that directly make use of $L_{(K)}$. This feels like comparing apples and oranges. If the authors want to argue that you can use a larger step size for SGD than for SNAG, they should justify why Nesterov would blow up with that step size." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. It would be nice to have a table that summarizes results from theorem 1-4 (perhaps including results from the literature) so that readers don't have to go back and forth to compare them. \n\n2. Authors have remark 5 to explain the results from theorem 4 which does help me to understand it. I wonder if there is any intuition about why $\\rho_k$ plays a different role in convex vs strongly convex cases. Also, for the strongly convex case, it seems we need less noisy data for SNAG to beat SGD, because we want $\\rho_k$ to be small. Am I understanding this correctly? For continuous strongly convex function, there is a unique minimizer, meaning it won't stuck in some local minimizers. How does this fit into this theory?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-organized and easy to read. The material in the supplement serves as a good complement to the main paper. Experimental results are presented in a clear way with nice plots and great details.\n\n2. The main result is indeed very interesting to the community and gives some insight into a long-standing question. The theoretical contribution mainly comes from Theorem 4 which provides an almost surely convergence for SNAG showing a speed-up compared to SGD. \n\n3. By proposing a new characterization of SGC, the authors improved the assumption so that it only depends on the size of the dataset and batch size. Using this, the authors proposed a new condition--RACOGA. \n\n4. 
The authors also discuss the relation between batch size and gradient correlations which brings interesting insights into when to use stochastic and non-stochastic versions of these algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the convergence of SGD with momentum--Stochastic Nesterov Accelerated Gradient (SNAG) and obtained an improved rate compared to vanilla SGD. More precisely, they consider the strong growth condition that, intuitively, quantifies the amount of noise. Using this definition, they are able to achieve e.g. $o(\\frac{1}{n^2})$ convergence rate (SGD has $o(\\frac{1}{n})$) for convex functions. For certain objective functions, the authors propose a way to compute the strong growth condition and a new condition RACOGA. In addition, the authors have numerical experiments to verify their results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Proof for theorem 4 heavily relies on an existing result (Sebbouh et al. 2021, theorem 9), which one could argue it weakens the theoretical contributions of this work.\n\n2. I appreciate that the authors made an effort to compare RACOGA with gradient diversity and gradient confusion and agree with the authors that they are not identical, but they do look quite similar." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How is Theorem 2 different from the results in Vaswani et al, 2019? It would be nice if the paper could include a detailed comparison of the two results.\n\n2. In Example 3, the paper demonstrates the benefits of PosCorr condition over the traditional way of verifying the SGC condition by showing that $\\rho_K\\leq \\frac{N}{K}$. Why is $\\frac{N}{K} \\leq \\frac{L_{(K)}}{\\mu}$, so that it could be considered an improvement?\n\n3. What is $\\lambda$ in Figure 1?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper provides an extension of the original convergence theorem in Vaswani et al, 2019 in the almost sure convergence form, leading to a more comprehensive understanding of the SNAG algorithm.\n\n2. The paper's result covers both convex and strongly convex case. In particular, both the PosCorr and the RACOGA condition can lead to the SGC without assuming the strong convexity.\n\n3. Centered around the SGC, the paper develops conditions that implies the SGC, which allows the paper to investigate the relationship between the batch size and the SGC coefficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the acceleration of Stochastic Nesterov's Accelerated Gradient (SNAG) method over the SGD algorithm. In particular, the study is based on Vaswani et al, 2019, in which the acceleration is first proved based on the Strong Growth Condition (SGC) of the stocastic gradient. 
This paper extends the previous paper by showing an accelerated almost sure convergence result for SNAG, and develops conditions that lead to a better SGC coefficient. Based on this condition, they show how the SGC coefficient changes as the batch size increases. The paper also verifies the condition using experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the almost sure convergence result provided in Theorem 4 deepens our understanding of the SNAG method, I believe that the major focus of this paper is still on how the gradient correlation can lead to a better SGC coefficient, which gives acceleration for SNAG. From this perspective, the result in Theorem 4 seems a bit disjoint from the other sections of the paper.\n\n2. Although the RACOGA condition holds in general with a coefficient of $c \\geq -\\frac{1}{2}$, it does not seem to be easy to find a tight $c$ for the objectives, as evaluating this lower bound involves analyzing the pairwise inner product between gradients for all choices of the parameters in the parameter space. Furthermore, when $c$ approaches $-\\frac{1}{2}$, the SGC coefficient $\\rho = \\frac{N}{1 + 2c}$ approaches infinity, leading to a trivial condition.\n\n3. The experimental verification of the paper seems quite weird. It is noticed that, in the linear regression case, the gradient correlation involves both the inner product term $\\mathbf{a}_i^\\top \\mathbf{a}_j$ and the sign of the residual terms $\\mathbf{x}^\\top\\mathbf{a}_i - b_i$. In particular, different signs of the residual terms could lead to completely different lower bounds on the gradient correlation. However, it seems that in the experimental design the paper considered only the correlation between the data.
Moreover, it may contradict the claim of the paper that RACOGA helps acceleration since in Figure 1(a) the green curve, with a smaller RACOGA coefficient, led to faster convergence than the blue curve." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We prove that gradient correlation enables nesterov momentum to accelerate SGD for convex functions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024gradient,\ntitle={Gradient correlation is needed to accelerate {SGD} with momentum},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Q8gTck8Uq},\nnote={under review}\n}" }, "abstract": { "value": "Empirically, it has been observed that adding momentum to Stochastic Gradient Descent (SGD) accelerates the convergence of the algorithm.\nHowever, the literature has been rather pessimistic, even in the case of convex functions, about the possibility of theoretically proving this observation.\nWe investigate the possibility of obtaining accelerated convergence of the Stochastic Nesterov Accelerated Gradient (SNAG), a momentum-based version of SGD, when minimizing a sum of functions in a convex setting. \nWe demonstrate that the average correlation between gradients allows to verify the strong growth condition, which is the key ingredient to obtain acceleration with SNAG.\nNumerical experiments, both in linear regression and deep neural network optimization, confirm in practice our theoretical results." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "optimization", "convex", "nesterov momentum", "sgd", "neural network" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6dfb8fa8ddee83447b903e2cbaf0f0d9e74ee3ea.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c79b575d79fc63c0a0b684e75c8567c1ad5bc74a.zip" }, "title": { "value": "Gradient correlation is needed to accelerate SGD with momentum" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2QXC4NX8oC
PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models
main
Active
Diffusion models;Text-to-Image;Image Editing
generative models
3;5;5;6
4;4;4;4
3;3;3;3
2;2;2;3
3;3;2;4
4.75
4
3
2.25
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weakness." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) This paper focus on an interesting question, which as great significance to downstreaming research and tasks.\n\n(2) The overall design of the model design is generally make sense.\n\n(3) This paper is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an inference-based image editing method that can perform fine-grained object parts editing. Specifically, this paper trains part-specific tokens that specialize in localizing the editing region at each denoising step, then develop feature blending and adaptive thresholding strategies that ensure editing while preserving the unedited areas." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) Some related works are missing. 
Discussing and comparing these related works would improve the paper's quality.\n\n- [1] Pnp inversion: Boosting diffusion-based editing with 3 lines of code\n\n- [2] Inversion-free image editing with natural language\n\n- [3] Dragondiffusion: Enabling drag-style manipulation on diffusion models\n\n(2) Can you provide more experimental results to prove the effectiveness of the proposed method? For example: more comparison results with training-based editing methods such as InstructPix2Pix [4]; more visualizations of the editing regions for various image-editing prompt pairs; and results of combining the proposed method with different pretrained checkpoints/diffusion model backbones to show its generalization ability.\n\n- [4] Instructpix2pix: Learning to follow image editing instructions" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Given the limited number (10–20) of images used for training part tokens, how were these images selected to ensure representativeness, and what impact does this selection have on the model's generalization capabilities?\n\n2. Are there guidelines or best practices provided for hyperparameter tuning?\n\n3. How effectively does the method handle instructions that require simultaneous modifications to two or more parts of an image?"
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper addresses a critical problem in image editing: the inability to accurately edit specific parts of an object while keeping the rest of the image unchanged.\n\n2. The use of token optimization to learn adaptable tokens for subsequent editing tasks is intuitive and intriguing.\n\n3. The experiments are thorough, with comprehensive ablation studies that validate the effectiveness of the proposed approach.\n\n4. The paper is well-written, easy to follow, and logically structured." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a method to enhance pre-trained diffusion models for fine-grained image editing by training part-specific tokens for localizing edits at each denoising step. This approach uses feature blending and adaptive thresholding for seamless edits while preserving unaltered areas. A token optimization process expands the model’s semantic understanding without retraining, using existing or user-provided datasets. Qualitative as well as quantitative experimental comparison have been conducted to demonstrate the effect of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The images used for training part tokens are very limited, with only 10–20 images. In such cases, the representativeness of the images is crucial for generalization. It would strengthen the paper if the authors would conduct experiments to show the impact of varying the types of training images on the model's performance.\n\n2. 
The method involves many hyperparameters that require tuning, including the number of diffusion timesteps for training part tokens and inference, the selection of layers for optimization, and the adjustable tolerance for transitions between edited parts and the original object. This adds complexity to the overall framework and could make it challenging to implement effectively.\n\n3. In practical scenarios, one might want to adjust two parts simultaneously. Therefore, how does the method handle instruction text that requires simultaneous editing of two parts? I suggest the authors include experiments or examples to show the model's performance on multi-part edits.\n\n4. Will the evaluation dataset for PartEdit be made publicly available? Also, will the code be available?\n\n5. Typo: The text inside Figure 1, \"prompt-to-prompt (ICRL2023),\" should be corrected to \"ICLR.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The paper does not explicitly mention how it deals with fine-grained edits for multiple objects within an image, such as distinguishing between two heads in an image for editing purposes. Could you provide some form of mechanism to differentiate between objects? How does your method deal with this situation?\n2. How many part tokens can the method support?\n3. What method do you use for randomly initializing the textual embeddings?\n4. 
How do you generate reliable localization masks? Does this process rely on specific datasets or pre-trained models? Will the distribution of the training data for the mask overlap with the distribution of the images used for testing?\n5. See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents a flexible method for text-based image editing focused on object parts, which is a novel contribution to the field of image processing and editing.\n2. The paper is well-written and easy to follow.\n3. The authors have conducted extensive experiments, which provide a solid basis for the method's practical application." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a text-based image editing method for object parts using pre-trained diffusion models. It enhances model understanding of object parts for fine-grained edits through optimized textual tokens and masks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The approach relies on a finite and manually defined set of part tokens, which could restrict the flexibility and applicability of the method in real-world scenarios where users might need to edit object parts that are not covered by the predefined tokens. This limitation could affect the generalizability of the technique to a broader range of editing tasks and objects.\n2. There are many methods nowadays that utilize semantic segmentation to create masks, which are quite similar to this paper's approach.
You should supplement your study with some relevant ablation experiments, such as replacing the attention mask with a semantic segmentation mask and comparing it with similar methods [1][2][3].\n\n[1] SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control.\n[2] Accelerating Text-to-Image Editing via Cache-Enabled Sparse Diffusion Inference.\n[3] Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses. \n\nI think it is not of ICLR quality (the scientific contribution is too small) and could be published in a more applied venue (e.g. WACV) or a dedicated workshop." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- the objective of accurate part editing with text prompts is relevant for many users. \n- the method is simple and results show that the method successfully addresses the problem. \n- overall writing and presentation is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for part editing in images.
The paper shows that the current state-of-the-art fails when asked to change only particular parts of images (e.g. the 'hood' of a car). The paper proposes to perform part token learning, and then uses the attention maps of the learned part tokens for accurate part-based editing of images. Results show improvements over several baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I find the main scientific/technological contribution of the paper insufficient for an ICLR conference, nor does the paper provide new insights into the functioning of DMs for editing. I agree part-based editing is relevant. Given existing methods, part-based editing can be addressed by solving part detection or just using user-provided masks (for example based on Segment Anything). The proposed method of learning part tokens makes sense. The computed attention maps reduce the problem to a text-based inpainting problem. \n- the method is only applied to a very limited set of parts (7). Are these stored in 7 different models or jointly held within a single network? Could this scale to many more parts? Some analysis of the quality as a function of the number of parts (if contained in a single model) would be interesting.\n- the method needs to learn new prompts for every new part users might want to change. The method depends on existing part datasets for these parts, else they need to be created. Do the authors see any other solutions, using other existing models to prevent annotation? \n\nminor\n- are all weights tuned, or do you use LoRA for layer optimization of the L layers?\n- Figure 3 could be improved; it is hard to read in print" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present the first text-based approach for editing parts of various objects in images using pre-trained diffusion models."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024partedit,\ntitle={PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2QXC4NX8oC},\nnote={under review}\n}" }, "abstract": { "value": "We present the first text-based image editing approach for object parts based on pre-trained diffusion models.\nDiffusion-based image editing approaches capitalized on the deep understanding of diffusion models of image semantics to perform a variety of edits.\nHowever, existing diffusion models lack sufficient understanding of many object parts, hindering fine-grained edits requested by users.\nTo address this, we propose to expand the knowledge of pre-trained diffusion models to allow them to understand various object parts, enabling them to perform fine-grained edits.\nWe achieve this by learning special textual tokens that correspond to different object parts through an efficient token optimization process.\nThese tokens are optimized to produce reliable localization masks at each inference step to localize the editing region.\nLeveraging these masks, we design feature-blending and adaptive thresholding strategies to execute the edits seamlessly.\nTo evaluate our approach, we establish a benchmark and an evaluation protocol for part editing.\nExperiments show that our approach outperforms existing editing methods on all metrics and is preferred by users 77-90% of the time in conducted user studies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion models", "Text-to-Image", "Image Editing" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d5137435435f210eecbe3379ee0da58ac349e711.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/df53b9cb2c4c087d6dbd6144200af699bf9d9aaa.zip" }, "title": { "value": "PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2QdsjiNXgj
Direct Imitation Learning: RLHF Secretly Performs Imitation Learning
main
Active
Alignment
alignment, fairness, safety, privacy, and societal considerations
3;5;6;8
4;4;4;4
2;3;3;3
2;2;3;4
2;2;3;4
5.5
4
2.75
2.75
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "A discussion and answers regarding the weaknesses listed above would be appreciated. And if the authors can provide some more rationale and clean-up the derivations the score could be improved." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The connection between RLHF and imitation learning approaches is highly relevant to the community and the first part of the paper (background and initial derivation up to Eq.12-14) is well presented and leaves the reader with a condensed and improved understanding of how different existing algorithms relate (although perhaps more references to the literature could help, see weaknesses below).\n- Any improvement over DPO (which is perhaps the predominant algorithm at least for offline RLHF from fixed datasets) is relevant to the community.\n- The benchmarks used are open and relevant and at reasonable scale (i.e. 7B models)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper makes a connection between various approaches for RLHF of large language models and imitation learning. 
In particular, the authors re-derive a well-known connection between probabilistic inference and reinforcement learning, which associates the reward function with the energy of a Boltzmann distribution (see e.g. [1] for a good review of all the related methods and derivations), for the special case of RLHF.\nFrom this perspective, classical reward-model learning can be derived as matching the energy to the generated responses with the highest reward. Based on this, the authors then derive a surrogate objective (DIL) that is closely related to DPO and other existing RLHF algorithms, but which makes fewer assumptions about the form of the reward model. They show empirical evaluations on language modeling which match or slightly improve over DPO.\n\n[1] Levine, Sergey. \"Reinforcement learning and control as probabilistic inference: Tutorial and review.\" arXiv preprint arXiv:1805.00909 (2018)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My first main gripe with the paper is that the idea that RLHF in its entirety is performing imitation learning seems to stand on shaky foundations. A lot of leaps from one objective to another are required to arrive at this conclusion, and a lot of the nuances in the differences between objectives get lost along the way (nuances that are already well discussed in the literature; see e.g. Sergey Levine's review and the existing literature on offline RL as probabilistic inference). For example, the title says \"RLHF secretly performs imitation learning\"; up to Eq. 12 this thread is followed closely, and I find the connection made between reward model learning and the imitation learning perspective insightful. However, directly after, the authors make a leap to knowledge distillation / minimizing the reverse KL, which then attains the actual RL objective.
This objective then is no longer directly related to learning from a dataset of reference or \"chosen\" examples (as would be the case in imitation learning) but instead can be understood as imitating an optimal policy (and not any policy that generated the dataset) on the state distribution induced by the currently learned policy (see also [3]). It thus really is RL (and not just imitation learning) and has to \"deal\" with all the problems RL comes with, i.e. exploration of the energy landscape of the optimal policy is required, premature convergence could be an issue, etc. The fact that the energy itself is given by a reward model that comes from matching chosen examples on a pre-collected dataset has no bearing on this. This is easy to see, as depending on the temperature (which also pops out without explanation) chosen in Eq. 13, the policy may collapse to matching a single mode of the energy model but may also result in much higher energy / better reward than the chosen policy. The authors do discuss some of these nuances below in a short section on why SFT (which uses a forward KL) might underperform the reverse KL approach. But all this does is leave the reader with the impression that the authors painted too broad a picture to derive a connection that then, in practice, is not relevant. This could be rectified by perhaps framing the paper as \"RLHF can be seen as imitating an optimal policy based on human preferences\" and toning down some of the quite strong language, e.g. \"learning without an RL loop\" etc.\n- The paper seems to be derived backwards, as some of the connections made feel slightly contrived upon reading. E.g. the jump from the original objective to knowledge distillation mentioned above. The steps taken to arrive at a DPO-like objective from density ratio estimation etc.
The paper requires a lot of steps to arrive at a simple algorithm that the authors probably had in mind from the get-go and started from.\n- The knowledge distillation connection seems tenuous (and already known); it seems more straightforward to think of the entire process as imitating a better policy as in MARWIL [2], or as chaining a specific variant of an improvement operator and distillation, as already derived in detail for many different variants in [3].\n- A lot of the derived formulas and connections are already known in the literature, but this is often not explicitly stated, e.g. \"\nIn this section, we connect RLHF to the imitation learning framework. We show that RLHF is a special\ncase of imitation learning problem by defining the following specialized energy-based mode\" in front of Eq. 9, which very clearly is already derived in the DPO paper and the literature on RL as probabilistic inference. It is fine to re-state such derivations, but then please say: we build on a well-known connection between RL and energy-based models/probabilistic inference.\n- The key innovation that the paper hinges on seems to be the approximation of the log ratio between the chosen and current policy, but the derivation seems very ad hoc and on shaky foundations. To be explicit: in order to arrive at their Eq. 21 (and thus Eq. 24, which is their DIL objective) they make the assumption that the reference policy is the same as the policy that generates the rejected samples only and disregard any terms on the positive examples; i.e. \"Here, we use the set of rejected responses y_l ∼ π_ref(y | x) to approximate the expectations under π_ref(y | x)\". This is simply a wrong assumption. I do not know why the authors have chosen to make the assumption, but it feels like a contrived way to come to Equation 24 and form a connection to a DPO-like objective. \n- The results are on relevant benchmarks, but the improvement over DPO seems minor in most cases.
In this scenario, it would be nice to analyze qualitative differences, e.g. examples on which DIL seems to perform more strongly compared to DPO. Or an analysis of how closeness (in KL) w.r.t. the reference policy evolves during the course of optimization for different algorithms and how this affects performance. Or a plot that has the DIL objective on the x-axis and win rate (over different models, e.g. reference policy and DPO) on the y-axis.\n\n[2] Wang, Qing, et al. \"Exponentially weighted imitation learning for batched historical data.\" Advances in Neural Information Processing Systems 31 (2018).\n[3] Ghosh, Dibya, Marlos C Machado, and Nicolas Le Roux. \"An operator view of policy gradient methods.\" Advances in Neural Information Processing Systems 33 (2020): 3397-3406." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Since DIL doesn’t suppress the likelihood of dispreferred responses as much as SimPO, how does this affect alignment from a safety perspective? Is the model more prone to generating harmful responses?"
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is well written and presents intriguing connections between imitation learning and human preference alignment. \n - They derive a new alignment framework based on imitation learning and show empirical improvements on existing baseline. \n - DIL shows significantly better training dynamics compared to SimPO by ensuring that the likelihood of generating chosen responses is maintained." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- This paper reinterprets preference alignments methods like RLHF and DPO as special cases of a more general imitation learning objective. \n - They mathematically show how the RLHF and DPO objective functions fit within a general imitation learning framework. \n - They develop a new alignment method DIL based on imitation learning with the objective as minimizing the reverse KL loss between the optimal policy and current policy and derive a preference data based learning objective which suppresses the likelihood of generating dispreferred responses while increasing the likelihood of generating preferred responses. \n - They empirically show that DIL results in a better policy compared to other offline alignment methods across reasoning and alignment benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The amount of data needed for satisfactory alignment with DIL compared to other methods is not clear. The authors claim that DIL is more efficient, so it would be nice to see some metrics that measure this. \n - All the models in the experiments are smaller (<10B parameters) so it’s not clear how effective DIL would be for larger models." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Section 5 under the models paragraph, the authors state “For fine-tuning on UltraFeedback Binarized dataset, we use Zephyr-7b-SFT (Tunstall et al., 2023) and Llama3-8b-SFT used in (Meng et al., 2024) as our base models.”, but then in Table 2 the top results are labeled as Mistral-7B-Base. Should that be Zephyr-7B-SFT instead?\n2. In Section 5, the authors mention KTO as part of the baselines, but it doesn’t seem the result tables include it? Also, SLiC is included in the result tables, but is not discussed in the baselines paragraph?\n3. Could the authors include the base model (SFT) performances in Table 2?\n4. In Table 3, what is the difference between Chosen and Average?\n5. In Table 3, it might be interesting to compare win rates of DIL directly with DPO or other baselines. Is there a reason the authors didn't include this?\n6. At the end of section 6.1, the authors state that “We hypothesize that these improvements can be attributed to avoiding the BT assumption and preventing the decrease in the likelihood of chosen responses.” Could the authors elaborate on why avoiding the BT assumption could lead to these improvements? Do they have examples in mind where BT might not be the right model?\n7. I’m a bit confused as to how $\\pi_{\\mathrm{chosen}}$ is defined. 
Is it essentially defined to be the policy that, given a preference dataset of $ (x, y_w, y_l) $ triplets, was responsible for generating all the $y_w$ pairs?\n8. In the beginning of section 4.3, the authors state that “In the tabular setting, we can directly compute $\\pi_{\\mathrm{ref}}(y | x)$ and $\\pi_{\\mathrm{chosen}}(y | x)$.” Could the authors please elaborate on this a bit? It’s not clear to me what the tabular setting here means.\n9. Is the Y-axis in figures 1 & 3 the *negative* log likelihood? And for the margins figure on the right, is it a difference of negative log likelihoods? This could use some better labeling. Putting the model name on the y-axis is a bit confusing, and might be better put in the caption.\n10. At the end of section 4.1: “achieving this in practice requires full data coverage and infinite models that are rarely met”. What is meant by “infinite models” here?\n11. In the paragraph right after equation 22, what’s $\\pi_{\\mathrm{data}}$?\n12. In the paragraph right after equation 22, why is there no log before the reward $r$ in $Z(x)$? Shouldn’t there be since there is one in equation 22 as well?\n13. In the paragraph after equation 22, the authors state “This characteristic, determined by the reward definition in Equation (17), is super beneficial as it allows our imitation learning to theoretically generalize to a broader class of loss functions beyond the pairwise BT preference model used in DPO.”. Could the authors please elaborate on this? What does \"this characteristic\" refer to? And how does it allow the imitation learning to generalize to a broader class of loss functions beyond BT?\n14. At the end of section 4.5 the authors state “Specifically, we demonstrate that DPO also falls under the imitation learning objective in Equation (16) and essentially employs the CPC method for density ratio reward estimation.”. While I agree CPC indeed estimates the correct density ratio, it’s unclear to me that this is used in equation 27. 
Specifically, the learned $f^*$ from equation 26 doesn’t seem to show up in equation 27?\n15. Towards the end of 6.1, the authors state “Notably, we observe DPO and SimPO hurt the overall performance in most reasoning-heavy tasks such as GSM8K”. Is this compared to some base model performance? And if so, where is this reported?\n16. This statement in 6.1 could use some clarification: “For instance, on LLama3, the improvements are notable on the Math and AlpacaEval 2 benchmarks, with relative gains exceeding 7.5% and 18.2%, respectively.” Is this for DPO or SimPO?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper has several strengths:\n1. The paper provides new mathematical connections between imitation learning formulations (various forms of forward and reverse KL optimizations) and previously established RLHF methods like PPO and DPO. As far as I'm aware, these connections are novel and have not been highlighted in past work, making them valuable insights for the community to further build on. \n2. To optimize the proposed imitation learning objective, the paper integrates ideas from density ratio estimation [1] and a change-of-variables approach [2] (rewards -> policies) to directly learn the target policy $\\pi_{\\theta}$, avoiding complexities such as adversarial training.\n3. Strong empirical results: the new method DIL seems to generally outperform all baselines, both on the Open LLM Leaderboard and in the summarization and dialogue generation settings.\n\n[1] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation.
Annals of the Institute of Statistical Mathematics, 64:1009–1044, 2012.\n\n[2] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 2024." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new method called Direct Imitation Learning (DIL), which is derived based on an imitation learning perspective on the alignment problem. Specifically, instead of minimizing the forward KL divergence as in SFT, DIL aims to minimize the reverse KL instead. This turns out to require estimating the density ratio $\\frac{\\pi_{\\mathrm{chosen}}}{\\pi_{\\mathrm{ref}}}$, which the authors show can be done through a Bregman divergence objective. Then, through a similar change-of-variables trick as used in DPO, the authors show that this reward objective can be instead minimized directly in terms of the relevant policies. Hence, the final objective directly optimizes $\\pi_{\\theta}$ through the Bregman divergence objective. \n\nThe authors also show that PPO and DPO can be seen as special cases of the proposed imitation learning formulation. Specifically, reward learning in RLHF can be formulated as a forward KL between $\\pi_{\\mathrm{chosen}}$ and $\\pi_{\\phi}$, and the RL step can be seen as a knowledge distillation process (through minimizing a reverse KL) into a final policy $\\pi_{\\theta}$. \n\nFrom the experiments side, the authors use the UltraFeedback Binarized dataset for evaluation on the Open LLM Leaderboard and show DIL is generally the best method across the board. For dialogue generation and summarization they use the Anthropic HH dataset and the Reddit TL;DR dataset and show through win rates (as judged by GPT-4) that DIL generally performs best against the SFT, Chosen, and Average responses. 
Finally, the authors also investigate the likelihood patterns of DIL and SimPO, which generally seem to show that the likelihood of chosen responses stays roughly the same while the likelihood of rejected responses goes down. This is unlike SimPO, for which the likelihood of chosen responses also decreases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has several weaknesses:\n1. While the empirical results seem to consistently outperform prior methods, I’m a bit worried about the statistical significance since the margins seem rather small sometimes (e.g. for Table 2, the improvements are almost always smaller than 1 percentage point). Could the authors include some significance tests or at least standard errors / CIs to provide a better sense of the significance of these improvements?\n2. The exposition of the math/theory in the paper could have been a bit clearer (section 4). It took me some time to understand what actually is the final objective that DIL optimizes, and how it came to be. This is because, for example, at the end of section 4.3 the authors state “With the estimated density ratio reward, the surrogate imitation learning objective in Equation (17) can then be solved with any RL algorithms.”, which initially made it seem like DPO would have to resort to RL optimization anyways. But then reading section 4.4 it turns out that’s not what happens and there is actually a different objective that’s maximized (eq. 24). Maybe one thing that could help here is to add a summary box either at the beginning or end of section 4 that summarizes the key steps to go from the general DIL objective (eq. 16) to the formulation in eq. 24. \n3. Some parts of the paper require further clarification; please see the Questions section for this."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1) You mention that DIL does not depend on Bradley-Terry but you introduce new reward training with different objectives such as LSIF, UKL, and BCE which are essentially replacements for BT, so doesn't the DIL still rely on some preference modeling assumption?\n\nQ2) In 6.3 you discuss learning dynamics DPO, SimPO, and DIL however Figure 3 does not have DPO, is the discussion from some other paper?\n\nQ3) Do you have additional results on MT-Bench or Arena Hard?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper is well-written and easy to follow.\n\n* The first paper to connect RLHF with imitation learning if not mistaken\n\n* Very strong results against popular DPO-variants" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper generalizes preference learning or RLHF frameworks to an imitation learning framework, (DIL). Using this framework they propose multiple offline preference learning methods with different preference modeling such as Bradley-Terry for DPO and LSIF for the best DIL model. 
Moreover, its performance on benchmarks like Alpaca Eval 2 and Open LLM leaderboard is considerably better than other offline preference learning objectives." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Is RLHF a form of imitation learning?\nThe paper frames reward learning as imitation learning and RL as knowledge distillation (KD), and I don't think either of them is correct.\n\nRL: Equation 13 is the reverse KL between the behavior and optimal policy; however, knowledge distillation is the forward KL between teacher (optimal) and student (behavior). KD (forward KL) is distribution fitting or mean seeking, whereas reverse KL is mode seeking, which makes the policy focus on high-reward regions rather than fitting the entire distribution with forward KL as in SFT. Overall, the KD claim by the paper is incorrect. Lastly, equation 13 is a known result from the DPO paper, which is the penultimate step of the optimal solution of equation 14.\n\n\nReward Learning:\n\nIn standard RLHF, the reward model is a separate LLM with an additional MLP to predict the scalar reward. So by training a reward model, one does not imitate the expert or optimal policy. What we are doing is fitting a reward model to a predetermined preference model; however, the caveat is that the optimal policy trained by RL can be parametrized by the trained reward model, which was already proven by DPO. Lastly, DPO parametrizes the reward model in terms of the policy, so when the reward learning objective is trained, we obtain the actual policy.\n\nOn the other hand, this paper defines a Boltzmann distribution $\\pi_\\phi$ (equation 9) in an EBM framework which is the optimal policy induced by the $r_{\\phi}(x,y)$. This distribution is maximized on the chosen preferences generated by the $\\pi_{expert}$ or imitates $\\pi_{expert}$.
The following derivations lead to reward likelihood training objectives, whereas I am unsure whether the $\\pi_{ref}$ approximation is free, because it introduces rejected responses while the IL objective only minimizes on chosen preferences. Nonetheless, this derivation is possible because $\\pi_\\phi$ has a reward equivalence, whereas it does not tell anything for other forms of policy. Overall, I would interpret it as imitating the reward rather than the policy, not vice versa.\n\n## Direct Imitation Learning\nI don't think DIL is novel because it is the backtracking of the derivation of the DPO objective. After all, the 16th equation is the same as the 14th equation of DPO without the partition and assuming $\\pi_{expert} = \\pi^*$. DIL is redefining the reward function of DPO, excluding the density ratio estimation part. All in all, I believe this part (excluding density ratio) is already present in DPO." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024direct,\ntitle={Direct Imitation Learning: {RLHF} Secretly Performs Imitation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2QdsjiNXgj},\nnote={under review}\n}" }, "abstract": { "value": "This work studies the alignment of large language models with preference data. We address this problem from a novel imitation learning (IL) perspective. We establish a close connection between alignment and imitation learning, which shows that existing alignment objectives implicitly align model and preference data distributions. Built upon this connection, we develop a principled method DIL to\ndirectly optimize the imitation learning objective. DIL derives a surrogate objective for imitation learning with direct density ratio estimates, allowing effective use of preference data.
DIL eliminates the need for complex adversarial training required by current IL methods, and optimizes the IL objective through simple density ratio estimation losses, achieving lightweight and efficient fine-tuning for large language\nmodels. This paper provides a unified imitation learning perspective on alignment, encompassing existing algorithms as special cases while naturally introducing new variants. Bridging IL and RLHF, DIL opens up new opportunities to improve alignment by leveraging tools from imitation learning. Extensive experiments demonstrate that DIL consistently and significantly outperforms off-the-shelf methods on\nvarious challenging benchmarks, including Open LLM Leaderboard and AlpacaEval 2.0. Code for DIL is available at https://github.com/Code-DIL/DIL." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Alignment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a6580190a1f0884c103a835e98b66a8d45195a9d.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers.
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Direct Imitation Learning: RLHF Secretly Performs Imitation Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2QkWSUMQh5
Robustness of Truss Decomposition and Implications for GNN-based Edge Classification
main
Active
Graph mining;dense subgraph discovery;truss decomposition;robustness;edge classification
learning on graphs and other geometries & topologies
3;3;5;8
4;4;3;3
2;3;3;3
2;2;2;3
1;2;2;3
4.75
3.5
2.75
2.25
2
-0.855186
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors go into more far-reaching detail about how truss robustness can help with the edge classification task?\n2. Are there any other edge classification models apart from TER+AER for which truss robustness can be applied?\n3. Are there any hyperparameters such as damping factor in edgerank? If so, how are these parameters chosen?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of truss robustness and dependency graph is intuitive and interesting.\n2. Theoretical findings in section 4 make the process of computing truss robustness efficient." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper quantifies the effect of removing an edge from a graph on the truss decomposition result. The authors construct a dependency graph to compute truss robustness of each edge and propose a faster heuristic based on their theoretical findings. The authors also show the effectiveness and efficiency of the proposed truss robustness to the edge classification task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
I like the first half of the paper, including the whole idea and conceptualisation of truss robustness and subsequent optimisation. However, it is unclear how truss robustness or truss decomposition affects edge classification tasks. The authors present an interesting and computable quantitative metric for each edge, but where the metric can be effectively applied should be elaborated. It seems intuitive to me that there exists a significant portion of graph edge classifications that are not sensitive to truss robustness at all. \n2. The applicability of truss robustness seems slightly narrow due to the fact that it can be used only as a feature for edge classification. Truss robustness would also be expected to be useful for other edge-based tasks in graph representation learning, such as link prediction.\n3. Moreover, only one edge classification model, TER-AER, was reported in the experimental results, and more experiments to verify the effectiveness of truss robustness on edge classification tasks are expected. Given the results so far, it seems that the entire work only proposes a new feature for one model on the edge classification task, making it appear that the potential impact of the entire work is limited.\n\nAssorted minor comments:\n\n1. I recommend that all mentioned notations appear in Table 3.\n2. In time complexity analysis: $|E^{1.5}| \\rightarrow |E|^{1.5}$\n3. I suggest that the authors use a different notation for the set of edges sharing a triangle with a particular edge (i.e., $E(e, G)$), to distinguish it from the notation for the set of all edges of the graph.\n4. In Line 137, $ts(e,G)=\\Gamma_{\\geq}(e,\\phi(e)\\text{-truss})/2 \\rightarrow ts(e,G)=|\\Gamma_{\\geq}(e,\\phi(e)\\text{-truss})|/2$ ?
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Have you considered applying these measures to other edge-centric tasks beyond classification, such as link prediction or graph matching?\n2. How sensitive are the proposed measures to noise or small perturbations in the graph structure? Is there a way to quantify this sensitivity?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The study presented in the paper fills a gap in the literature.\n2. The toy exmple in Figure 1 is very helpful in understanding the concept.\n3. The proposed measures show potential in improving downstream tasks like edge classification, particularly for rare classes in imbalanced datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces novel measures for edge-based robustness in truss decomposition, a method for dense subgraph discovery. The authors propose constructing a dependency graph among edges to model truss robustness and introduce three measures: Edge Robustness, Edge Strength, and EdgeRank. They provide theoretical findings and an efficient algorithm for computing the dependency graph. 
The paper demonstrates the effectiveness of these measures in improving edge classification tasks using Graph Neural Networks (GNNs)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper primarily focuses on edge classification to demonstrate the effectiveness of the proposed measures. Exploring other applications could strengthen the work's impact.\n2. Comparison with core decomposition SOTA measures could provide more context for the proposed measures' effectiveness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In line 96, use the math symbol “$\\times$” instead of an English character “x”.\n2. In line 104, the word “cutting-edge” is overly strong.\n3. In Figure 2b, why use standard deviation (STD) to measure the importance of edge features? First, are all the features normalized to ensure their STDs are comparable? Second, if the goal is effective classification, why not use the idea of linear probing and report the classification performance of a linear classifier?\n4. In Section 3, last paragraph, continuing from the previous question, the role of this paragraph is unclear to me. These metrics are proposed to measure the truss robustness of an edge. Instead of showing how precisely these metrics measure the truss robustness, this paragraph shows they are useful for edge classification.
A paragraph showing how well these metrics measure robustness should be provided.\n5. In Section 4, what is the time complexity of the naive computation of the dependency graph? How much faster is the proposed algorithm?\n6. In Figure 3, the same problem as in Question 4: showing that the proposed metrics have different distributions from the existing ones does not justify their correctness. The important thing is to measure how accurate these metrics are in estimating truss robustness.\n7. In Table 2, the last two columns seem to be statistically tied. Could you provide the p-values from the t-test?\n8. In summary, in my opinion, it is important to study the truss robustness of the edges and have a fast algorithm. However, the writing of this paper and the title seem to emphasize its usefulness on edge classification. While the first part lacks crucial evaluations and data analysis, the second part lacks novelty. It would be helpful if the authors could clarify this." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Theoretical analysis is provided.\n2. The algorithm for fast computation is proposed based on theorems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to study edge-level truss robustness and improve the performance of edge classification.\nThe authors propose three metrics to measure the truss robustness based on the dependency graph.\nTo speed up the computation, they propose an algorithm based on the theorems of truss number computation.\nExperiments on edge classification have been conducted on six real-world graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing of the paper can be improved.\n2. Crucial evaluations of the proposed metrics are missing."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please refer to the weakness" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The abstract is well-written.\n2. The observation of this paper is insightful.\n3. The proposed method is interesting and mathematically grounded." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper, titled \"Robustness of Truss Decomposition and Implications for GNN-Based Edge Classification,\" addresses the sensitivity of truss decomposition in dense subgraph discovery. Truss decomposition is noted to be highly effective but sensitive to small changes, like edge removals, which significantly impact edge truss values. The authors propose a new framework for characterizing truss robustness on an edge level by constructing a dependency graph that captures the impact of each edge's removal on its neighbors. They further use the captured robustness and dependencies in downstream edge classification problem via GNN." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
In section 2, the paper introduces several truss-related concepts (e.g., truss number and trussness support), which can initially be confusing, especially the distinction between trussness support and truss number. An example would help clarify these concepts and highlight their differences. Additionally, the definition of trussness support in the formula (line 137) is missing the cardinality notation \"∣∣\" and should be corrected for clarity.\n2. In Figure 2(a), the dependency graph does not fully align with the truss number definition. For example, there should be a single directed edge between e2 and e1, and an edge should also exist between e2 and e5, right? Furthermore, the statement “as is the case (e3, e5) for which are incident on the left but not connected on the right” contradicts figure 2(a), as e3 and e5 are indeed unconnected in the dependency graph.\n3. In Section 3, the paper lacks a formula for computing EdgeRank, which reduces the transparency and reproducibility of the method.\n4. In Figure 2(b), Edge Robustness (ER) shows a relatively low standard deviation, yet no explanation is provided. It would be helpful to discuss why ER might show limited variability across classes.\n5. The Experiments section lacks a direct comparison with the baseline from Chen et al. (2021) and omits runtime data for other robustness indicators like RS_{OD}, RS_{ID}, degree, and core number, which makes the efficiency claims not fully supported by experimental results. Adding a comparison with Chen et al. (2021) and reporting the runtime of other measures would provide a more comprehensive evaluation of computational efficiency and better support the paper’s claims.\n6. According to Table 5, the improvement of the proposed method over coreness+degree is quite marginal, and perhaps coreness is easier to compute than the metrics proposed in this paper. Any justifications or explanations?\n7.
Are there other combinations (among metrics proposed in this paper and previous degree, coreness, etc) that could achieve better results? Seems they can be combined?\n\nHuiping Chen, Alessio Conte, Roberto Grossi, Grigorios Loukides, Solon P Pissis, and Michelle Sweering. On breaking truss-based communities. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 117–126, 2021." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We quantify edge-based truss robustness and show its practical use for edge classification." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024robustness,\ntitle={Robustness of Truss Decomposition and Implications for {GNN}-based Edge Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2QkWSUMQh5},\nnote={under review}\n}" }, "abstract": { "value": "Truss decomposition is an effective and practical algorithm for dense subgraph discovery. However, it is sensitive to the changes in the graph: dropping a few edges or a bit of noise can drastically impact the truss numbers of the edges. It is of practical importance to understand and characterize the robustness of truss decomposition. In this work, we study and utilize the robustness of truss decomposition in an edge-driven way. We propose to construct a dependency graph among edges to denote the impact of an edge's removal on the neighboring edges. By using the dependency graph, we introduce three measures to capture the diverse and unique properties of the edges. We provide theoretical findings and design an efficient algorithm to compute the dependency graph faster than the naive baseline. We also show that our new edge-based truss robustness measures capture intrinsic graph structures and have the potential to unearth peculiar differences that can help with various downstream tasks, such as edge classification. 
We integrate our measures into the state-of-the-art GNN for edge classification and demonstrate improved performance on multi-class datasets. The overhead of computing our edge-based measures is insignificant when compared to the training time. We believe that utilizing edge-based truss and robustness measures can further be helpful in edge-driven downstream tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph mining", "dense subgraph discovery", "truss decomposition", "robustness", "edge classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2c60d6b88fb3a3a2361fe30f9c805df355e4d412.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Robustness of Truss Decomposition and Implications for GNN-based Edge Classification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2R7498e2Tx
PersonalLLM: Tailoring LLMs to Individual Preferences
main
Active
Personalization;LLM;Alignment;benchmark;dataset;reinforcement learning from human feedback;language models;RLHF;preferences
datasets and benchmarks
5;5;6;8
5;4;4;3
2;2;3;3
2;2;2;4
2;2;2;3
6
4
2.5
2.5
2.25
-0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see the weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. PersonalLLM provides a way to enhance the personalization of LLMs, which is an impactful direction to enhance the user experience. \n\n2. The benchmark includes extensive open-ended prompts with responses from state-of-the-art LLMs. \n\n3. The paper highlights the use of meta-learning to address data sparsity issues by leveraging historical interactions, which is crucial for real-world applications where personalized models lack sufficient user-specific data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces PersonalLLM, a public benchmark designed to personalize Large Language Models (LLMs) to better align with individual user preferences. The benchmark focuses on simulating diverse personal preferences using a set of pre-trained reward models. The dataset consists of open-ended prompts paired with multiple high-quality LLM responses, and the goal is to optimize personalization by leveraging historical user data. 
Basic baselines, including in-context learning and meta-learning, are explored to showcase the utility of this benchmark, setting the stage for future research into personalization algorithms for LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The personal preference models used to simulate diverse user preferences are not convincing enough to represent real users. First, it is difficult to verify whether the linear combination of scores from reward models aligns with the distribution of user rewards in the real world. Second, the candidate responses generated by LLMs may not cover real-world user-specific responses, making it challenging for LLMs to learn user-specific preferences or align with user-specific backgrounds. For instance, users may have particular preferences or habits that general reward models inherently struggle to account for when providing accurate rewards.\n\n2. The paper lacks an overarching figure that illustrates the construction logic of the dataset and what the samples within the dataset look like.\n\n3. The paper lacks a comparison with other relevant personalized LLM benchmarks, such as the LaMP dataset.\n\n4. Some related concepts, such as 'interaction history', 'preference data', and 'user data', are not clearly defined."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Clearly answering and resolving the weakness questions can greatly help the reviewer target the focus of the paper. For the reviewer, these issues require a lot of time and careful polishing of the paper before they can be resolved. In addition, the reviewer would ask:\nWhat is the relationship between PERSONALLLM and recommender systems? Is it a replacement for existing ones or a more general preference-based system including RS? Why?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper uses multiple LLMs to generate various responses to improve the confidence of the dataset.\n2. The paper provides a specific analysis of the dataset." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper aims to propose a dataset called PERSONALLLM for the AI personalization area, which contains users’ preferences, each illustrated by a prompt with eight responses. Specifically, the user responses are generated by various LLMs, e.g., GPT4, Claude 3.\n\nThe authors then propose in-context learning and meta-learning methods as baselines for two scenarios from PERSONALLLM.
The results show that there is much room for improvement in solving the personalization problem in the proposed PERSONALLLM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is unclear about what the preference is in the data; is it user preference over items as in recommender systems, a replacement for NLP tasks, or something else?\n2. The paper is unclear about how PERSONALLLM is formulated; the author presents the reward model, but it is not clear how it is trained/built up.\n3. The author illustrates the heterogeneous preferences PERSONALLLM involves, which differ from homogeneous ones, but how these two kinds of preferences are demonstrated is not clear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "It's worth it to double-check that including the LLM responses in a dataset is within the relevant terms of use -- my impression is that generally they are, but it should be double-checked." }, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Are there other sources of high-quality reward functions that can be used?\n2) Were the leading LLMs used to sample the 8 preferences prompted with personas?"
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The paper proposes (as far as I know) an original method for generating diverse user preferences.\nQuality: The paper both creates a high-quality dataset and empirically validates that its methodology creates preferences at least as diverse as a persona-based method.\nClarity: The paper is clearly written.\nSignificance: The paper establishes a dataset and methodology for generating diverse user preferences, which is very important for studying LLM personalization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper builds a dataset of open-ended prompts and high-quality responses where users might be expected to have different preferences, introduces a method of sampling different user preferences based on reward models, and proposes different algorithms for personalization using data across multiple users. In addition, they empirically validate that their proposed method of sampling user preferences beats a baseline persona-based method for generating diverse user preferences." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper uses reward models from a leaderboard (as opposed to fine-tuning to persona data or something), which means that the reward models are all high-quality, but may result in reward models which are less distinct from each other than they might otherwise be. The paper clearly justifies this as not preventing their resampling method from reaching higher diversity than persona-based prompting, but are there other sources of high-quality reward functions that might be more different from each other?\n2) Similarly, were the leading LLMs used to sample the 8 preferences prompted with personas?
The different LLMs might be somewhat more similar to each other than they need to be, but of course resampling the dataset could be quite expensive, and the dataset is quite valuable as is." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* In line 76 it says “in doing so we are able to simulate an entire user base”. On the other hand it says in line 102 that “We do not claim our simulated personal preference models provide a high-fidelity depiction of human behavior”, so this may be a bit confusing and you may want to rephrase these statements. After reading the first one I was hoping for some evaluation of how realistic the simulated users are. This is actually done in “Comparison to Human Preferences” in Section 3, so I guess you are doing some of that? If the goal is to obtain high *coverage* rather than matching the distribution of users, perhaps this can be made explicit and possibly evaluated against real user behavior? Perhaps some measure of support instead of Wasserstein? 
It would be also interesting to compare the results in Figure 5 to those from standard personas baselines.\nActually, if the goal is coverage then random preferences should give better coverage, but are probably not very useful, so just optimizing coverage doesn’t seem to be a good objective.\nCan you please clarify the objective here?\n* Another potentially interesting baseline is to have each user choose one of the rewards, a hard choice instead of a weighted sum. There will only be 10 user “types”, so it may be interesting to see how the results change in that case.\n* Sometimes there are long multi-line sentences that could be simplified to improve readability and flow. It is easier to read a paper that has no sentences that span more than 2 lines. Some examples:\n * “Given the expected data sparsity in this setting, beyond a particular user’s data, such personalized language systems will likely also rely on historical data from other (similar) users to learn how to learn from a small set of new user feedback (see Figure 2).” Could be simplified/broken (by an LLM): “These personalized language systems will likely use more than just one user's data due to the expected data sparsity in this setting. They will also depend on historical data from other similar users. This helps them learn effectively from a small amount of new user feedback (see Figure 2 for more details).”\n * “We do not claim our simulated personal preference models provide a high-fidelity depiction of human behavior, but rather offer a challenging simulation environment that provides the empirical foundation for methodological innovation in capturing the complex array of human preferences that arise in practice.” Could be made easier to read (by an LLM): “We don't claim that our simulated personal preference models perfectly mimic human behavior. Instead, they offer a challenging simulation that provides a basis for developing new methods. 
This helps in better capturing the complex range of human preferences encountered in real life.”\n * “While human evaluation like that of Kirk et al. (2024) is a gold standard, wherein fine-grained preference feedback is gathered from a representative sample of diverse and multicultural participants, it is impractical or even impossible to get this feedback throughout the methodology development cycle, meaning that synthetic personal preference models will ultimately be needed.” I had to read this one slowly a couple of times…\n * Line 354: “Two first-order problems…” can be losslessly simplified to “Two problems…”.\n* Line 254: choosing only 500 personas may be too little if the goal is to achieve heterogeneity, especially since 1000 users are sampled for PersonalLLM. Can you please include results with 1000 personas? It may actually be interesting to see how the results change when increasing the sample size for both persona and PersonalLLM.\n* Line 257: “we can see that the top response receives a majority user vote for only about half of the prompts, while that figure is closer to 90% for the persona prompting baseline.” Sorry, I could not read that from the figure, can you please explain how the results show this?\nAlso in line 258: “Also, for roughly 60% of prompts, at least 5 different answers are chosen as the best by at least 1 under our set of personas; for LLM persona prompting, it is roughly 30%.” Please explain.\n* Line 274: “With respect to changes across the left 3 columns, we can observe that as α increases, preferences become more uniform. However, if α is set too low, user preferences cluster very tightly around the base reward models; we observe this behavior for α = 0.01.” — looking at the figure, it actually seems like there is not much difference between the first 3 columns. 
Is there a better way to show this difference?\n* Line 294: “In Figure 5 (right), we compare the entropy in the population preferences over the responses to a given prompt based on keywords, comparing words we would expect to inspire heterogeneity (e.g., imagine, opinion, poem) to prompts beginning with “who”, “when”, and “where”, which evoke more objective answers.” This was not clear to me, maybe add a formal definition and/or an equation for the entropy? Also, how do standard personas compare to the proposed approach in this task?\n* In Section 4.2, is it mentioned how response (and prompt) embeddings are computed?\n\nMinor/typos:\n* Line 32: Christiano et al., 2017, not 2023\n* In Figure 6 (left), the dashed line is missing from the legend. I am guessing this is the zero-shot performance." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The large-scale dataset could be useful for LLM development.\n* The alternative to persona-based simulated users seems novel." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new dataset of simulated preferences. The data consists of 10K prompts X 8 responses from different LLMs for each prompt X 10 rewards from different reward models. 1000 simulated users are sampled, where each user’s preferences are defined by a weighted sum of rewards (the weights are sampled from a Dirichlet distribution). The data is then used in in-context learning (ICL) for improving the LLM responses w.r.t. the user’s preferences.\n\nPersonalization is achieved by ICL, adding examples of good/bad responses according to the weighted reward. 
The results (Figure 6 left) show that using ICL with historical preferences can improve performance compared to zero-shot.\n\nLearning across users is proposed, retrieving other users with similar preferences from a set of simulated users, and using their preferences for ICL. The results (Figure 6 right) show a small improvement when using both positive and negative preferences compared to ICL using only the user’s history." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* It is stated in the paper that the goal is not to match preferences of a distribution of real users, but rather to generate diverse preferences that are more heterogeneous/diverse. I think that this requires more justification since random preferences would give even higher diversity but may not be useful.\n* Clarity/readability could be improved (see detailed questions)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024personalllm,\ntitle={Personal{LLM}: Tailoring {LLM}s to Individual Preferences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2R7498e2Tx},\nnote={under review}\n}" }, "abstract": { "value": "As LLMs become capable of complex tasks, there is growing potential for personalized interactions tailored to the subtle and idiosyncratic preferences of the user. We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user. Departing from existing alignment benchmarks that implicitly assume uniform preferences, we curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences. 
Instead of persona prompting LLMs based on high-level attributes (e.g., user race or response length), which yields homogeneous preferences relative to humans, we develop a method that can simulate a large user base with diverse preferences from a set of pre-trained reward models. Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms that grapple with continual data sparsity---few relevant feedback from the particular user---by leveraging historical data from other (similar) users. We explore basic in-context learning and meta-learning baselines to illustrate the utility of PersonalLLM and highlight the need for future methodological development." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Personalization", "LLM", "Alignment", "benchmark", "dataset", "reinforcement learning from human feedback", "language models", "RLHF", "preferences" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ef95d073d0933e544391f42678b1a19474978fa9.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c773caff40399be446d5213e0533de5c3f1e8c99.zip" }, "title": { "value": "PersonalLLM: Tailoring LLMs to Individual Preferences" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2RNGX3iTr6
Tabby: Tabular Adaptation for Language Models
main
Active
tabular;generative;llm;mixture-of-experts;synthesis;transformer
foundation or frontier models, including LLMs
1;3;5
3;5;3
1;3;2
1;2;2
2;3;2
3
3.666667
2
1.666667
2.333333
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Can you please detail the various architectures MMLP, MH and MMLP+MH? \nWhy does MMLP+MH underperform, even though it is more complex?\nDo you replace every layer with MoE?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The proposed method outperforms other methods on most datasets and metrics presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to use an MoE LLM fine-tuned on tabular data for synthetic table synthesis. The authors find that their method outperforms previous methods on table synthesis benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* There is no significant difference between Tabby and the non-Tabby (I assume no MoE?) baseline. Given that MoE has a lot more parameters, this is a negative finding.\n* The paper's contributions are very minor - applying MoE to a narrow problem (table generation). And the results are not all that strong.\n* It's not easy to tell from the presentation what exactly the tasks require, or what exactly the baselines and model variations are."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Have you conducted experiments on the recently released large language models? If yes, which model sizes did you choose?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Tabby achieves strong performance in benchmark evaluation. It generates high-quality synthetic tabular data in comparison with the baseline methods.\n2. The introduction of MoE shows effectiveness in helping the model understand tabular data structure and generate higher-quality tabular data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new model called Tabby for tabular data. Tabby is an architecture modification that enables transformer-based language models to synthesize more realistic tabular data. It introduces Gated Mixture-of-Experts layers to better model the complex interdependencies and diverse data types found in tabular datasets. Tabby outperforms previous tabular data synthesis methods, achieving outstanding performance on multiple benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The design of the MoE layer is complex.
For a table with V columns, the approach must design an MoE model with V experts to adapt to the table. This is not generalizable to data of diverse formats. It is suggested to modify the model design to be more compatible and more generalizable.\n2. Larger-scale experiments are advised. This study should provide experimental results on larger datasets and on more commonly used datasets.\n3. The experiments should also be conducted on contemporary large language models, such as Llama, Qwen, and Mistral, instead of Distilled-GPT2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: Have you computed the FLOPs for training on different datasets? It seems that Tabby uses a fixed pattern to organize tabular data, which may require more tokens for computation.\n\nQ2: Regarding Claim 2, could you provide a scaling curve showing performance relative to model size or data quantity? It would be interesting to see how Tabby impacts different models and how the amount of Tabby data influences the learning process. Additionally, a comparison of the scaling curve between Tabby data and natural data would serve as evidence of Tabby data being a scalable alternative to natural data.\n\nQ3: I'm not sure if the modification to the original network is necessary. Is there an ablation study?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The model modifications and data organization are well-motivated and intuitive.\n- The distribution of the synthesized data is very close to the natural data.\n- The experimental results look good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors present a tabular data synthesis approach, Tabby. The novelty of Tabby lies in two main aspects: (1) modifying the original transformer model by applying MoE-like techniques to better model tabular data, and (2) designing a specialized data format for tabular data. Experimental results show that Tabby achieves comparable performance to the previous state-of-the-art, Tab-DDPM, and outperforms GTT NT." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Tabby seems to achieve comparable results to Tab-DDPM, with a marginal performance gain in Table 2.\n- The method is quite simple and not very effective in terms of final performance." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Tabby, a post-training architecture modification to transformer-based Large Language Models, which enables the synthesis high-fidelity tabular data." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tabby,\ntitle={Tabby: Tabular Adaptation for Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2RNGX3iTr6},\nnote={under review}\n}" }, "abstract": { "value": "While advances in large language models (LLMs) have greatly improved the quality of synthetic text data in recent years, synthesizing tabular data has received far less attention.
Many of the top-performing approaches to this problem rely on techniques that adapt models originally developed for other modalities, potentially leaving generative performance on the table. We address these disparities in attention and performance for tabular data by introducing Tabby, a simple but powerful post-training modification to the standard Transformer-based language model architecture that enables its use for tabular dataset synthesis. Tabby relies on Gated Mixture-of-Experts layers, allowing each data column to be modeled by a dedicated set of parameters within the transformer multi-layer perceptrons or language modeling heads. Applying Tabby to Distilled-GPT2 improves synthetic data quality up to 7% compared to previous tabular dataset synthesis methods, achieving performance near or equal to that of real data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "tabular", "generative", "llm", "mixture-of-experts", "synthesis", "transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/97d9e718c09cc09476e56955554bf6edba8256f0.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0ac87dc92bf75d0560e22a3ebe5bc7372e9a19fd.zip" }, "title": { "value": "Tabby: Tabular Adaptation for Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2RQokbn4B5
Dataset Size Recovery from Fine-Tuned Weights
main
Active
Model Forensics
applications to computer vision, audio, language, and other modalities
3;3;5;5
4;4;3;4
3;2;2;2
2;3;2;3
3;3;3;3
4
3.75
2.25
2.5
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My questions are listed in Weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* This paper introduce a novel task correlating to model inversion and membership inference attacks. The size of the training dataset will produce extra knowledge for these tasks. Besides, the authors propose a benchmark for evaluation.\n* The paper is well-written and easy to follow.\n* The authors provide code for reproducibility check." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new task of dataset size recovery, which aims to infer the size of the training dataset used to fine-tune a pre-trained model.\nThrough experiments, the authors uncover a clear negative correlation between dataset size and the norm and spectrum of the fine-tuning weight matrices.\nLeveraging this insight, they propose the DSiRe algorithm to predict dataset size based on these spectral features.\nAdditionally, the authors propose the LoRA-WiSE benchmark for evaluating on this task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
**Lack of theoretical support:** Although the authors reveal a quantitative relationship between dataset size and the characteristics of fine-tuning weight matrices, their evaluation is limited to diffusion tasks, lacking broader empirical evidence. Furthermore, the authors do not provide theoretical insights or justification to explain why this relationship exists.\n2. **Experiments:** The authors should validate the effectiveness of the proposed method across a wider range of tasks, such as image classification.\n3. **Experiments:** The authors claim that knowing the size of dataset could aid in model inversion and membership inference attacks. Could the authors provide additional experiments to support this claim?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposes an interesting and, to the best of my knowledge, novel problem: recovering the dataset size based on fine-tuned model weights. This approach seems potentially useful for tasks such as model inversion and membership inference attacks.\n\n2. 
The paper constructs a large-scale dataset, including 2,000 diverse LoRA fine-tuned models along with corresponding fine-tuning dataset information, which could be valuable for future research.\n\n3. The observed correlation between fine-tuning dataset size and both the weight norm and spectrum provides meaningful insights. The results presented with the proposed method appear reasonable across the benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new task, called \"dataset size recovery,\" which aims to identify the size of the fine-tuned dataset based on changes in model weights before and after fine-tuning. The authors define a data-driven pipeline to achieve this: several fine-tuned weights and their corresponding dataset sizes are provided as training samples, and during testing, a newly fine-tuned model is given. The goal is to predict the dataset size of this test model. Specifically, they propose extracting spectral features from the model weights and using these features to predict dataset size with a nearest neighbor algorithm. For experiments, the authors introduce a new benchmark named LoRA-WiSE, where various stable diffusion models are fine-tuned with LoRA parameterizations across different dataset sizes. They demonstrate the efficacy of the proposed algorithm by presenting mean absolute error (MAE) scores across three data regimes: low (up to 6 samples), medium (up to 50 samples), and high (up to 1,000 samples)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method appears to predict the fine-tuning dataset size for a given model only when “training samples”—pairs of model weights and corresponding fine-tuning dataset sizes—are available. However, it remains unclear how one would construct these training samples in practice, particularly without prior information about the actual fine-tuning dataset used by the model. 
\n\n2. Beyond dataset size, other factors likely influence the norms and spectra of the learned weights, such as the diversity of the fine-tuning dataset or its divergence from the pretraining dataset. Without direct knowledge of the fine-tuning data, these factors remain uncontrolled. For instance, a model fine-tuned on a large but homogeneous dataset may exhibit more overfitting than one fine-tuned on a small yet diverse dataset, resulting in higher norms or spectral values. This raises concerns regarding the method’s practical applicability.\n\n3. As shown in Figure 2, the distinctions between different fine-tuning dataset sizes diminish as dataset size increases, making it unclear how effective the method remains for larger datasets.\n\n4. The experiments focus solely on a stable diffusion model, leaving questions about the method’s generalizability to other model types. Additionally, why is the method restricted to fine-tuned weights? Could it be extended to estimate the dataset size for a model trained from scratch, and would the trends observed in Figure 2 apply in that context?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please kindly see the weakness section" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This work studies an interesting topic, which aims to find out the training data size from a given fine-tuned model.\n\nThe proposed DSiRe method shows promising results in predicting dataset sizes, suggesting that the spectral and norm-based characteristics of fine-tuned weights are indeed useful signals for this task. \n\nThis work offers a practical resource for future research by proposing a benchmark kit." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the challenge of estimating the training data size of a fine-tuned pre-trained model. The authors find that the norms and spectral properties of model weight are correlated with the dataset size used during fine-tuning. Based on this insight, they propose an algorithm called DSiRe. DSiRe utilizes a nearest-neighbours approach to classify each layer independently, with the final dataset size prediction determined by a majority vote across layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors formulate dataset size recovery as a classification problem, whereas it may be more appropriate to approach this as a regression problem. 
Since dataset size is inherently a continuous variable, a regression framework might offer a more precise and interpretable estimation than classification.\n\nThe number of samples (1~1000) used in the experiment is very limited, which may not yield reliable conclusions in real-world scenarios.\n\nThe study does not discuss the potential effects of data augmentation on dataset size recovery. Given that data augmentation is a common practice in model training, understanding its impact on the proposed method's accuracy is crucial. It would be valuable to include experiments or discussions on how data augmentation could alter spectral and norm properties in fine-tuned weights.\n\nWhile the paper explores estimating dataset size, it would be insightful to discuss how this information could impact model inversion techniques or the general machine learning community. For example, does knowing the dataset size improve an adversary's ability to reconstruct original training samples?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to weaknesses above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The problem of dataset size recovery for foundation models is interesting.\n2. 
The correlation of dataset size to the Frobenius norm and singular values of the weight matrices is relevant.\n3. A benchmark with pre-trained weight matrices of foundation models for dataset recovery is released." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the problem of dataset size recovery for fine-tuned foundation models and consequently a strategy to infer dataset size using spectral analysis of the weight matrix. A benchmark is designed to evaluate various approaches for this problem and their proposed method is evaluated on it." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The analysis of the correlation between dataset size and the Frobenius norm and the singular values is underwhelming. It is not clear if this trend holds across different model architectures, and if so, no theoretical evidence is advanced for this correlation.\n2. The proposed method for dataset size recovery is way too simple to offer any insights.\n3. The authors only study dataset size recovery for foundation models fine-tuned with a few samples. However, this problem is very general and should be explored in a broader framework." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dataset,\ntitle={Dataset Size Recovery from Fine-Tuned Weights},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2RQokbn4B5},\nnote={under review}\n}" }, "abstract": { "value": "Model inversion and membership inference attacks aim to reconstruct and verify the data on which a model was trained. However, these methods cannot guarantee to find all training samples, as they do not know the training set size. 
In this paper, we introduce a new task: dataset size recovery, which seeks to identify the number of samples a given model was fine-tuned on. \nOur core finding is that both the norm and the spectrum of the fine-tuning weight matrices are closely linked to the fine-tuning dataset size. Leveraging this insight, we propose DSiRe, an algorithm that accepts fine-tuned model weights, extracts their spectral features, and then employs a nearest neighbor classifier on top, to predict the dataset size. Although it is training-free, simple, and very easy to implement, DSiRe is broadly applicable across various fine-tuning paradigms and modalities (e.g., DSiRe can predict the number of fine-tuning images with a mean absolute error of $0.36$ images). To this end, we develop and release LoRA-WiSE, a new benchmark consisting of over $25k$ weight snapshots from more than $2k$ diverse LoRA fine-tuned models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Forensics" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/92660b1cfd1def9a4adb688c380e9d1755dff7e4.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/bdea67824e2787687db3253ea6af4c8416f47cb1.zip" }, "title": { "value": "Dataset Size Recovery from Fine-Tuned Weights" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2RcTuBc4mA
Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow
main
Desk Reject
Attention Flow;Feature Attributions;Transformers;Barrier Regularization;Maximum Flow
interpretability and explainable AI
Behrooz Azarkhalili Aghmiyouni;Maxwell Libbrecht
~Behrooz_Azarkhalili_Aghmiyouni1;~Maxwell_Libbrecht1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": { "value": "Margin violation" }, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": { "value": "Submission Desk Rejected by Program Chairs" }, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper presents Generalized Attention Flow, an extension of Attention Flow that uses attention weights, their gradients, maximum flow, and the barrier method to define information tensors for generating feature attributions in Transformer models." }, "_bibtex": { "value": "@misc{\naghmiyouni2024generalized,\ntitle={Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow},\nauthor={Behrooz Azarkhalili Aghmiyouni and Maxwell Libbrecht},\nyear={2024},\nurl={https://openreview.net/forum?id=2RcTuBc4mA}\n}" }, "abstract": { "value": "This paper introduces Generalized Attention Flow, a novel feature attribution method for Transformer models that addresses the limitations of existing approaches. 
By generalizing Attention Flow and substituting attention weights with an arbitrary Information Tensor, the method leverages attention weights, their gradients, maximum flow, and the barrier method to generate more accurate feature attributions. The proposed approach demonstrates superior theoretical properties and resolves issues associated with previous methods that rely solely on simple aggregation of attention weights. Comprehensive benchmarking in NLP sequence classification tasks reveals that a specific variant of Generalized Attention Flow consistently outperforms state-of-the-art feature attribution methods across most evaluation scenarios, offering a more accurate explanation of Transformer model outputs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Behrooz_Azarkhalili_Aghmiyouni1", "~Maxwell_Libbrecht1" ] }, "authors": { "value": [ "Behrooz Azarkhalili Aghmiyouni", "Maxwell Libbrecht" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Attention Flow", "Feature Attributions", "Transformers", "Barrier Regularization", "Maximum Flow" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "aghmiyouni|generalized_attention_flow_feature_attribution_for_transformer_models_via_maximum_flow" }, "pdf": { "value": "/pdf/77bf7201d7f76cb90754860c7a126c14ffb9c5ba.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f3d613932cdd85ae594625205c0d411887360247.zip" }, "title": { "value": "Generalized Attention Flow: Feature Attribution for Transformer Models via Maximum Flow" }, "venue": { "value": "ICLR 2025 Conference Desk Rejected Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2RfWRKwxYh
Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation
main
Active
dataset distillation;self-supervised learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
5;5;6;8
4;5;3;3
3;3;3;3
2;3;3;3
2;3;4;3
6
3.75
3
2.75
3
-0.738549
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I've highlighted a few of the issues/suggestions for the Authors to consider in the rebuttal phase above in the Weaknesses Section. These are crucial in determining the significance of the work and wide scale adoption." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The key strengths of this paper include:\n\n1. More diverse datasets: Not many dataset distillation papers venture beyond the CIFAR/ImageNet datasets, however these authors included results on CUB2011 and StanfordDogs. Additionally, the ViT performance has been reported, and overall it appears that the authors performance improvement is maintained on Transformer architectures, albeit smaller.\n\n2. The basis and coefficient initialization ablation provides interesting insight into the sensitivity of the proposed framework.\n\n3. Personally, I found the use of the approximation networks to be a clever solution to reducing memory usage while preserving the essence of image augmentation. By learning a mapping between and subsequently the shift in distribution of the unaugmented distilled representation into it's augmented views, one can efficiently store simply the network rather than all the augmented views.\n\n4. 
Strong baselines: This work accurately surveyed some of the most seminal and current SOTA in the field of dataset distillation (with the exception of a few missing citations that should be added). I find the included competitive methods to be comprehensive enough to support the statements; however, further comments on the benchmarking are included in the Weaknesses section." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work targets the cross-architecture generalizability challenge in dataset distillation. When performing distillation, the data is often biased to the model used in the distillation process -- in this work the proposed self-supervised approach parameterizes the representations of images while studying/leveraging the effects of augmentations. This approach features a five-stage method involving pretraining a network on the source dataset, followed by image parameterization (encoding the images and augmentations via low-dimensional basis vectors), bi-level optimization on the images, approximation to handle the distribution/representation shift, and reconstruction of the images using the bases and learned features. The method reports strong performance improvement on a variety of datasets against most of the current SOTA methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite the interesting approach taken in this work, I find a few crucial weaknesses:\n\n1. I find that the experimental support is a bit lacking. As is common in Dataset Distillation works, it is generally good practice to show the scaling over different memory budgets (N) on various datasets, rather than just a single dataset, in order to show generalizability.\n2. 
I noticed that the resolutions on ImageNet scale to 64 x 64 -- however recently, the field has shifted to higher resolutions such as 128x128 or even 512 x 512 -- I think it would be important to see if the method can scale well to larger resolutions.\n3. I think another important criterion that should be included is Applications -- as alluded to in the paper, tasks like continual learning or neural architecture search (line 43) are important in the field, however none of these results were included in the main paper -- I think it is important to test the applicability of the method in order to determine significance and impact.\n4. Given that this approach involves multi-level optimization, I think efficiency metrics should be compared as well (time per step, GPU memory, etc.). -- This will demonstrate whether the gain in performance is justified over other methods when comparing the relative compute demands.\n\n[Minor] Some missing citations including DataDAM (ICCV'23), CAFE (CVPR'22)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The datasets in the experiments are CIFAR 100 and datasets with similar image attributes. I can understand it is possible to get a distilled dataset in a lab environment and the datasets are very feature-controllable. Can you show that your experiments would also be successful in other, different scenarios? 
For example, randomly captured images. \n\n2. Though this is a memory-saving method, a very large portion of the whole method is still compute-intensive. Do you have any benchmark to show that the whole method could be executed in an efficient way?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper demonstrates a very strategic parameterization.\nThe use of bases for image and representation parameterization is a sophisticated approach to compress dataset information without sacrificing accuracy. This addresses both storage efficiency and computational cost.\n\n2. Effective Augmentation Handling:\nBy predefining augmentations, the method successfully mitigates the bias introduced by random augmentations, a notable challenge in SSL distillation methods.\n\n3. Improved Memory Efficiency:\nThe inclusion of approximation networks to predict representation shifts from unaugmented to augmented views significantly reduces memory usage by eliminating the need to store augmented representations. This makes the approach more scalable.\n\n4. Transfer Learning Potential:\n\nThe method shows strong transferability to downstream tasks, making it particularly appealing for real-world applications where labeled data is scarce and transfer learning is critical.\n\n5. Ablation Studies and Hyperparameter Analysis:\n\nThe paper includes ablation studies that isolate the contributions of parameterization, augmentation, and approximation networks, offering clear insights into each component's impact on performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel approach to self-supervised dataset distillation aimed at reducing training costs by creating compact datasets that maintain model performance. 
This method, intended to address challenges in self-supervised learning (SSL) for dataset distillation, introduces three key contributions: 1. Parameterization 2. Predefined Augmentation and feature approximation 3. Optimization with approximation networks. Overall, they present a method that makes a meaningful contribution. \n\nThe paper introduces a solid contribution to self-supervised dataset distillation, with innovative approaches to parameterization, augmentation handling, and memory efficiency, upgrading the existing KRR-ST method. While the approach is complex, it provides a promising direction for reducing training costs in SSL, particularly in resource-limited settings. With further optimization and extension to diverse tasks, this method has the potential to make dataset distillation more accessible and applicable in real-world scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Complexity and Accessibility\nCritique: The method involves several sophisticated techniques, including low-dimensional basis parameterization, predefined augmentations, and approximation networks. This complexity may make it difficult for practitioners to implement and tune the method without extensive expertise in self-supervised learning and dataset distillation.\n\n2. Computational and Memory Trade-Offs\nCritique: While the method claims to be memory-efficient due to approximation networks, the additional computational overhead introduced by these networks might reduce the method’s overall efficiency, especially in resource-constrained environments.\n\n3. Dependence on Synthetic Data for Evaluation:\nThe experiments rely heavily on benchmark datasets like CIFAR100. However, these datasets have well-structured labels and relatively consistent image quality, which may not fully represent real-world data variability."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The topic is both valuable and practical, especially in the era of large datasets. While most current research on data distillation focuses primarily on classification tasks, which may be too narrow, this work seeks to improve self-supervised tasks. This approach is more general and can better support feature learning for downstream applications.\n\n2. The paper is well-written and easy to follow, with a straightforward method that is simple to understand. For each component, the authors clearly explain the rationale behind its inclusion.\n\n3. The experiments demonstrate the method’s effectiveness, as it consistently outperforms baseline methods in both transfer learning and linear probing tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a self-supervised data distillation method based on image decomposition. By initializing with principal components and learning the impact of data augmentation, the performance of the distilled dataset is enhanced. The experiments provide a comprehensive analysis of the method’s effectiveness." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I did not find any major weaknesses in this paper. However, there are some concerns regarding its novelty. The techniques employed are largely derived from previous work on data distillation for classification tasks. It would be helpful if the authors could clarify what unique challenges exist for self-supervised data distillation and how their method specifically addresses those challenges." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors provide more details about the approximation networks, such as the number of networks used, structure, and layers?\n2. Could the authors show a comparison of the distilled data sizes?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Reducing data size is a critical direction in self-supervised learning research.\n2. Fixing the issue of incorporating data augmentation into data distillation is important, as it significantly improves performance.\n3. The authors conduct a wide range of experiments, evaluating model performance with various network architectures and different numbers of training examples." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a method for dataset distillation based on KRR-ST. Two techniques are introduced: (1) PCA-based dimensionality reduction, which transforms images and their representations into lower-dimensional bases and coefficients; and (2) Data Augmentation, which employs predefined data augmentations and approximation networks to address the limitation of KRR-ST in utilizing data augmentation during dataset distillation. The authors conduct an extensive experimental evaluation and demonstrate significant improvements over previous baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed techniques in the paper are not new, such as PCA and augmentation approximation networks.\n2. The proposed technique leverages data augmentation while minimizing bias, and similar ideas have been explored in self-supervised learning. It is important to cmopare it with other analogous methods [1][2][3].\n\n[1] Improving Transferability of Representations via Augmentation-Aware Self-Supervision. NeurIPS 2021\n[2] Improving Self-supervised Learning with Automated Unsupervised Outlier Arbitration. NeurIPS 2021\n[3] RSA: Reducing Semantic Shift from Aggressive Augmentations for Self-supervised Learning. 
NeurIPS 2022" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024boost,\ntitle={Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2RfWRKwxYh},\nnote={under review}\n}" }, "abstract": { "value": "Although larger datasets are crucial for training large deep models, the rapid growth of dataset size has brought a significant challenge in terms of considerable training costs, which even results in prohibitive computational expenses. Dataset Distillation becomes a popular technique recently to reduce the dataset size via learning a highly compact set of representative exemplars, where the model trained with these exemplars ideally should have comparable performance with respect to the one trained with the full dataset. While most of existing works upon dataset distillation focus on supervised datasets, \\todo{we instead aim to distill images and their self-supervisedly trained representations into a distilled set. 
This procedure, named as Self-Supervised Dataset Distillation, effectively extracts rich information from real datasets, yielding the distilled sets with enhanced cross-architecture generalizability.} Particularly, in order to preserve the key characteristics of original dataset more faithfully and compactly, several novel techniques are proposed: 1) we introduce an innovative parameterization upon images and representations via distinct low-dimensional bases, where the base selection for parameterization is experimentally shown to play a crucial role; 2) we tackle the instability induced by the randomness of data augmentation -- a key component in self-supervised learning but being underestimated in the prior work of self-supervised dataset distillation -- by utilizing predetermined augmentations; 3) we further leverage a lightweight network to model the connections among the representations of augmented views from the same image, leading to more compact pairs of distillation. Extensive experiments conducted on various datasets validate the superiority of our approach in terms of distillation efficiency, cross-architecture generalization, and transfer learning performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "dataset distillation", "self-supervised learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/57912b41d57eefeb03bfbcf50775f686aea246d2.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Sn0ty7zoI
Learning through Conditioning on Natural Language Feedback
main
Withdraw
Social Learning;Natural Language Feedback;Instructive Learning
foundation or frontier models, including LLMs
Dylan Hillier;Cheston Tan;Jing Jiang
~Dylan_Hillier1;~Cheston_Tan1;~Jing_Jiang1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "In retrospect rushed and not ready for review, don't want to waste reviewers time" }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": { "value": "We explore whether we can finetune language models by letting them generate answers conditioned on prior feedback." }, "_bibtex": { "value": "@misc{\nhillier2024learning,\ntitle={Learning through Conditioning on Natural Language Feedback},\nauthor={Dylan Hillier and Cheston Tan and Jing Jiang},\nyear={2024},\nurl={https://openreview.net/forum?id=2Sn0ty7zoI}\n}" }, "abstract": { "value": "In this paper we explore the simple idea of teaching models by allowing them to condition their answers on natural language feedback. Motivated by the idea that natural language interactions provide a targeted, flexible, and level-appropriate reward signal, we study the ability of small instruction-tuned models to leverage feedback from a larger frontier model. 
We find that while the frontier model provides generally high-quality feedback, smaller models in particular can struggle to use it due to noise in their generative output. After incorporating techniques like negative sampling, we find that models trained on these feedback-conditioned responses can perform similarly to those trained directly on teacher responses. We explore training using supervised finetuning and preference learning algorithms over a broad set of tasks including Big-Bench Hard. These findings are broadly applicable and our methods rely only on the ability of models to give and receive linguistic feedback. As such, they contribute to a growing body of work exploring how to best utilise the linguistic capabilities of language models for human-like instructive learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Dylan_Hillier1", "~Cheston_Tan1", "~Jing_Jiang1" ] }, "authors": { "value": [ "Dylan Hillier", "Cheston Tan", "Jing Jiang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Social Learning", "Natural Language Feedback", "Instructive Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "hillier|learning_through_conditioning_on_natural_language_feedback" }, "pdf": { "value": "/pdf/602938364839100b831b463126306b129a3e6944.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": { "value": "No" }, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning through Conditioning on Natural Language Feedback" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2TIYkqieKw
DICE: Data Influence Cascade in Decentralized Learning
main
Active
Decentralized Learning;Data Influence
interpretability and explainable AI
3;5;6
3;4;2
2;3;3
2;3;3
2;3;3
4.666667
3
2.666667
2.666667
2.666667
-0.327327
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The DICE framework is the first to systematically measure the cascading propagation of data\ninfluence in decentralized learning environments, providing an effective method to assess data\ncontributions among nodes and filling a gap in data influence evaluation within decentralized\nnetworks.\n2. The experiments cover different network topologies (such as ring and exponential graphs) and\ndatasets (such as MNIST, CIFAR-10, and CIFAR-100), validating the applicability and consistency\nof the DICE framework across various scenarios.\n3. The DICE framework provides accurate contribution measurement, laying the foundation for\ndesigning fair and effective incentive mechanisms in decentralized learning systems, with the\npotential to foster equitable collaboration within decentralized networks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the DICE framework for measuring the cascading propagation of data influence\nin decentralized learning networks. 
Decentralized learning enables large-scale model training through distributed computation, yet the lack of effective incentive mechanisms can lead to unfair contributions and malicious behavior among nodes. The DICE framework introduces data influence cascades (DICE-GT and DICE-E), which respectively measure the direct and indirect influence of data within the network, addressing the limitations of existing data influence measurement methods in decentralized environments. Experiments validate the consistency and accuracy of DICE across various network topologies and demonstrate its potential in practical applications like anomaly detection and collaborator selection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Figure 1 lacks legend information, making it difficult to understand.\n2. The performance differences of the DICE framework under different parameters (such as learning rate, batch size, etc.) have not been thoroughly discussed. It is recommended to add parameter sensitivity experiments to demonstrate the impact of different parameter selections on the performance of the DICE framework, thereby enhancing its practicality." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) Please motivate the approach with practical use-cases. 
\n2) Please discuss the link with clustered federated learning, in particular techniques that use gradients to cluster clients. \n3) Please provide all necessary details to replicate the results.\n4) Please evaluate the impact of batch size (smaller and larger values), to show the scalability of the technique and its robustness in showing the compatibility among clients." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-organized, with clear definitions, figures, and explanations that make the methods and results easy to follow.\n- The paper provides a solid theoretical framework, supported by rigorous proofs and analyses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for quantifying the impact of data points in decentralized machine learning settings. The influence is measured not only at immediate neighbors but across the entire network. This method can be useful for machine unlearning or to develop new incentive mechanisms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Need for more details about the practical use of this technique: While the authors use LLMs as one of the examples in the introduction, it might not be the best example to use in this case. It is hard to see how this research addresses a practical problem or application that has real-world significance, or how this framework would be relevant for practitioners.\n- The link with other papers that use gradients to cluster clients should be added, particularly interesting and relevant in the collaborator choice part. \n- Experiments seem non-exhaustive and many details are missing to replicate the experiments. For instance, no indication of what the anomaly is versus a normal client. 
This is particularly important when using gradients. I expect that the framework would perform differently if the anomaly is label flipping versus noisy features. Additionally, evaluation of the impact of batch size would be particularly important for both scalability and compatibility among clients." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper summarizes previous work on measuring data influence and highlights the gaps in applying these methods to distributed scenarios.\n2. This paper proposes a sound “gold standard” and its first-order approximation to quantify individual contributions in decentralized learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes DICE as a framework for measuring data influence cascades in decentralized environments. The framework explains how data influence propagates through the communication network, emphasizing the interaction between the original data and the network structure in shaping data influence within decentralized learning. 
The experimental results show that the first-order approximation of the “gold standard” for evaluating data influence in decentralized environments can approach the truth, and this framework can be used for detecting mislabeled anomalies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments are weak, and Section 5.3 is unfinished.\n2. The notation η^t in Theorem 1 previously appears as η_t in Algorithm 1." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce DICE, the first comprehensive framework for measuring data influence cascades in decentralized learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024dice,\ntitle={{DICE}: Data Influence Cascade in Decentralized Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2TIYkqieKw},\nnote={under review}\n}" }, "abstract": { "value": "Decentralized learning offers a promising approach to crowdsource computational workloads across geographically distributed compute interconnected through peer-to-peer networks, accommodating the exponentially increasing compute demands in the era of large models. However, the absence of proper incentives in locally connected decentralized networks poses significant risks of free riding and malicious behaviors. Data influence, which ensures fair attribution of data source contributions, holds great potential for establishing effective incentive mechanisms. Despite its importance, little effort has been made to analyze data influence in decentralized scenarios, due to non-trivial challenges arising from the distributed nature and the localized connections inherent in decentralized networks. To overcome this fundamental incentive problem, we propose DICE, the first comprehensive framework for analyzing Data Influence CascadEs in decentralized environments. 
Our framework characterizes how data influence cascades across the communication network and highlights the interplay between original data and network structure in shaping data influence in decentralized learning. We anticipate that DICE can open new avenues for incentive mechanism design and enable impactful applications of influence in decentralized learning, including anomaly detection, collaborator selection and machine unlearning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Decentralized Learning", "Data Influence" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1b8dc27ac069bcb6f0990d8dbae30490fdb0bbb9.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DICE: Data Influence Cascade in Decentralized Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2TasVD7FXp
InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma
main
Active
multi-agent reinforcement learning;climate change;ai for climate
datasets and benchmarks
5;5;6
4;4;4
3;2;4
2;2;2
3;3;3
5.333333
4
3
2
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think some of the key points to consider are:\n- questioning assumptions, e.g. rationality of the agents, why they would behave as if employing RL, etc.\n- convincing us that simulating a system with a limited number of agents provides us with insights that are relevant for systems for very large number of agents\nI clearly recognise that such issues may be more generally valid for the case of MARL environments and broader than for the case of this paper only. However, here, in view of the importance of the application, I find these issues particularly relevant." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strength of the paper is the topic is focuses on, and the idea to bring some momentum towards the development of a general platform to simulate a multi-agent system with focus on climate risk and company behaviour. Another strength (but which may also be seen as a weakness - see below) is that the framework is simple, in the sense that it is easily interpretable and flexible enough to interact with e.g. policy makers. The authors also aim to produce some relevant results, which may be seen as of value by policy-makers." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper concentrates on developing a multi-agent reinforcement learning framework (which they name InvestESG), to be used to student individual and collective outcomes from company investment and climate risk. It is an overall well written paper on a timely and relevant problem, for which we clearly need to better understand how to drive investors decisions to align individual and collective objectives. The paper is completely application-driven, in the sense that the authors do not develop new methodology, or a new solution approach. They mainly focus on describing how the framework should look like for the purpose of the application. They then generate a lot of simulation results to study various aspects of the problem (e.g., greenwashing, different levels of ESG consciousness, etc.)" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As mentioned by authors in a last part of the paper, maybe the main weakness is the simplicity of the framework, which may prevent a broad audience from accepting that it realistically model a real-world situation, and that it may be ring some relevant insights to be used as input to policy-making. In my opinion, it feels like an oversimplified and stylised approach where, depending on a few assumptions and a few modelling changes, we could get the model to do completely different things. Therefore, I believe that quite more work is necessary for such a paper, starting with the importance of the underlying assumptions, assessing the impact of modelling choices, sensitivity analyses, etc. I am not critical of the fact the authors are engaging in such developments - I am saying instead that I feel more work is necessary before sharing this work/paper with the world." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think the paper could be extended in several possible directions, as indicated in the Weaknesses section, for a more significant contribution. Another possible direction would be to implement and assess the impact of other PPO policies than IPPO on the overall behaviour and insights.\n\nPotential relevant papers are suggested below.\n\nBisaro, Alexander, and Jochen Hinkel. \"Governance of social dilemmas in climate change adaptation.\" Nature Climate Change 6, no. 4 (2016): 354-359.\n\nBettini, Matteo, Amanda Prorok, and Vincent Moens. \"Benchmarl: Benchmarking multi-agent reinforcement learning.\" Journal of Machine Learning Research 25, no. 217 (2024): 1-10.\n\nBettini, Matteo, Ryan Kortvelesy, and Amanda Prorok. \"Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning.\" arXiv preprint arXiv:2405.15054 (2024).\n\nBrogi, Marina, Antonella Cappiello, Valentina Lagasio, and Fabrizio Santoboni. \"Determinants of insurance companies' environmental, social, and governance awareness.\" Corporate Social Responsibility and Environmental Management 29, no. 5 (2022): 1357-1369." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper represents a novel contribution in a highly relevant, high-impact domain, at the intersection between climate change and MARL. It is beautifully written and self-contained, with rigorous specifications of the InvesESG environment. The implementation details and code are provided, and overall the paper makes a good case for a MARL benchmark for studying climate investment through the social dilemma paradigm, via two agent types: companies and investors. InvestESG is designed to simulate and analyse the impact of varying Environmental, Social, and Governance (ESG) disclosure policies on corporate climate investments. In InvestESG, companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. The findings are consistent with empirical research using real-world data. The results capture the positive impact of companies using information about global climate risks to determine their level of investment in mitigation, even without investor involvement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents InvestESG, a novel multi-agent reinforcement learning (MARL) benchmark designed to simulate and analyse the impact of varying Environmental, Social, and Governance (ESG) disclosure policies through the social dilemma paradigm. InvestESG uses two types of agents: companies and investors. Companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. The findings are consistent with empirical research using real-world data. 
They capture the positive impact of companies using information about global climate risks to determine their level of investment in mitigation, even without investor involvement. The paper is beautifully written and rigorous." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of the paper consists in its simplifying assumptions, in terms of the types of agents, and the considered scenarios, analysis and discussions. These limitations are acknowledged in the paper. Due to these reasons, I believe that, in the current format, the paper makes an insufficient contribution for a top conference like ICLR.\n\nFor a more significant contribution, this work could be extended in one or more possible directions: extend the agent types (possibly consider insurance companies/market?), add more complex agent behavior, learn parameters and behaviors from real data, include more social outcome metrics (in addition to the final climate risk level and the final total market wealth, at the end of the simulation period) and/or include additional features, such as agent bankruptcy, and a dynamic number of agents.\n\nAssuming the agent-types remain just companies and investors, increasing the number of companies and investors, and learning their behavior from real world data, may be a sufficient extension for a more significant contribution.\n\nIn the longer term, the initial vision of InvestESG would benefit from a more diverse agent space, for a more realistic climate-change problem specification (however, this is not essential for a significant contribution)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What does a sensitivity analysis of the Beta parameter do to the results?\n\n- How would more longer capital investment timelines (e.g. min 5 year lock-in) impact the trained agents?\n\n- Do observations include past climate events?\n\n- How did you calibrate the 0.5% investment of capital into mitigation?\n\n- Why do you think that agents are so insensitive to the value of Beta as shown in figure 6 b)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "# Originality\n\n* **Novel MARL application to ESG disclosures**: Even though Zhang et al. explore MARL in the policy space, as far as I know, this is the only MARL simulator that looks at ESG disclosure impact in this scenario.\n\n* **Novel problem formulation**: the authors cleanly describe the relationship between companies and investors with a two agent type system, as well as an ESG disclosure component.\n\n# Quality \n\n* **Relevant problem setup**: key decisions are captured by the problem setup. The ESG disclosure abstraction is simple and elegant. 
The reward structure effectively represents a social dilemma.\n\n* **Extensive experimental results**: the authors go through many scenarios with InvestESG to analyze different outcomes.\n\n# Clarity\n\n* The paper is **well structured**, and makes for a smooth read with little to no cognitive breaks.\n\n* The work is **well situated** within the literature on MARL simulators, and they contrast well with similar work.\n\n* The design and implementation of InvestESG is **clearly laid out**. \n\n* The work makes judicious use of **relevant visualizations**, such as Schelling diagrams.\n\n# Significance\n\n* The analysis is **timely and relevant** given the current discussions around ESG disclosures.\n\n* The conclusions around the preferences of investors for climate-active companies is impactful.\n\n* The use of MARL to study social dilemmas is an important subject of study." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents InvestESG, a MARL environment that studies the impact of ESG disclosures on company and investor agents. 
The benchmark is meant to simulate companies' investment decisions into climate mitigation, greenwashing and resilience spending as a social dilemma.\n\nSpecifically, the contributions are:\n\n* InvestESG, a climate-economic environment in which investors fund companies, which make decisions about how much to invest in climate-related spending over 100 years starting in 2020.\n * Climate risks grow linearly in the absence of any mitigation\n * Companies decide how much to spend on mitigation, greenwashing and resilience\n * Investors decide which companies to invest in based on their preferences, which trade off between profits and climate efforts documented by ESG disclosures.\n * As the simulation proceeds, companies make profits which they return to investors, while climate risks grow, resulting in a higher probability of extreme events.\n * Agents are modelled using IPPO\n\n* A set of experiments shedding light on agent behaviour in InvestESG\n * With no ESG disclosures, purely profit-driven decisions result in suboptimal collective outcomes\n * The impact of ESG disclosures depending on how many and how much investors care about ESG reports when choosing which companies to invest in\n * Whether companies leverage greenwashing when it is allowed in InvestESG\n * Whether visibility of the climate-related risk probabilities impacts agent behaviour\n\n* Conclusions for policymakers and researchers\n * Mandatory ESG disclosure paired with ESG-conscious investors can drive corporate mitigation efforts.\n * Knowledge of climate risks motivates investors and companies\n * Agent behaviour is consistent with empirical evidence\n * InvestESG is an example of using MARL to tackle complex social dilemmas in real-world, high-impact domains" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Soundness\n\n* **The economic agents are not grounded in the economics literature**.
This leads to issues such as capital being perfectly flexible across time steps. In traditional economics models, investments in capital last and they are not flexible. Here, there seems to be an implied assumption of perfectly flexible capital, which is unrealistic. Starting with an existing model of economic agents (with a citation), highlighting its limitations for InvestESG and then explaining how you extend the agent to accommodate these limitations would be a much more compelling presentation. \n\n* **Investor decisions are binary**, as opposed to continuous across all companies. Making investor decisions floats (i.e., a vector whose sum is capped at one) would allow for proportional investments across different companies. This is essential for investor diversification, which would also enable interesting extensions like regional damages to companies (i.e. climate events could affect subsets of agents either chosen at random or chosen somehow).\n\n* Figure 7 b) is highly confusing. It looks like **with climate information, risk is *maximized* and market wealth is *minimized***. I'm not sure what exactly is going on in this plot, but it doesn't fit with the storyline of the paper. That is, it certainly does not look like more information improves decision making in this plot, if anything the effects of more information are catastrophic for both climate risk and market wealth.\n\n* Figure 2b could be improved by showing the average number of events at each year across many episodes, as opposed to a single episode. \n\n* The **number of agents is limited**. Granted, it is more than 2. However, it would be interesting to scale it up to more and see what types of behaviour emerge. There are group size effects that can emerge at scale in economics, e.g. see https://www.aeaweb.org/articles?id=10.1257/mic.20200290.
This shows in section 9.2 of the paper in the appendix, but given the implications of such a result, it would be very important to expand upon these results.\n\n# Presentation\n\n* The paper is well structured, but the plots are a pain to read. The labels and ticks are too small, and the axes are not annotated. \n\n* If you use a pdf format for your images instead of png, you can avoid the graininess when zooming in, which is necessary because of the label sizes.\n\n* The results section could benefit from additional structure. It would be less dense and easier to read if you highlighted which of your results you consider the main results, and which you consider additional.\n\n* I found the description of Schelling diagrams fairly unclear, it took me a minute to get it.\n\n* It should be ICLR 2025, not 2024. Please make sure the template you used is up to date.\n\n* Inconsistent use of \"MARL\" and \"multi-agent RL\"\n\n- Might benefit from a problem setting section, where you introduce important concepts like bifurcated equilibria\n\n- Typo: 3.2 \"self-interest\" -> \"self-interested\"\n\n# Contribution\n\n* The importance of the contributions is weakened by what Figure 9 d) is suggesting, since there are many companies in the world. It seems to me that, without addressing the concerns raised by this result, your conclusions for policymakers do not hold." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduced InvestESG, a novel multi-agent reinforcement learning (MARL) benchmark designed to study the impact of Environmental, Social, and Governance (ESG) disclosure mandates on corporate climate investments."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024investesg,\ntitle={Invest{ESG}: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2TasVD7FXp},\nnote={under review}\n}" }, "abstract": { "value": "InvestESG is a novel multi-agent reinforcement learning (MARL) benchmark designed to study the impact of Environmental, Social, and Governance (ESG) disclosure mandates on corporate climate investments. The benchmark models an intertemporal social dilemma where companies balance short-term profit losses from climate mitigation efforts and long-term benefits from reducing climate risk, while ESG-conscious investors attempt to influence corporate behavior through their investment decisions. Companies allocate capital across mitigation, greenwashing, and resilience, with varying strategies influencing climate outcomes and investor preferences. Our experiments show that without ESG-conscious investors with sufficient capital, corporate mitigation efforts remain limited under the disclosure mandate. However, when a critical mass of investors prioritizes ESG, corporate cooperation increases, which in turn reduces climate risks and enhances long-term financial stability. Additionally, providing more information about global climate risks encourages companies to invest more in mitigation, even without investor involvement. Our findings align with empirical research using real-world data, highlighting MARL's potential to inform policy by providing insights into large-scale socio-economic challenges through efficient testing of alternative policy and market designs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multi-agent reinforcement learning", "climate change", "ai for climate" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d1647f343e8ca13d83abdf353fa7cf8c7bc2d7e7.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/50dbe817ace9436383dc67ab587772e3b4c554bf.zip" }, "title": { "value": "InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2TiU1JTdSQ
Selective LoRA for Domain-Aligned Dataset Generation in Urban-Scene Segmentation
main
Active
Dataset Generation;Urban-scene Segmentation
applications to computer vision, audio, language, and other modalities
3;5;5;6
5;4;4;5
2;3;3;3
2;3;2;3
2;3;3;1
4.75
4.5
2.75
2.5
2.25
-0.229416
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see Weaknesses. If the authors address my concerns, I will raise my rate.\nAdditionally, It would be better if:\n1. Provide quantitative metrics on the quality of the generated segmentation maps, comparing them to ground truth or to maps generated by other methods. Discuss any observed differences in segmentation map quality between their method and baseline approaches, particularly in relation to the selective learning process.\n2. If possible, include a qualitative analysis (e.g., visual examples) of any artifacts or inconsistencies in the generated segmentation maps that might be attributed to the selective learning process." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-organized, the motivation description is clear and concise. The pipeline figures are easy to understand.\n2. The core idea is interesting, use language distinction to learn concept difference, and eliminate the requirement of paired visual data to learn specific concepts. I think the learning procedure is practicable. \n3. The experimental settings are extensive, including both in-domain, few-shot and damain generalization." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new approach to generate training samples with sepecific concepts variants. The proposed method learn specific concepts, such as style or viewpoint, by selectively manipulate the gradients. The method claim that it improve domain alignment and sample diversity. In experiments, the method are compared with baseline and the DatasetDM, and results show improvements in in-domain, few-shot segmentation. The wide-scope experiments prove its practicability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method mainly focus on introduce their Selective LoRA. However, the whole pipline includes training a label generator (stage 3 in Figure 2). The technical details in this part are not well delivered. How the label generator receive the intermediate features from T2I models and generate semantic maps? In addition, in line 190-197, the authors say their use Mask2Former as label generator as same as DatasetDM. As I know, the DatasetDM use only \"perceptual decoder\" which only includes a decoder architecture instead of whole Mask2Former segmentaiton. Clarifying this distinction could provide a clearer understanding of the contributions of the current approach.\n2. While the method aims for simultaneous sample and segmentation map generation, it requires a two-stage training process for the T2I model and label generator separately, contrasting with DatasetDM’s one-stage training. This additional stage could indeed limit practicality for real-time or large-scale augmentation, and a comparison in training efficiency or practical adaptability would be beneficial.\n3. The dataset used for evaluation is Cityscapes and BDD100K, which includes only city streets. Since singlar scene makes learning specific concepts changes easiler, it would be improved if the authors prove their method on more general dataset, e.g. 
COCO and ADE20K. Since the main comparison method DatasetDM uses more general datasets, I wonder about the performance of Selective LoRA on other datasets.\n4. Does the selective learning process affect the reliability of the generated segmentation maps? The authors do not seem to provide a relevant discussion.\n5. Minor errors: the boxes for viewpoints and styles in stage 4) of Figure 2 are reversed.\n\nReference:\n[1] DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models, NIPS 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1: From the design motivation, it seems that training all LoRA parameters may lead to overfitting, resulting in reduced diversity in the generated images. In contrast, Selective LoRA selects a subset of parameters that are most associated with the concepts, effectively training fewer parameters and better preserving the original T2I model's capabilities. The original LoRA setting applies training to all linear layers in the UNet with LoRA. I wonder if training LoRA only on certain layers' cross-attention (few parameters) could achieve a similar effect as Selective LoRA.\n\nQ2: I hope the authors can address the concerns raised in the \"Weaknesses\" section."
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1: The paper \"Selective LoRA\" introduces a new training strategy for fine-tuning pretrained T2I models to generate diverse datasets for segmentation tasks, addressing the challenge of data scarcity.\nS2: Extensive experiments show that the generated datasets enhance the performance of prior segmentation models in urban-scene segmentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method for fine-tuning pre-trained T2I models to generate datasets specifically for urban-scene segmentation, addressing the challenge of data scarcity. Traditional methods often utilize pre-trained T2I models directly or apply LoRA for fine-tuning, which can lead to generated samples that fail to align with the target domain or lack diversity. To overcome these issues, the paper introduces Selective LoRA, a novel fine-tuning approach that selectively identifies and updates the weights that are most closely associated with specific concepts for domain alignment. This approach reduces the number of parameters that need training, improving training efficiency while ensuring that the original T2I model's generalizability is preserved. Extensive experiments demonstrate that the generated datasets improve the performance of previous segmentation models in urban-scene segmentation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The writing in this paper is somewhat challenging to understand, and the technical descriptions lack clarity, which can lead to confusion during reading. Below are a few examples, though not exhaustive. For instance, in Figures 3 and 4, what do the layer indices represent? 
Are they the projection layers for all attention in the network? However, according to the description in line 251, it seems that all linear layers in the network are being trained with LoRA. Additionally, Section 3 only covers the first two stages, with the third and fourth stages not being described in detail, making this part less clear. The structure of the experimental section is also somewhat disorganized. \n\nW2: The design of the tables lacks standardization, leading to confusion for the reader. Here are a few examples, though not exhaustive. For instance, many tables do not clearly explain what the numerical values represent, making interpretation difficult. In Table 3, the baseline names should be listed directly. Additionally, the entries under the \"Data Ratio\" column correspond to various methods, which creates some confusion. Furthermore, for the methods used to generate datasets that enhance baseline performance in Table 3, it would be clearer to label them as \"Baseline + Real FT\" rather than just \"Real FT.\"\n\nW3: Additionally, I noticed that the baseline appears to be from a 2022 paper. Are there any more recent baselines available for comparison?\n\nW4: Some modules may not appear particularly novel from a technical perspective. LoRA is also commonly used in various papers." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Additional minor questions are listed as follows.\n 1. The method uses selective LoRA to solve the problem of data scarcity in cross-domain segmentation. However, there are some similar methods to select LoRA weights in other fields, like LoRA-SP [3], GS-LoRA [2], Tied-LoRA [1] etc.. The authors should discuss these papers. \n\n[1] Tied-LoRA: Enhancing parameter efficiency of LoRA with Weight Tying \n\n[2] Continual Forgetting for Pre-trained Vision Models \n\n[3] LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models\n\n2. The authors should compare more text-driven or image-driven generated dataset baselines, such as Instruct-Pix2Pix [4], PTDiffSeg [5], DATUM [6], etc.\n\n[4] InstructPix2Pix: Learning to Follow Image Editing Instructions \n\n[5] Prompting Diffusion Representations for Cross-Domain Semantic Segmentation \n\n[6] One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow.\n2. The idea of using concept loss to find important weights is reasonable.\n3. The ablation studies and analytical experiments are interesting and inspiring." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the challenge of data scarcity in semantic segmentation by generating datasets through fine-tuned text-to-image generation models. Existing methods often overfit and memorize training data, limiting their ability to generate diverse and well-aligned samples. 
This paper proposes Selective LoRA, which selectively identifies and updates only the weights associated with necessary concepts for domain alignment while leveraging the pretrained knowledge of the image generation model to produce more informative samples.\nThe authors demonstrate its effectiveness in generating datasets for urban-scene segmentation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method outlines how to fine-tune LoRA for generating informative target-style images. However, the definition of \"informative samples\" is not clear. This lack of clarity may hinder the reader's understanding of the intended contributions. For example, Figure 1 would benefit from including examples of informative data to provide a clearer context for what constitutes an informative sample. \n\n2. In Figure 1(b), the results of the LoRA-finetuned images for both foggy and night-time conditions appear remarkably similar, suggesting that the fine-tuning process may not have effectively differentiated between these two target styles. It raises concerns about the method's capability compared to the pretrained approach. \n\n3. The proposed Selective LoRA generates images in a specific style and containing particular content, but it lacks a comparative analysis with existing text-driven diffusion models, such as Instruct-Pix2Pix. A comparison in terms of both the quality of generated images and adaptation performance would significantly enhance the paper's contributions and provide the reader with a clearer understanding of how the proposed method stands in relation to established techniques.\n\n4. I find the results presented in Table 3 somewhat confusing. If I understand correctly, the baseline results are derived from fine-tuning Mask2Former using generated images, while RealFT represents the results from fine-tuning on real data.
However, it is unclear how the authors obtained the labels for the generated data. Were these results obtained through an unsupervised training approach, or was an additional decoder trained similarly to DatasetDM?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "I believe that the proposed method demonstrates sufficient technical novelty and shows effectiveness through its quantitative experimental results. However, I think it would be beneficial for the authors to revise certain aspects to make the paper easier to understand. Improving the clarity of the text would enhance the overall presentation of the work." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed method differentiates the weights to align with the target domain while preserving valuable information from the pretrained model. From this, the proposed method selectively fine-tunes the model.
This approach effectively addresses challenges when adapting large pretrained models to different domains, enabling better domain-specific performance without losing the benefits of the pretrained features.\n\nThe data scarcity problem addressed in this paper is a critical issue not only for semantic segmentation but also for a variety of vision tasks. Therefore, the proposed technique for generating image and ground truth pair datasets can be considered a core technology in the advancement of deep learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel approach to address data scarcity in semantic segmentation by generating datasets (image-mask pairs) using text-to-image models. To solve the issue, it is necessary that the generated images align with the target domain and provide useful information beyond the training dataset. Therefore, this paper introduces Selective LoRA, a finetuning approach for the pretrained text-to-image model that preserves the distributional diversity of the original pretrained model while aligning with the target domain. The proposed method selectively updates weights for key concepts, such as style and viewpoint, which need to be aligned or to maintain the diversity. The authors show via ablation studies that the proposed method generates datasets with the desired distribution, and that it improves performance in both in-domain and domain generalization settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe that the proposed method tackles an important problem and offers a reasonable approach to addressing the challenges, which I highly commend. However, there are some weaknesses to consider.\n\nFirst, the method relies on Stable Diffusion, trained on a large dataset.
Although leveraging the distributional diversity learned by this pretrained model is the motivation behind the approach, it inherently sets an upper bound on the applicability of the proposed method based on the knowledge of the pretrained model. This is a fundamental limitation.\n\nDefining the desired concepts, identifying the critical parts of the architecture where these concepts are expressed, and retraining the model are all highly manual processes that depend heavily on the individual characteristics of the target data. To define the desired concepts, the user of this method must analyze the distributions of the pretrained model and the target domain, identify the differing concepts, and guide the process with appropriate text prompts. Additionally, finding the associated weights and determining their importance requires experimental work, which lacks standardized criteria.\n\nGenerating images aligned with the desired distribution is crucial, but creating high-quality masks to accompany these images is equally important for semantic segmentation models. This aspect has not been sufficiently addressed. While the current method leverages intermediate features, there could be consideration of various other ways to generate masks from the images. Given that the proposed method utilizes a model trained on a large dataset, it might also be worth exploring the use of models like SAM (Segment Anything Model) for mask generation (of course, there are lots of candidates). I am not requiring additional experiments using SAM. It would be beneficial to analyze the quality of the masks for the generated images. \n\nFrom a presentation standpoint, the paper is challenging to read. Figures 2 and 3 are difficult to understand, and it is not easy to infer the intended meaning from the related sections in the text. Additionally, the paper mentions L_Concept, but the figures use terms like L_style and L_viewpoint, which are not defined in the main text, causing confusion.
The authors should clarify this and revise the figures accordingly. More detailed explanations about the process of creating text prompts are also necessary.\n\nThe experimental setup, including the ablation study, is not sufficiently explained. For instance, in experiments like those in Table 3, it is unclear how extensive the generated dataset is and how it is used.\n\nOverall, the paper needs to be written in a way that is easier to understand." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Selective LoRA, a novel fine-tuning method that generates well-aligned and informative segmentation datasets by updating only weights related to desired concepts. We improve existing urban-scene segmentation models in various settings." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024selective,\ntitle={Selective Lo{RA} for Domain-Aligned Dataset Generation in Urban-Scene Segmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2TiU1JTdSQ},\nnote={under review}\n}" }, "abstract": { "value": "This paper addresses the challenge of data scarcity in semantic segmentation by generating datasets through fine-tuned text-to-image generation models, reducing the costs of image acquisition and labeling. Segmentation dataset generation faces two key challenges: 1) aligning generated samples with the target domain and 2) producing informative samples beyond the training data. Existing methods often overfit and memorize training data, limiting their ability to generate diverse and well-aligned samples. To overcome these issues, we propose Selective LoRA, a novel fine-tuning approach that selectively identifies and updates only the weights associated with necessary concepts (e.g., style or viewpoint) for domain alignment while leveraging the pretrained knowledge of the image generation model to produce more informative samples. 
Our approach ensures effective domain alignment and enhances sample diversity.\nWe demonstrate its effectiveness in generating datasets for urban-scene segmentation, outperforming baseline and state-of-the-art methods in in-domain (few-shot and fully-supervised) settings, as well as domain generalization tasks, especially under challenging conditions such as adverse weather and varying illumination, further highlighting its superiority." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dataset Generation", "Urban-scene Segmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2d54cc9c7c34de08be4a4734b3344660c4f68e12.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Selective LoRA for Domain-Aligned Dataset Generation in Urban-Scene Segmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2TuUXtLGhT
Long-Context Linear System Identification
main
Active
autoregressive;linear;statistics;low rank;mispecification
learning on time series and dynamical systems
3;3;6;8
4;3;4;3
2;2;3;3
2;3;3;3
2;2;3;4
5
3.5
2.5
2.75
2.75
-0.235702
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1) In Section 5, the authors emphasize that the sample complexity bounds derived remain unaffected by mixing times, highlighting the \"learning-without-mixing\" result. Could the authors further discuss the slow-mixing setting (i.e., when the system is marginally stable)? Would the learning rates deteriorate? \n\n2) The misspecification results in Section 4 suggest that shorter context lengths can still capture useful structure in long-context systems. Could the authors provide insights into specific applications where such misspecified models are particularly advantageous?\n\n3) I am curious about how coordinate descent minimization could be used to learn $P^\\star$ in polynomial time for this setting of long-context linear system identification and the implications of non-isotropic data when updating $P$. \n\n**Minor**: The abbreviation for Ordinary Least Squares (OLS) is used early but is only formally defined in Section 3.2." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**Clarity of exposition:** The paper is well-written and well-organized, and systematically introduces the problem setting, contributions, and theoretical derivations. 
Definitions and assumptions are clearly stated, and the logical progression through each theoretical component makes the paper easy to follow.\n\n**Intuitive and well-discussed results:** The concept of \"learning-without-mixing\" is well-motivated by the authors. This result aligns with the literature on \"learning-without-mixing\" for linear systems. In particular, the authors show that for long-context linear system identification, where long contexts naturally entail strong sample dependencies, this does not necessarily inflate the bounds. Moreover, the low-rank representation learning setting and misspecification scenarios are well-explained, with clear justifications for how each condition affects the error bounds.\n\n**Theoretical contribution:** The theoretical contributions are significant, providing error bounds that extend classical linear system identification results to long-context models, and the learning rates align with the learning-without-mixing literature for linear systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the problem of identifying long-context linear systems where the state at any given time depends on a sequence of previous states over an extended context window. In contrast to traditional linear system identification, which typically assumes first-order dependencies, this paper focuses on autoregressive processes of order \n$p>1$. The authors establish sample complexity bounds, demonstrating a \"learning-without-mixing\"-type result. In particular, they show that slow mixing does not inflate their learning rates. In addition, the authors further extend their results to the setting where the long-context linear model admits low-rank representations. They also explore the implications of context length misspecification." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Misspecification Results and Assumptions:** Section 3.4, particularly Assumption 3.9, imposes a constraint on the misspecified model that may be too restrictive for practical applications. The requirement that $|| (MA^\\star - MA^\\star_{1:p'})L^\\star ||_{\\text{op}} \\leq D'$ implies that misspecification must remain controlled to a certain degree. The authors could discuss the limitations of Assumption 3.9 if this assumption does not hold in practical settings or offer heuristics for relaxing this constraint." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics concerns were found." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Some minor questions and suggestions include:\n\n1. The first question is about the claim in line 379 that \"Importantly, the constant $C$ and the logarithmic terms are independent of the mixing related quantity $\\text{max}(1/1-\\rho,p).$ Here, $\\rho$ is the operator norm of $M_{\\textbf{A}^\\star}$\". However, as stated in line 263, the explicit constant $C(\\delta)$ depends on the diameter $D$ which is one of the constraints listed in Assumption 3.4 where it is assumed that the operator norm of $M_{\\textbf{A}^\\star}$ is less than or equal to $D$. These two statements seem to be contradictory. Can you elaborate on these dependencies?\n\n2. 
In Equation (14), the variable $E$ is not clearly defined in the main text although it is available in the appendix. It may be helpful to add it in the main text to prevent confusion.\n\n3. The last inequality in Equation (18) appears to be a typo.\n\n4. There are multiple typos in the appendix, and it would be helpful to do a careful revision of the text. For example, line 1184 formatting, line 1287 \"martices\", line 1378 \"rearrainging\", to name a few. \n\n5. Considering the main results depend strongly on the condition number of $L_\\star$, it would be helpful to include discussions about how this condition number typically behaves. For example, how does this condition number relate to the condition number or singular values of a matrix $A$ if $A_1^\\star = A_2^\\star = \\ldots = A$? How does it behave if $A_i^\\star$ have elements sampled i.i.d. from normal distributions? Does the condition number of $L_\\star$ also depend on $T$? \n\n6. While in Equation (8) it is stated that the result depends on polylog$(\\kappa)$, can you elaborate on whether this result depends on $\\log (\\kappa)$, or is it actually dependent on O($\\kappa$) or other polynomials of $\\kappa$? It is not immediately obvious from the proof, however, in many contexts theoretical guarantees of estimators are linearly related to logs of condition numbers. Since $\\kappa$ is already the log of the condition number, I think it makes sense to clarify this dependency." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "While the topic of linear identification is certainly not new, the theoretical results developed in this paper are novel. The authors clearly stated problem formulations, main results and motivations. Overall, the paper was well written with an enjoyable read." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper considers the linear system identification problem under a long context framework. More specifically, the paper presents:\n\n1. A main result on a theoretical guarantee of the constrained least squares estimator under mild assumptions on the design matrix and sub-gaussianity of noise. This result is shown to parallel previously existing results in the i.i.d. setting under some additional logarithmic factors.\n2. An extension of the main result to a low rank setting, showing an improved statistical rate depending on the rank constraint.\n3. A further extension of the main result to the case of misspecified context windows, suggesting partial learning occurs for misspecified models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several minor questions and suggestions regarding confusions in the main text (these are deferred to the questions section below). Experiments were minimal and only provided in the appendix." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "(1) The authors discuss how their results do not depend on the mixing time of the Markov Chains involved. 
The authors can provide better intuition on how they need not consider mixing time in the non-asymptotic bounds obtained.\n\n(2) Can the authors provide more details on the simulations reported? The problems being considered do not admit closed-form solutions and include non-convex problems, and thus should be difficult to solve. How are these challenges reflected in the simulations section?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The results provide non-asymptotic bounds on three problems that are well motivated; earlier works have not covered the case where the process has dependency on the past over a context length." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The article focuses on the problem of identifying the $A_k$ matrices for $k=1,\\ldots,p$ where $p$ is the context length when the data is being generated via\n\n$$ x_t=\\sum_{k=1}^p A_k^* x_{t-k}+\\xi_t $$\n\nwhere $A_1^*,\\ldots,A_p^*$ are $d\\times d$ matrices and $\\xi_t$ is i.i.d. noise. 
The article discusses three problems:\n\n$(1)$ Minimizing the empirical loss function under an induced-norm bound on the $A^*$ matrices, where the loss function is given by\n\n$$\\ell({\\bf A})=\\frac{1}{NT}\\sum_{n=1}^N\\sum_{t=p}^T\\left\\| x_t^{(n)}-\\sum_{k=1}^p A_k x_{t-k}^{(n)}\\right\\|.$$\n\nHere, $T$ is the length of the trajectory and $N$ trajectories are collected.\n\n$(2)$ Minimizing the empirical loss function with induced-norm constraints on the $A_k$, with an added rank constraint on the $A_k$.\n\n$(3)$ The same loss function is minimized with induced-norm constraints on the $A_k$ matrices and a bound that captures a context length $p'$ which is smaller than the actual context length $p.$\n\nFor all three problems above, the article provides non-asymptotic bounds on the Frobenius norm of the error of the estimates with respect to $A_k^*$ that result from an optimal solution to problems (1), (2) and (3). The authors provide a discussion of why standard approaches of lifting the state face challenges, and provide reasons why the bounds they have obtained are independent of the time taken by the Markov chain to reach any steady-state distribution. The authors further comment on stability conditions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The non-asymptotic bounds are not with respect to any specific algorithm that takes data and solves the related optimization problems. The authors indicate possible approaches for solving these problems but do not analyze any specific algorithm; however, it stands to reason that the sample complexity will depend on the approach being taken. The rank-constrained problem is particularly challenging, as it is not convex. The authors assume an optimal solution to the problems. 
The authors need to comment on whether advances with respect to other works are also in the same spirit or if they analyze known solutions to the optimization problems. Proper justification of the utility of the results needs to be provided if the article provides analysis assuming the existence of the optimal solution.\n\n(2) The mathematics is presented in a dense manner; here the approach and the problem description can be better presented and explained. In the Sketch of the Proof, some of the matrices such as $E$ are not defined in the main body (it is defined in the Appendix; however, the main body should be self-contained, and the definitions are buried deep in the Appendix). Some suggestions are to show the matrix operations in more detail to help a reader along." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How do you select the proper rank when it's not known?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper provides a low rank approach for long-context linear system identification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides a low rank approach for long-context linear system identification." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It's unclear how to tune the rank when we don't know it's true value (e.g. real dataset)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper provides sample complexity for long-context linear dynamical systems." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024longcontext,\ntitle={Long-Context Linear System Identification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2TuUXtLGhT},\nnote={under review}\n}" }, "abstract": { "value": "This paper addresses the problem of long-context linear system identification, where the state $x_t$ of the system at time $t$ depends linearly on previous states $x_s$ over a fixed context window of length $p$. We establish a sample complexity bound that matches the _i.i.d._ parametric rate, up to logarithmic factors for a broad class of systems, extending previous work that considered only first-order dependencies. Our findings reveal a ``learning-without-mixing'' phenomenon, indicating that learning long-context linear autoregressive models is not hindered by slow mixing properties potentially associated with extended context windows. Additionally, we extend these results to _(i)_ shared low-rank feature representations, where rank-regularized estimators improve rates with respect to dimensionality, and _(ii)_ misspecified context lengths in strictly stable systems, where shorter contexts offer statistical advantages." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "autoregressive", "linear", "statistics", "low rank", "mispecification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/aadece8180a69f680555a80b72a8db2380236d90.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Long-Context Linear System Identification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2U8owdruSQ
Has the Deep Neural Network learned the Stochastic Process? An Evaluation Viewpoint
main
Active
evaluation;deep neural network;stochasticity;complex systems;forecasting
other topics in machine learning (i.e., none of the above)
5;5;5;8;8
2;2;3;5;4
2;3;2;3;4
2;2;2;1;4
3;1;2;3;4
6.2
3.2
2.8
2.2
2.6
0.910182
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- I don't understand the second part of the critical question, \"is it encountering different stochastic behaviours\" (different from what)? how is the \"differentness\" relevant?\n- While it's pretty clear to me how to use this immediately in my work, I think anyone who wasn't already aware they wanted exactly this might struggle. Could you provide something like a \"practical users guide\" for non-domain experts?\n - if the clarity of the plots can be improved, the naming of the stat/metric you're introducing, and improve it's \"usability\" to the community, I would be happy to upgrade my score. You've done great work and this would bring the paper to the level it deserves." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Great paper, wonderfully practical and insightful; I've been looking for something like this for 5+ years! 
Nice eval on real-world data.\nI started writing a thing I would like you to add and then discovered it was already in the paper (long horizon behaviour)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The metric of expected calibration error is introduced and studied as a way to capture fidelity of a learned representation to an underlying stochastic process (rather than a single realization of that process, as with typical metrics like AUC or MSE)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While overall the paper is very clear, some of the captions and explanations of the experiments/insights from them and how they tie to the figures could be improved. \nSome specifics:\n- first fig should say what you mean by realization, and F2R and F2SP should be bolded (not ital) to make them easy to find in the text. Observed GT should be explained a bit more, or maybe it would be enough to move the sentence currently after F2R to be the second sentence of the paragraph.\n - Fig 5 is unclear to me. What is the data, what is S-level, why is it \"good\" that the 20 vs 10 lines are far apart? All of this should be clear from the caption.\n - the clarity wanes a bit as the paper goes on, and it's a bit confusing that you call it ECE vs. F2SP vs Statistic-GT. Do these different namings really serve something? It could be a lot more clear if you just have one naming." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "A question arises regarding Figure 1: Can ECE be an effective metric for measuring F2R compared to other available metrics? Figure 1 suggests that the answer may be *no*.\n\nAn important indicator that ECE is a reliable measure is its diagonal pattern, showing low scores only when training and test S-Levels align, as illustrated in Figure 4. Could the authors provide theoretical insights to support this indicator?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The topic of evaluating DNNs within stochastic complex systems is both intriguing and important.\n\nIn the primary evaluations, the author conducted experiments across various settings, including different DNN architectures, comparisons with multiple evaluation metrics, and diverse simulation tasks.\n\nThe main text clearly explains the difference between ECE in classical assessment and stochastic process settings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a study evaluating deep neural networks (DNNs) within stochastic complex systems, emphasizing the importance of Expected Calibration Error (ECE) in measuring fidelity to stochastic processes. The findings are validated through multiple experiments and comparisons." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper is somewhat difficult to follow. For example, providing a brief introduction to the structure of each section would enhance clarity, particularly in Sections 2 and 3. 
Additionally, it is difficult to grasp the main messages conveyed by the table in Figure 2(b). Furthermore, in lines 229–240, the macro-level concept is introduced abruptly, which may disrupt the clarity and readability of the main text.\n\nThe main findings' practical applicability appears limited. In real-world scenarios, data generally provides only a single observed outcome centered on observable ground truth (line 117). Since the primary evaluation is simulation-based, the controlled stochasticity falls short of capturing real-world complexity. The Statistic-GT is basically derived by normalizing the frequency of target state occurrences across multiple Monte Carlo simulations.\n\nMinors:\n\nM1. The original text for the abbreviation RV is not given.\n\nM2. In Table 1, what about the possibility of recovery in the Host-Pathogen problem?\n\nM3. In line 152, maybe consider using an alternative symbol for Moore neighborhood, instead of $\\mathcal{N}$ (normally representing Gaussian distribution)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "L50: Is --> is (lowercase)\nFig1: no need to write the whole name, you can use acronyms because they're already defined in the text, however MSE is not defined at this point.\nL88: fidelity to realization --> F2R (it was already defined previously, so you can use the acronym)\nL99: the notation of the dimension of the real vector O_t is confusing, what is (R^n)^(H x W), is n = H x W? If so, make that explicit.\nTable 1: some rows end with full stop, others don't. Please make it consistent. Either all with or all without.\nI find it odd to place Figures in columns as Figure 1 (which has a large top white margin) and Figure 3. I would suggest combining the column figures into one row figure with multiple subfigures as you did with Figure 2. \nL201: Isn't the indicator variable already defined as B_t in L99? Why define it again with different notation?\nL298: MSE already defined in text previously, no need to write the whole name again.\nL516: ECE already defined in text previously, no need to write the whole name again.\nTable 2 and Table 7: highlight the best performing DNNs." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper makes a significant contribution by introducing the concept of Fidelity to Stochastic Process (F2SP), a novel evaluation criterion specifically designed to assess a DNN's ability to learn the underlying stochastic interactions in complex systems.\n\nThe authors provide a rigorous formalization of F2SP within a stochastic framework, establishing clear criteria for its valid measurement. 
The use of Expected Calibration Error (ECE) as an evaluation metric is well-justified." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a study on evaluating deep neural networks designed to forecast the evolution of stochastic complex systems. The authors identify a gap in traditional evaluation methods—such as threshold-based classification metrics and error-based scoring rules—which focus on a model's ability to replicate observed ground truth but fail to assess how well the model has learned the underlying stochastic process. To address this issue, they introduce a new property called Fidelity to Stochastic Process, representing the DNN's ability to predict the statistical ground truth of the stochastic process.\n\nThe paper proposes using the Expected Calibration Error (ECE) as an evaluation metric that satisfies the necessary conditions for assessing fidelity to statistical ground truth. This work underscores the importance of capturing the underlying stochastic processes in deep neural networks evaluations for complex systems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I found it hard to read the paper because there was a lack of consistency in the acronyms, the authors would redefine them in several parts of the text again and again. I addressed my comments on text in the questions section. \n\nIn the tables, the best neural networks based on each criterion are not highlighted, which makes it difficult to the reader to infer and correlate the arguments in the text. I addressed my comments on text in the questions section. \n\nThe focus of the paper is primarily on binary or discrete prediction tasks, leaving out regression tasks where the definition of calibration is more complex. 
While the authors acknowledge this and suggest it as an area for future work, the current scope limits the immediate applicability of the findings to a broader range of problems involving continuous outcomes.\n\nAdditionally, the use of the NDWS dataset, which is restricted to next-day predictions, prevents the assessment of ECE over longer time horizons, which are common in many complex systems. Could you elaborate on how future work might address this limitation?\n\nThe paper highlights the lack of open-source complex system datasets as a barrier to broader validation. Are there any ongoing initiatives or plans to develop, collect, or standardize such datasets?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned in the \"Weaknesses\" part." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper offers a new perspective on evaluating DNNs by considering DNNs as stochastic processes and uses a widely used criterion in Bayesian Deep Learning applications to assess the fidelity to the stochastic process. This work clearly explains how the Expected Calibration Error is used to assess DNN models in three synthetic cases and one real-world case."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work offers a new perspective on evaluating DNNs in stochastic complex systems by emphasizing the importance of capturing the underlying stochastic process. Traditional evaluation methods assess the DNN’s ability to replicate the observed ground truth but fail to measure the DNN’s learning of the underlying stochastic process. This paper proposes a new property called Fidelity to Stochastic Process, representing the DNN’s ability to predict the ground truth of the stochastic process, and introduces an evaluation metric that exclusively assesses fidelity to the ground truth of the stochastic process. The Expected Calibration Error is used to evaluate the fidelity to the ground truth of the stochastic process. Empirical experiments on synthetic datasets (including wildfire, host-pathogen, and stock market models) and real-world wildfire data are used to show the measurement of fidelity to the stochastic process by the Expected Calibration Error." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is well organized and well written; several minor issues should be addressed: (1) The explanation of the figures is not sufficient, e.g., in Figure 2 (1), the x-axis label is not specified (I guess it is time?); either add a label or explain it in the caption. Similar problems also exist in Figure 4. (2) This work examines ECE on three synthetic environments (forest fire, host-pathogen, and stock market models) and a real-world wildfire spread dataset. I can tell that these datasets are all multivariate, either for classification or regression. Maybe due to the page limit, the authors didn't include experiments on images. I suggest the authors add some discussion or comments in the paper."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I'm curious about how the author evaluates ECE at time $t$ based on Statistic-GT $P_{t}$. Do we have to simulate it again from $t=0$ $N$ times, or can we sample states from $t-1$ and go forward $N$ times (the system is Markov)? Can we still apply ECE on Statistic-GT when the system is not Markov?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Evaluating model fidelity on a stochastic system is significant and has wide applications.\n2. The paper is well-motivated, and both the dataset and experiments are thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel stochasticity-compatible evaluation strategy for assessing existing models in the context of complex systems. The author justifies the Expected Calibration Error (ECE) as suitable for assessing the model fidelity of stochastic systems through both simulation environments and real-world data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
Although the author attempts to explain the difference between their work and ECE in deep learning in Lines 282-288, it appears to me that this work is still a direct application of using ECE to evaluate model performance on a stochastic system. The author is encouraged to discuss in more depth the distinction between ECE in the proposed method (where the stochasticity comes from evolving in the environment, i.e., Statistic-GT) and ECE in previous works (where the stochasticity comes from the output distribution).\n2. In Lines 243-244, the author claims that Statistic-GT is more stable than classification-based metrics, but I could not find any evidence that calculating ECE on Statistic-GT is less sensitive to the system variance than MSE. Is there any theoretical support for using ECE over MSE on stochastic systems with different noise levels, and could the author clarify it a bit more?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel evaluation criterion to assess whether DNNs modeling stochastic complex systems have learnt the underlying stochastic process" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024has,\ntitle={Has the Deep Neural Network learned the Stochastic Process? An Evaluation Viewpoint},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2U8owdruSQ},\nnote={under review}\n}" }, "abstract": { "value": "This paper presents the first systematic study of evaluating Deep Neural Networks (DNNs) designed to forecast the evolution of stochastic complex systems. We show that traditional evaluation methods, such as threshold-based classification metrics and error-based scoring rules, assess a DNN's ability to replicate the observed ground truth (Observed-GT) but fail to measure the DNN's learning of the underlying stochastic process.
To address this gap, we propose a new property called *Fidelity to Stochastic Process (F2SP)*, representing the DNN's ability to predict the *Statistic-GT*—the ground truth of the stochastic process—and introduce an evaluation metric that exclusively assesses fidelity to Statistic-GT. We formalize F2SP within a stochastic framework and establish criteria for validly measuring it. We demonstrate that the Expected Calibration Error (ECE) satisfies the necessary conditions for evaluating fidelity to Statistic-GT. Empirical experiments on synthetic datasets—including wildfire, host-pathogen, and stock market models—show that ECE exclusively measures F2SP. We further extend our study to real-world wildfire data, highlighting the limitations of conventional evaluation and discussing the practical utility of incorporating F2SP into model assessment. This work offers a new perspective on evaluating DNNs in stochastic complex systems by emphasizing the importance of capturing the underlying stochastic process." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "evaluation", "deep neural network", "stochasticity", "complex systems", "forecasting" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3d15ba9c12c76f9d777203be175c2d81d2094d2b.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Has the Deep Neural Network learned the Stochastic Process? An Evaluation Viewpoint" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2UozyR49ZB
Learning a Bi-directional Driving Data Generator via Large Multi-modal Model Tuning
main
Active
multi-modality;synthetic data generation;auto-annotation;driving;LLM applications
applications to robotics, autonomy, planning
3;3;3;3;6
4;3;5;3;4
2;1;2;1;3
2;1;2;2;3
1;1;3;3;3
3.6
3.8
1.8
2
2.2
0.133631
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How well does Bi-Gen generalize to other driving domains beyond multi-car racing? Could the model effectively handle scenarios with more varied driving behaviors, such as urban or highway driving in Waymo dataset?\n2. How does Bi-Gen handle instances where trajectory descriptions or generation prompts are ambiguous or open-ended?\n3. How the map and trajectory being tokenized? Are you using global coordinates? Do you do any transformation on the input data? In multi-agent complex scenarios such as Waymo dataset, there are huge number of elements in the scenes: multiple agents with different types and rich features (shape, velocity, type etc), map features with different types (stop sign, different types of lanes etc), and even traffic light. So I am very concerning about whether the proposed method can be extended to other data format." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The model’s ability to handle both trajectory-to-language and language-to-trajectory generation tasks offers a novel approach to understanding and generating human driving behaviors. I like the idea of treating map and trajectory tokens as the same latent space as language.\n2. 
By incorporating lightweight encoders and a small language model (TinyLlama), Bi-Gen achieves annotation performance comparable to larger models like GPT-4o while remaining computationally efficient and suitable for real-time applications.\n3. Flexible Multi-turn Interaction: The model’s multi-turn question-answering framework supports dynamic, interactive annotations and diverse trajectory generation, demonstrating versatility in handling complex driving scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "1. The paper introduces Bi-Gen, a bi-directional multi-modal model for human driving data generation and annotation, particularly aimed at low-data domains like multi-car racing. \n2. Bi-Gen combines language-conditioned trajectory generation and trajectory-conditioned language generation, allowing it to serve both as an automated annotator and as a synthetic data generator. \n3. The model integrates a language model with lightweight encoders and decoders to map trajectories and static map data into a shared feature space, enabling it to interpret and generate diverse driving behaviors based on limited real data. \n4. Experimental results demonstrate Bi-Gen’s ability to match the annotation accuracy of larger models, like GPT-4o, while significantly reducing the data requirements for downstream tasks by generating high-quality synthetic data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experiments focus on a racing domain with specific trajectory types, which may not generalize well to broader driving scenarios or other real-world applications without additional testing. I want to know if it's possible to extend to multi-agent scenarios, for example Waymo or Nuplan scenarios.\n2. 
While the use of lightweight encoders and TinyLlama enhances efficiency, it might limit the model's capacity to capture finer details in complex, multi-modal interactions compared to larger models.\n3. Bi-Gen’s performance relies on well-defined question-answer and generation prompts, which may limit its adaptability to novel or unexpected queries in deployment.\n4. The paper does not explore the impact of different architectural choices (e.g., encoder and decoder sizes, tokenization approaches), which would strengthen understanding of the model's design trade-offs.\n5. Following point 4, the tokenization approaches for the map and trajectory are unclear. See questions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How would the authors justify the use of a simplistic experimental setup in the question-answering task? Does this truly showcase the LLM's auto-regressive generation capabilities?\n2. Have the authors considered collecting a larger and more diverse dataset, possibly including human-human or human-agent interactions, to better capture the complexity of driving behaviors?\n3. Can the authors provide more evidence to demonstrate Bi-Gen's generalization capabilities and address concerns about overfitting?\n4.
Would the authors consider moving some of the important details from the supplementary material to the main paper and removing redundant information to improve clarity and completeness?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The bi-directional approach allowing both trajectory description and generation within a single end-to-end framework is novel and interesting. Prior work has typically focused on only one direction.\n2. The motivation of learning a model that can comprehend and generate multi-modal human driving data, especially in low-data regimes like racing, is sound and the proposed methodology of embedding multi-modal inputs into an LLM's latent space makes intuitive sense.\n3. The paper is generally well-written, with a clear explanation of the model architecture, training process, and experimental setup. The figures help illustrate the approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Bi-Gen, a bi-directional large multi-modal model that enables both trajectory description (auto-annotation of driving data in language) and trajectory generation. The model leverages a pre-trained LLM and learns to embed multi-modal inputs (map, ego trajectory, opponent trajectory) into a shared latent space. The authors demonstrate Bi-Gen's capabilities on a racing car dataset, showing it can annotate trajectories comparably to GPT-4o and generate synthetic data to augment real datasets for downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the motivation and methodology are sound, the experimental setup seems too simplistic to fully validate the capabilities of Bi-Gen. 
The authors mention that there are only 19 possible answers in their question-answering task, which is more akin to a classification problem. This limited setup may not adequately demonstrate the LLM's ability to freely annotate trajectories in an auto-regressive manner. More open-ended annotation would be valuable.\n2. The dataset used for training and evaluation is relatively small, with only 877 trajectories collected. Moreover, the participants were racing against fixed trajectories rather than human players or other agents, which limits the diversity and complexity of the driving behaviors captured. A larger and more varied dataset would provide a more robust evaluation of Bi-Gen.\n3. Given the large capacity of LLMs, it is possible that Bi-Gen is overfitting to the training data. The authors do not provide sufficient qualitative results to assess the model's generalization capabilities. It would be beneficial to compare Bi-Gen with a baseline method that uses classification-based annotation and recurrent trajectory generation, rather than solely comparing it with GPT-4o.\n4. The binary classifier used to validate the quality of the generated trajectories may not be a strong indicator of performance if the data distribution is too simple. It is unclear whether the higher accuracy achieved by the classifier is due to the quality of the generated trajectories or the simplicity of the data distribution.\n5. The paper heavily relies on the supplementary material to provide important details about the methodology and results. Some of this information should be included in the main paper to improve clarity and completeness. Additionally, there is redundant information in the main paper, such as the repeated mention of the model pipeline components (system prompt, map, opponent trajectory, and ego-trajectory)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Did the authors consider different LLM backbones?\n2. What are the statistical details of the used dataset? The size, the distribution, and the collection platform?\n3. Similar to the second point in the weakness part, how to evaluate the quality of generated scenarios?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThe idea of using LLM to generate scenarios is an interesting and promising topic. Since LLMs have limitations on processing other modalities, finetuning LLMs is also a promising way to direct them to the generation task.\n2.\tThe paper is generally well-written and well-organized. Figure 1 clearly shows the training and generation processes of the proposed method. Figure 2 also describes the two generation tasks with model details." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper identifies that learning driving behaviors requires a lot of data with carefully labeled events, causes, and consequences. However, such data may be more difficult to obtain in rare driving domains, such as in high-performance multi-car racing. 
Therefore, this paper proposes Bi-Gen, which is a bi-directional multi-modal model that connects a trained encoder-decoder architecture with a pre-trained LLM, enabling both auto-annotation and generation of human driving behaviors. The experimental results show that Bi-Gen matches the performance of much larger models like GPT-4o in annotating driving data. Additionally, Bi-Gen generates diverse, human-like driving behaviors, offering a valuable tool for synthetic data generation in resource-constrained settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe examples shown in Figure 5 are quite confusing. First, the generated trajectories violate the vehicle dynamics and are off the road most of the time. The first point of the generated trajectory in the middle figure is behind the last point of the history trajectory. The first point of the generated trajectory in the right figure is too far away from the last point of the history trajectory. Second, it is hard to identify Spinout, Stay-behind, and Overtake in these figures. In summary, I think the generated trajectories have low quality and low realism.\n2.\tI feel the evaluation of this paper is quite limited. It seems that this paper only focuses on high-performance multi-car racing scenarios, as mentioned in the abstract. Even so, I think it is still important to show quantitative results of the average performance of the proposed method. However, the only numerical evaluation now is the overtake classification task shown in Figure 4. I think it is necessary to show the evaluation of realism, diversity, and instruction following. In addition, scenario generation has been a widely investigated area, which means it is easy to find comparable baseline methods, for example, LCTGen [1] and ProSim [2].\n3.\tThere is no evidence to show the benefit of using the generated scenarios for downstream tasks.
The only example is the overtake classification task. But I am not sure how valuable it is to identify whether a scenario is an overtake or not. I think it is more important to show that the generated scenarios help with the training and testing of autonomous agents in terms of performance and safety.\n\n---\n[1] Tan, Shuhan, Boris Ivanovic, Xinshuo Weng, Marco Pavone, and Philipp Kraehenbuehl. \"Language conditioned traffic generation.\" arXiv preprint arXiv:2307.07947 (2023).\n[2] Tan, Shuhan, Boris Ivanovic, Yuxiao Chen, Boyi Li, Xinshuo Weng, Yulong Cao, Philipp Krähenbühl, and Marco Pavone. \"Promptable Closed-loop Traffic Simulation.\" arXiv preprint arXiv:2409.05863 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Both the training and test data are collected within a single racing track by driving in simulators (as indicated in Appendix A). The number of trajectories collected is limited as well. How do you ensure that your model does not overfit to this narrowly defined domain?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The paper tackles the issue of interpreting and annotating unlabeled driving trajectories in the low-data domain of high-performance\nmulti-car racing."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Bi-Gen, a large multi-modal model designed to generate and annotate human driving data, particularly in complex racing environments with limited training data. It effectively handles both trajectory description and generation, demonstrating strong performance in comprehending driving behaviors. The study highlights the model's ability to produce realistic and varied driving scenarios, positioning it as a competitive alternative to larger models like GPT-4o." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The model's performance in more diverse and complex driving scenarios beyond the tested environments may require further exploration and validation.\n2. The evaluations appear to be inadequately conducted. The authors assert the existence of 19 potential answer classes; however, they report only the quantitative results from the overtaking prediction task. Furthermore, the zero-shot testing with GPT-4o is the sole baseline selected for comparison. There are also no numbers supporting the claim that training trajectory description and trajectory generation at the same time would be a more favorable approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weakness." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "An acceptable exploration in using LLMs to benefit understanding driving data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript presents an exploration of finetuning a TinyLLaMa for generating multi-modal human driving data to benefit the community. While the proposed solutions sound acceptable, the motivation, experiment results, readability, and method description should all be improved. So far the weakness outweighs the strengths." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Unmatched motivation and proposed solutions. To my understanding, the stated motivation in the abstract is the lack of multi-modal data and the high cost of obtaining labeled data for training, but the proposed solution is a model that can match the performance of LLMs but can be adopted in a resource-constrained setting. There is a gap between them. In the context of generating multi-modal data, why do we need a resource-constrained model? Do the authors try to generate data during driving, or deploy the generation model in the vehicle? If not, why is there a need to design such a model? \n\n2. Insufficient contribution. As the authors said between lines 88 and 97, the existing LLMs cannot fully comprehend the complicated multi-modal connections between trajectories and languages due to the lack of readily accessible world-knowledge. Hence, we would expect that with the proposed solutions, the generation performance should at least outperform the existing LLMs, despite the model size. But so far, the annotation performance is only compatible, and hence the proposed solution is not as effective as the authors claimed. \n\n3. 
Poor readability in terms of images and text. The images are not aligned with the text around them. For example, Fig. 3 is too far away from the text describing it. Fig. 4 is two pages away from the corresponding text. Readers may get confused by the images, find it difficult to locate the corresponding text, and hence fail to follow.\n\n4. Unclear method description. Maybe I am missing something. In the trajectory generation part, the loss is the auto-regressive loss between the generated trajectories and the actual ones. If the authors aim to fine-tune the model based on this task, then we are assuming that the model is the core component controlling how the trajectories are generated. But as we include human language or prompts here, is it possible that the inputs are affecting the generation performance? Do we consider any loss in terms of the discrepancy between the generated trajectories and what the prompts ask for?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning a Bi-directional Driving Data Generator via Large Multi-modal Model Tuning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2UozyR49ZB},\nnote={under review}\n}" }, "abstract": { "value": "Understanding human driving behaviors is crucial for developing a reliable vehicle and transportation system. Yet, data for learning these behaviors is scarce and must be carefully labeled with events, causes, and consequences. Such data may be more difficult to obtain in rare driving domains, such as in high-performance multi-car racing. While large language models (LLMs) show promise in interpreting driving behaviors, the integration of multi-modal inputs (e.g., language, trajectory, and more) and generation of multi-modal output in low-data regimes remains under-explored.
In this paper, we introduce Bi-Gen: a Bi-directional Driving Data Generator. Bi-Gen is a bi-directional multi-modal model that connects a trained encoder-decoder architecture with a pre-trained LLM, enabling both auto-annotation and generation of human driving behaviors. Our experiments show that Bi-Gen, despite its smaller size, matches the performance of much larger models like GPT-4o in annotating driving data. Additionally, Bi-Gen generates diverse, human-like driving behaviors, offering a valuable tool for synthetic data generation in resource-constrained settings. Taken together, our experiments are a significant step towards applying LLMs to complex, multi-agent driving data." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multi-modality", "synthetic data generation", "auto-annotation", "driving", "LLM applications" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d70643550f7c50cdc62e362559ddab4fc19da919.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning a Bi-directional Driving Data Generator via Large Multi-modal Model Tuning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2VhFZPYqjE
How to Get Your LLM to Generate Challenging Problems for Evaluation
main
Active
Evaluation;Synthetic data;Benchmarking;Question Answering;Code Generation;Math Reasoning
datasets and benchmarks
3;3;6
4;2;4
2;3;3
2;2;3
3;3;3
4
3.333333
2.666667
2.333333
3
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "--The paper is written with impressive quality, especially the figures and the elucidation of the problem's motivation and challenges. \n--The authors consider three sufficiently diverse tasks and benchmarks to showcase the utility of their approach.\n--The results are fairly compelling, and the benchmark indeed succeeds in yielding performance drops even from advanced models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce CHASE, a unified framework to synthetically generate challenging problems \nusing LLMs without human involvement. For a given task, the approach builds a hard problem\n in a bottom-up manner from simpler components. It decomposes the generation process into\nindependently verifiable sub-tasks to ensure a high level of quality and correctness.\n\nCHASE is designed to address two challenges that the authors state succinctly on pages 1 and 2:\nfirst, how can it be used to create hard and realistic problems, and secondly, how can it be used\nto automatically verify the correctness of the generated data? 
This second challenge is especially\nprevalent in other work of this nature that attempts to construct synthetic evaluation benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "--Experimental results could have been covered in more depth in the main text. It is for this reason that I am not inclined to give the paper a stellar rating. \n--The approach is simple and has some nice properties, but I am not too sure about its sensitivity and robustness. I felt inadequate attention was paid to this in the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why does Figure 1 only provide an overview of constructing CHASE-QA and CHASE-Math, but not CHASE-Code? I believe all three should be at the same hierarchical level.\n2. Without human verification, how can we ensure that the data in the CHASE-QA, CHASE-Code, and CHASE-Math datasets are correct? Is there a possibility that the ground-truth/golden answers in the datasets themselves are incorrect?\n3. In lines 333-340, you mentioned that approximately 33% of the data was filtered out through sampling and self-consistency, and subsequent experiments (e.g., Table 2) suggest that CHASE-QA generates more challenging data. I find this unconvincing. If the 33% of the data were added back, how would the experimental results change? 
Would you still claim that CHASE-QA is a more challenging dataset?\n4. From the examples given in the paper, CHASE-Math seems to concatenate a series of atomic problems. Intuitively, if the tested LLMs reason and calculate sentence by sentence, the accuracy may be significantly higher than under your current naive prompt. Could you elaborate further on how CHASE-Math is more challenging, given the point I raised?\n5. What is the motivation behind the experiments in lines 469-477 and lines 486-493? In my understanding, the \"Impact of context size\" is not the focus of this paper. Also, the experiment in lines 486-493 only fine-tuned weaker models. Would the same conclusion apply to fine-tuning stronger models?\n6. Could you provide some comparative experiments between the CHASE dataset and other synthetic datasets, such as a comparison between CHASE-QA and existing long-context benchmarks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The problem addressed by this paper is critical to the evaluation of current LLMs -- the lack of comprehensive and challenging datasets.\n- The paper is well-structured, with comprehensive appendices, such as a detailed list of prompts used in CHASE.\n- This paper presents a novel paradigm for data construction, which may have significant potential in the field of synthetic data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To address the high cost of manually labeled data and the low quality of synthetic data, this paper proposes the CHASE framework. CHASE is a framework for automatically generating QA pairs, code problems, and math problems. It adopts an innovative bottom-up structure and divides the overall task into individually verifiable sub-tasks. 
This allows a seed problem to progressively increase in difficulty through multiple rounds of generation and verification. Experimental results show that the data generated by CHASE have a certain degree of difficulty." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some issues with the details of the paper. For example, in the main figure (Figure 1), the bottom-right corner should say \"12 pens\" instead of \"18 pens.\"\n- The current dataset is relatively small, which may result in a high degree of randomness in evaluation results when using this dataset.\n- The experiments are not sufficiently thorough. Some experimental designs lack strong motivation, and there is a lack of experiments that demonstrate the advantage of CHASE over other synthetic data generation methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How do you organize functions into files to build repositories from scratch in CHASE-CODE?\n2. Could you specify more details on rejection sampling?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The experiments are comprehensive, with a good set of LLMs covering representative proprietary and open-source models. 
\n2. The paper is well-written and clearly describes the methods, experiments, and results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces CHASE (CHallenging AI with Synthetic Evaluations), a framework for generating challenging evaluation benchmarks using large language models (LLMs). The authors implement CHASE to create benchmarks in three domains: document-based question answering, repository-level code completion, and math reasoning. Experiments with 15 LLMs show that the generated benchmarks are challenging, with even top models achieving only 40-60% accuracy across domains. The authors demonstrate CHASE's utility in differentiating between state-of-the-art models and revealing performance drops with increasing context length. They argue that this approach offers advantages in scalability, renewability, and ability to evaluate tasks difficult for humans to assess, while providing high-quality, challenging problems for LLM evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although overall I believe it is valuable to explore data synthesis for benchmark construction, I think the authors should be more careful in selecting appropriate settings. I think the most important motivation for this paper is that it is expensive and sometimes impracticable to create benchmarks with challenging problems. However, in some settings presented in the paper, I feel that this may not be the case. For example, SWE-bench [1] also focuses on repo-level code generation, and they take existing Github issues as queries, and the modifications made by real users as the ground truth. The current state-of-the-art performance is only 43% on the leaderboard, which indicates its difficulty. Compared to CHASE-CODE, I think the pipeline used in SWE-bench is a better way to collect repo-level code generation data.\n2. 
To demonstrate that this pipeline is scalable, I think it is important to generate data at a large scale and apply it to training. If the API cost is a concern, I think the authors can use open-source models, e.g., Llama-70B. \n3. Typo in Figure 1: Jill has 12 pens in the bottom right corner. \n4. In lines 443-444, I don’t quite understand why better performance of models different from the generator and verifier can indicate better data quality.\n5. One advantage of CHASE claimed by the authors is to mitigate data contamination, but I think this may not be a big concern for challenging benchmarks that involve intensive reasoning. For example, even if codellama [2] has been intensively trained on Github data, its performance is still low on SWE-bench (which uses the Github data).\n\n[1]. Jimenez, Carlos E., et al. \"Swe-bench: Can language models resolve real-world github issues?.\" arXiv preprint arXiv:2310.06770 (2023). \\\n[2]. Roziere, Baptiste, et al. \"Code llama: Open foundation models for code.\" arXiv preprint arXiv:2308.12950 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a framework for synthetically generating challenging problems to evaluate LLMs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024how,\ntitle={How to Get Your {LLM} to Generate Challenging Problems for Evaluation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2VhFZPYqjE},\nnote={under review}\n}" }, "abstract": { "value": "The pace of evolution of Large Language Models (LLMs) necessitates new approaches for rigorous and comprehensive evaluation. Traditional human annotation is increasingly impracticable due to the complexities and costs involved in generating high-quality, challenging problems, particularly for tasks such as long-context reasoning. 
Moreover, the rapid saturation of existing human-curated benchmarks by LLMs further underscores the need to develop scalable and automatically renewable evaluation methodologies. In this work, we introduce **CHASE**, a unified framework to synthetically generate challenging problems using LLMs without human involvement. For a given task, our approach builds a hard problem in a bottom-up manner from simpler components. Moreover, since we want to generate synthetic data for evaluation, our framework decomposes the generation process into independently verifiable sub-tasks, thereby ensuring a high level of quality and correctness. We implement CHASE to create evaluation benchmarks across three diverse domains: document-based question answering, repository-level code completion, and math reasoning. The performance of state-of-the-art LLMs on these synthetic benchmarks lies in the range of 40-60\% accuracy, thereby demonstrating the effectiveness of our framework at generating hard problems. Our experiments further reveal that the Gemini models significantly outperform other LLMs at long-context reasoning, and that the performance of all LLMs drastically drops by as much as 70\% when we scale up the context size to 50k tokens." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Evaluation", "Synthetic data", "Benchmarking", "Question Answering", "Code Generation", "Math Reasoning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d39fa44d1794322cd7d223fc07d5079aed7e9ba0.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/98873f3e6e494ea208ecdc1dbc67d23a6b6bbae7.zip" }, "title": { "value": "How to Get Your LLM to Generate Challenging Problems for Evaluation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2VmB01D9Ef
AutoHijacker: Automatic Indirect Prompt Injection Against Black-box LLM Agents
main
Active
Large Language Model;Prompt Injection Attack;LLM Agent
alignment, fairness, safety, privacy, and societal considerations
3;3;5;6
4;2;4;4
2;3;3;3
2;2;2;3
3;3;2;3
4.25
3.5
2.75
2.25
2.75
0.555556
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1 AutoHijacker is composed of two stages, including a training stage and a test stage. Therefore, my question is how the authors divide the training data and the test data in their experiments.\n\n2 AutoHijacker needs three assistant LLMs, including a prompter, an attacker, and a scorer. My question is how to choose those models in the authors' experiments. Will a stronger attacker bring a higher ASR?\n\n3 The authors show that AutoHijacker can attack GPT-4o. How about other models such as Claude and Gemini?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1 This paper is easy to follow.\n\n2 The experiments are quite solid.\n\n3 The soundness of the proposed method is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose AutoHijacker, an automatic indirect black-box prompt injection attack. The results on two benchmark datasets indicate that it can be effective against both open-source and closed-source models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 My biggest concern is the novelty of the proposed method. 
In Table 1 and Table 2, the results indicate that AutoHijacker can achieve outstanding performance. However, the technical contribution only includes a batch-based optimization framework and a trainable memory. It is a little marginal to me. However, I am open to this problem and delighted to further discuss it with the authors and other reviewers.\n\n2 Details of the baseline attacks are needed. As far as I know, baseline methods such as PAIR are sensitive to various settings. Therefore, more details should be provided to demonstrate that the comparison is fair." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. When constructing N training data points, does the study explore the success probability of attacks in relation to different attack goals, variations in external data, and user instructions? Could the testing phase generate specific attack targets based on different query types and attack categories?\n2. How does the scorer LLM contribute to optimization performance, and could its role be discussed in more detail?\n3. What is the source and collection methodology for the meta prompts used in the training process?\n4. How do the hyperparameters k_top and k_bottom affect model performance, and could a more thorough analysis of these parameters improve the method's robustness?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The batch-based optimization moves beyond single-injection attacks by utilizing multiple, diverse data points, effectively addressing the sparse feedback issue that typically limits indirect prompt injection attacks.\n2. The method shows state-of-the-art performance across multiple benchmarks, surpassing other attacks, and demonstrates high success on a real-world LLM agent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a black-box prompt injection method that leverages LLMs as optimizers to inject prompts indirectly into LLM agents, utilizing minimal feedback and a trainable memory framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Text and images need better presentation. The \"Epochs\" labels in the figures need improvement for better readability. Terms like M_{i,n}, D_{i,n}, S_{i,n} are used inconsistently, which detracts from understanding.\n2. The paper could further explore the use of diverse victim LLMs within the optimization process, examining how this might impact transferability across models or scales. Does the size or type of this victim LLM affect the overall results?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The overall idea of the paper does not appear to be novel. The core concept still revolves around LLM-as-optimizers, which uses LLM responses to optimize attack prompts. This makes the paper's contribution seem somewhat incremental.\n\n2. The evaluation results need further refinement. The paper describes the “combined attack” as a grey-box attack, but in practice, it’s often easy to know the purpose of an LLM application (especially for task-specific LLMs) and craft fake answers accordingly. Constructing a \"combined attack\" requires no optimization, which is much more efficient than AutoHijacker. Notably, the paper mentions a log length of 30, implying that a successful AutoHijacker attack requires at least 30 optimization iterations. Yet, the results show that AutoHijacker only achieves comparable performance to the combined attack. This suggests that the proposed attack is significantly less efficient.\n\n3. The authors consider various defenses in Table 3, yet these defenses have been shown to be relatively ineffective in [1]. Why not test your attack against more robust defenses, such as Known-Answer Detection [1] or StruQ [2]?\n\n[1] Formalizing and Benchmarking Prompt Injection Attacks and Defenses\n\n[2] StruQ: Defending Against Prompt Injection with Structured Queries\n\n4. I recommend including visual examples of AutoHijacker attacks to make the paper easier to understand. For instance, illustrations of specific attack strategies and guides used in the first step, \"Meta Prompt Generation,\" would be helpful." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper analyzes the limitations of previous LLM-as-optimizers-based methods and proposes improvements to address them.\n\n2. The proposed attack is black-box, making it applicable to certain closed-source LLMs, and therefore more broadly applicable than white-box attacks.\n\n3. Experiments are conducted on two different benchmarks, comparing the effectiveness of various attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces AutoHijacker, an automatic black-box prompt injection attack. Built on the concept of LLM-as-optimizers, AutoHijacker constructs an attack memory through batch-based optimization and selects the most effective prompt injection case during the attack. Experimental results show that AutoHijacker outperforms previous attacks in effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The contributions of the paper appear to be incremental.\n\n2. The improvement in the results does not seem significant, especially in comparison to the combined attack.\n\n3. The paper lacks evaluation against effective defenses." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work presents AutoHijacker as an automated black-box indirect prompt injection attack, which bridges the current research gap.\n- The work does a good job of presenting the challenge of sparse feedback in indirect prompt injection tasks, and solves it in a simple and reasonable way.\n- The results are promising, with improvements over existing attacks on several LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces AutoHijacker, an automated black-box indirect prompt injection attack. It leverages the concept of LLM-as-optimizers. Specifically, it introduces a batch-based optimization framework to handle sparse feedback and also leverages a trainable memory to enable the effective generation of indirect prompt injections without continuous querying. Experiments are done on two benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I didn't see major flaws in the work and think it would be a good contribution to the community. I only have some questions for the authors regarding the evaluated defenses:\n- The author did a great job in including defenses from the benchmarks. But I'm still curious how some state-of-the-art defenses would fare against the attack: for example, in the work [Yi et al.], they show their white-box defense can reduce indirect prompt injection attacks to nearly zero. 
Would the attack also work for such kinds of LLMs (optimized for defending against indirect prompt injection attacks)?\n- I would recommend that the author, when introducing the concept of LLM-as-optimizer, explain a little bit more before jumping into the challenge of sparse feedback.\n\nMinor:\n- missing \".\" line 185" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce an automatic black-box indirect prompt injection attack against LLMs and LLM agents." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024autohijacker,\ntitle={AutoHijacker: Automatic Indirect Prompt Injection Against Black-box {LLM} Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2VmB01D9Ef},\nnote={under review}\n}" }, "abstract": { "value": "Although Large Language Models (LLMs) and LLM agents have been widely adopted, they are vulnerable to indirect prompt injection attacks, where malicious external data is injected to manipulate model behaviors. Existing evaluations of LLM robustness against such attacks are limited by handcrafted methods and reliance on white-box or gray-box access—conditions unrealistic in practical deployments. To bridge this gap, we propose AutoHijacker, an automatic indirect black-box prompt injection attack. Built on the concept of LLM-as-optimizers, AutoHijacker introduces a batch-based optimization framework to handle sparse feedback and also leverages a trainable memory to enable effective generation of indirect prompt injections without continuous querying. Evaluations on two public benchmarks, AgentDojo and Open-Prompt-Injection, show that AutoHijacker outperforms 11 baseline attacks and achieves state-of-the-art performance without requiring external knowledge like user instructions or model configurations, and also demonstrates higher average attack success rates against 8 different defenses. 
Additionally, AutoHijacker successfully attacks a commercial LLM agent platform, achieving a 71.9% attack success rate in both document interaction and website browsing tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "Prompt Injection Attack", "LLM Agent" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9435633642f14c8c1558ac847dedaa58bb585fdb.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "AutoHijacker: Automatic Indirect Prompt Injection Against Black-box LLM Agents" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2XBPdPIcFK
Steering Language Models with Activation Engineering
main
Active
interpretability;steering;alignment;safety;sentiment
alignment, fairness, safety, privacy, and societal considerations
3;3;6;8
4;4;4;4
2;1;3;3
2;2;3;4
2;2;3;3
5
4
2.25
2.75
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Can you elaborate on how you search for injection coefficient c and injection layer l? How expensive is this process?\n* In Figure 2, what is the x axis? Why should we expect perplexity to go down when x axis increases?\n* In Figure 3, how is “P(steered completion contains wedding related words)” determined? Can you be more explicit about this in the paper? \n* Can you elaborate on what the p value in table 3 and 4 is? That is, what is the null hypotheses you are testing (and the corresponding alternative hypothesis)?\n* In Figure 5/S4.5, referring to the model’s behavior as “off-target answer probabilities” is rather misleading. That phrase reads as the model’s distribution over the answers for non-target tokens, whereas it seems that the actual probabilities being referred to is the P@K.\n* How do you determine which example to use to determine the steering vector? Did you do any studies on variance across the effectiveness for vectors derived from different examples?\n* Are there any experiments to support the claim in the intro that activation engineering can enable composition of multiple traits, e.g. speech eloquence and mathematical content? If not, I would remove this to avoid overclaiming.\n* The notation in Algorithm 1 could use some improved clarity. For example, what is @? 
In code it can refer to a matmul; even though this seems like an indexing operation, the ambiguity is confusing for the reader." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The idea of activation engineering as “perturbing activations during the forward pass” is an important and simple idea. While it seems that much concurrent or previous work has also worked with this idea of editing activations, e.g. the ROME paper (Meng et al., 2022), adding steering vectors (ActAdd) to control model outputs is, to my knowledge, original (and the authors do well to cite concurrent work in Li et al., 2023b). \n\nQuality: Experiments are overall fairly thorough and demonstrate that ActAdd is a promising, intuitive, and simple approach to control model outputs.\n\nClarity: The overall flow of the paper is clear and well written.\n\nSignificance: This is an important contribution to interpretability and control of models using activation-level edits. The idea that you can controllably transform model behavior by adding a vector to the residual stream is important." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors introduce a paradigm of controlling model outputs/behavior which they term activation engineering. In activation engineering, a user controls model behavior by editing intermediate activations/hidden states of the model during the forward pass. They propose and focus on a specific method in the class of activation engineering called Activation Addition (ActAdd), in which a vector encoding an axis (e.g. love vs hate) can be added to the intermediate activations to make the model shift along that axis, e.g., in sentiment from negative to positive. They compute this vector by taking the difference along a single paired example (e.g.
a love vs hate example) and demonstrate effectiveness in experiments on sentiment analysis and toxicity reduction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The biggest weaknesses in my read are a lack of clarity in the algorithm and some of the experiment setup and results. I leave specific questions/suggestions on this point for the Questions section of the review.\n\nAlso, the authors should be careful to clarify their definitions and contributions. In the intro/abstract, they define activation engineering as “the inference time modification of activations in order to control model outputs”. However, section 2 states “Activation engineering involves creating vectors of activation which cause desired changes to output text when added to the forward passes of a frozen LLM”. This latter definition sounds more specific than the original one; there are many works which fall under the first definition but not necessarily the second one. From my read, I would be careful to claim that you are introducing activation engineering and might instead recommend stating it as highlighting AE as a class of methods to control behavior, under which ActAdd (your primary contribution) falls." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Questions added in Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The proposed approach is straightforward, lightweight, and demonstrates effectiveness on certain benchmarks. However, the experiments conducted only partially support the claims made in the paper (see more details under weaknesses).\n\nThe algorithm is well-presented, though some aspects of the experiments could benefit from further clarification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes ActAdd, a method to _steer_ a Language Model's generation in a particular direction. ActAdd is lightweight and merely involves using contrasting prompts (related to the direction you want to steer the LM in). These contrasting prompts are used to compute a steering vector that can be applied at inference time to change the model's behavior.\nThe authors experimented with various tasks such as steering the topic of the LM's generation, steering to reduce toxicity, and steering to change sentiment. \nThe authors also show that ActAdd preserves the model's knowledge by showing that when the model's accuracy remains unchanged on ConceptNet when asked to steer towards a certain topic." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper’s experiments are interesting but could benefit from further depth and clarity. In some cases, it’s challenging to fully understand the conclusions drawn from certain experiments. 
Additionally, some benchmarks created by the authors are quite small, which makes the results appear more anecdotal than empirical. There are also a few discrepancies with the baselines, as well as cases where only portions of larger benchmarks are used (e.g., why use only a subset of RealToxicityPrompts and Sentiment? The current experimentation is performed on ~10% of the test split).\n\nThe paper would greatly benefit from demonstrating how ActAdd performs on larger benchmarks specifically designed for steering and alignment, such as HelpSteer (1 and 2) [1,2]. Also, comparisons to methods that involve alignment training might give some indication of whether ActAdd can be used instead of or in tandem with some of these approaches in practice [3]. \n\nI've summarized my concerns as questions for certain parts of the experiments section.\n\nQuestions\n1. ACTADD CAN CONTROL WHAT THE MODEL TALKS ABOUT\n- Which dataset serves as the starting point for the prompts? Is the experiment based on a single prompt with 100 generations? If so, **using a single prompt might make it difficult to fully verify the claim that \"ActAdd can steer the model to talk about a topic.\"**\n- Why does ActAdd perform well for certain topics but not others (e.g., Art)? Is it effective only for steering toward specific topics? Additionally, it is unclear what accounts for the drop at c=0.5 for weddings. This might motivate some experiments on how reliable ActAdd is. \n\n2. ACTADD CAN REDUCE TOXICITY\n- The results in this section could be clearer. The only baseline models are the unsteered model, prompting, and PREADD, while other comparisons, such as FUDGE and AirDecoding, are tested on GPT-2, making direct comparison difficult given the model-dependent nature of the task.\n- Some discrepancies in results are also notable—for instance, the paper draws baselines from a prior paper (https://aclanthology.org/2023.findings-acl.636.pdf), but there are differences in the results for the unsteered OPT (0.152 vs.
0.134 toxicity, 49.9 vs. 8.9 fluency). Such large changes in fluency might suggest a difference in experimental setups, which could potentially affect the interpretation of ActAdd's fluency improvements.\n\n3. ACTADD PRESERVES THE MODEL’S GENERAL KNOWLEDGE\n\nThere are some concerns regarding the setup here. ConceptNet, as a knowledge base, typically requires single-word answer predictions. Showing that the model performs similarly with and without ActAdd doesn’t entirely demonstrate that ActAdd avoids side effects on the model’s factual accuracy. Perhaps this could be bolstered by verifying whether the factuality of longer-form generations remains unaffected. The FactScore benchmark [4] might be a good place to start.
\n\nFinally, while I attempted to review the provided code for further insights, it was challenging to navigate, and the links listed in tab 5 of the appendix did not seem to work.\n\n\nOverall, I believe the approach has potential, and the paper could greatly benefit from more thorough and comprehensive experimentation.\n\n\nRefs\n[1] https://arxiv.org/abs/2311.09528\n[2] https://arxiv.org/pdf/2406.08673\n[3] https://arxiv.org/abs/2310.05344\n[4] https://arxiv.org/abs/2305.14251" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "n/a" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed activation engineering method can be applied during inference and does not require gradient-based optimization (thus making it computationally fast to compute and apply).\n2. The proposed activation engineering method does not modify the original LM’s weights, and therefore would not change the model’s performance on tasks if the activation engineering method wasn’t applied. This is a unique advantage, as many related “steering” methods that modify an LM’s weights may harm model performance on tasks unrelated to the “steering”-related tasks.\n3.
The paper provides many compelling examples of where “ActAdd” has been able to successfully steer an LM’s output (e.g., sentiment, topic, reducing toxicity) across many model architectures." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes “ActAdd”, a type of activation engineering that, when applied to language models (LMs), can “steer” the model’s output during inference. “Steering” an LM, in this context, would mean enabling the user to enhance or control some high-level property of the generated text, such as the topic or sentiment of the text." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The authors have missed some related work in the area of activation engineering, and their paper may benefit from further comparing and contrasting the proposed “ActAdd” method to these works: \n\n[a] Sakarvadia, Mansi, et al. \"Memory injections: Correcting multi-hop reasoning failures during inference in transformer-based language models.\" arXiv preprint arXiv:2309.05605 (2023).\n\n[b] Heimersheim, Stefan, and Neel Nanda. \"How to use and interpret activation patching.\" arXiv preprint arXiv:2404.15255 (2024).\n\n[c] Vig, Jesse, et al. \"Investigating gender bias in language models using causal mediation analysis.\" Advances in neural information processing systems 33 (2020): 12388-12401.\n\nSpecifically, I would like the authors to discuss the computational cost of computing the steering vector, especially if one must test multiple steering vectors for multiple target layers (as it is not obvious which layers/vectors would work best for a specific “steering” goal, and thus a user may need to do (costly) experimentation). In particular, the “ActAdd” method relies on generating the “steering vector” by doing two partial forward passes for the steering prompt pair.
This itself is computationally expensive compared to a recent related work [a] which demonstrated that one could compute a “steering vector” simply using the model’s (un)embedding matrix, rather than running the steering prompts through the top N layers of an LM.\n\nFurther, the “ActAdd” “steering vector” is layer-specific within a given LM. For example, if a steering vector is generated for layer N, it is not clear if the same vector can be applied to layer N+1. This is a drawback of the method, as it may not be apparent to the user which layer would be best for an \"ActAdd\" injection. Again, I would be interested if the authors could discuss how their proposed layer-specific steering vector generation strategy compares to related work [a], which proposed a steering vector that is layer-agnostic." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What is meant by \"this section does not have complete statistics\" in Line 533?\n- How was grid search performed for ActAdd's hyperparameters? Were the results reported for the best set of parameters? If so, was a similar hyperparameter search conducted for the baselines to ensure accurate comparisons?\n- Could you clarify the hyperparameter “a” discussed in Appendix C and explain its function?\n- For which experiments are the prompts mentioned in Lines 1218-1223 used?
Appendix C presents a collection of unrelated details, making it difficult to follow and understand how it fits into the overall context of the paper. Could the authors clarify the connection to the experiments?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and easy to follow.\n- Activation addition is an intuitive and powerful technique that enables fine-grained control over model outputs.\n- The results convincingly show that activation addition outperforms included baselines in both sentiment control and toxicity reduction tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ActAdd, a controlled text generation technique that modifies the inner activations of an LLM during forward passes to guide text generation towards a specific property. These modifications are applied using steering vectors, computed by taking the difference between the activations of a positive and negative prompt at a specific layer. The results demonstrate that ActAdd outperforms the baselines on tasks such as toxicity reduction and sentiment control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The primary issue with the paper is that it is outdated. The paper refers to several works published in 2023 as \"contemporary,\" implying that they are based on the presented work. This suggests that the paper may have been rejected in previous conferences and is now being resubmitted to ICLR without any major modifications. However, works from 2023 cannot be referred to as contemporary in a submission to ICLR 2025.\n\nMoreover, the claim that both Liu et al. (2023) and Zou et al. (2023) are based on this work is questionable.
A quick review of these papers reveals that Liu et al. (2023) merely cites ActAdd as related work, and Zou et al. (2023) actually outperforms ActAdd on one of the tasks. Therefore, I do not believe ActAdd presents any novel idea or result. This undermines the relevance of the method, and I believe this alone is sufficient for rejection. However, if I have misunderstood this point, the authors could clarify their claims.\n\nAdditional (and significant) weaknesses include:\n- Outdated Models: Most of the experiments were conducted on outdated models (OPT, GPT2-xl, and Llama-2). While a few experiments were rerun on Llama-3, there were no baseline comparisons for these models.\n- Inconsistent Baselines: The models used in the baselines do not match. For example, in Table 3, various models are used without a clear pattern. Ideally, all models should be run for every baseline to ensure fair comparison.\n- Outdated Baselines: Baselines such as Fudge and PreAdd have been surpassed by newer techniques (e.g., [1]). Additionally, the paper does not include any baselines that use white-box transformations to control model behavior, despite several relevant works from 2023 (Liu et al. (2023) and Zou et al. (2023)).\n- Inconsistent Perplexity Measurements: Perplexity for the included models was measured using Davinci 002, an old and less effective model. Furthermore, Lines 503-505 state that PreAdd's perplexity was measured using Davinci 001, making direct comparisons between the two methods problematic.\n- Omission of Fudge: In Lines 378-380, Fudge is omitted, despite performing better on certain aspects and only slightly worse on others. This is a strange misrepresentation of the results.\n- Redundant Experiments: The experiments in Sections 4.1 and 4.2 add little to the discussion, as they merely confirm that activation addition works. 
Furthermore, Tables 3 and 4 essentially present the same findings, but in a more interesting and applicable setting.\n- Basic Metrics: Perplexity and cosine similarity are insufficient metrics to fully capture fluency and relevance. Since controlled text generation methods edit the model's internals, they can yield unintuitive results that these metrics may not fully capture. The authors should include human or LLM-based evaluations to assess the outputs in Tables 3 and 4 and compare them with baselines.\n- Insufficient Code: The provided code lacks essential instructions and does not include scripts to reproduce the experiments. It only includes some notebooks for experimenting with activation addition, which overlooks the most important reason for providing the code. Additionally, the link to the GitHub repository that is present in the included code (playground.ipynb, top) violates the double-blind review process, as it is not anonymized.\n- Unconvincing Experiment in Section 4.5: Evaluating a model with activation addition on one or more recent, open-form reasoning benchmarks (such as GSM8k, MixEval, or MMLU-Pro) would be much more convincing than the benchmark with perplexity measurements.\n- Different Hyperparameters Across Experiments: If I am correct, the results for activation addition were generated using different values for top_p and temperature compared to some baselines (e.g., PreAdd), which undermines the validity of the comparisons. All non-critical hyperparameters should be kept consistent across baselines.\n\n[1] Dekoninck, Jasper, et al. \"Controlled text generation via language model arithmetic.\" arXiv preprint arXiv:2311.14479 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We steer LLMs by adding a bias term computed from the model's representations of simple prompt pairs." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024steering,\ntitle={Steering Language Models with Activation Engineering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2XBPdPIcFK},\nnote={under review}\n}" }, "abstract": { "value": "Prompt engineering and finetuning aim to maximize language model performance on a given metric, like toxicity reduction. However, these methods do not fully elicit a model’s capabilities. To reduce this gap, we introduce _activation engineering_: the inference-time modification of activations in order to control (or _steer_) model outputs. Specifically, we introduce the _Activation Addition_ (ActAdd) technique, which contrasts the intermediate activations on prompt pairs (such as “Love” versus “Hate”) to compute a _steering vector_. By tactically adding in e.g. the “Love”−“Hate” steering vector during the forward pass, we achieve SOTA on negative-to-positive sentiment shift and detoxification using models including LLaMA-3 and OPT. ActAdd yields inference-time control over high-level properties of output (like topic and sentiment) while preserving performance on off-target tasks. ActAdd is lightweight: it does not require any machine optimization and works with a single pair of data points, which enables rapid iteration over steering. ActAdd demonstrates the power of activation engineering." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "interpretability", "steering", "alignment", "safety", "sentiment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e43a660247ef40a4de5372f90aba33f1996a3712.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d4e3b28b1c04dcc3f336bad1755c1d1e2d9e242c.pdf" }, "title": { "value": "Steering Language Models with Activation Engineering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2XdRkRHBT9
AVOIDING BARREN PLATEAUS VIA GAUSSIAN MIXTURE MODEL
main
Withdraw
Barren plateaus;Gaussian mixture model;Quantum circuits;Variational quantum algorithms
applications to physical sciences (physics, chemistry, biology, etc.)
Yun Shang
~Yun_Shang1
3;3;5;5
3;3;4;3
1;1;3;2
1;2;2;2
1;1;2;2
4
3.25
1.75
1.75
1.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1 In Figure2, how to determine the inactive parameters\n2 in theorem1, what’s the impact of the partial of f(\\theta)\n3 in theorem 2, there is no definition of M, how to determine the value of M for the different number of layers.\n4 what’s more, there is no comparison with other approaches," }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The novel parameter initialization strategy is provided for the barren plateau phenomenon. And the prove is given," }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the variational quantum algorithms and deal with the barren plateau phenomenon. The new parameter initialization strategy is proposed combing with gaussian mixture models. The prove is provided that the initialization could avoids the barren plateaus problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The provided parameter initialization strategy is not clear in the figure 1. What’s more, the comparison with other methods is not given. Furthermore, the induced Gaussian mixture models is not the firstly introduced." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- If I understand the result correctly, this only prevents “barren plateau” at initialization. I wonder if there is any comment the authors can make about the optimization trajectory (can you say anything about the norm of the gradient other than initialization?)\n- I understand the above point is empirically argued e.g., in Figure 4. But the interpretation starting from line 413: “Moreover, the gradient norm remains within a relatively large range throughout the entire training process. This enables our approach to escape … vanishing gradient problem … . *These observations are entirely consistent with the conclusions drawn in Theorem 1*.” But isn’t Theorem 1 ONLY about initialization?\n- In Theorem 1, the assumption of the parameters $\\theta$ is that it follows $\\mathcal{G}_1(\\sigma^2)$ ? But this is just the Gaussian distribution $\\mathcal{N}(0, \\sigma^2)$. Could you explain how to interpret this? Why is this not compared to [1]? (other than one line sentence in line 228, “This is in stark contrast to the exponential lower bound $O(1/L^N)$ found in previous works for global cost functions Zhang et al. (2022a); Wang et al. (2023).)\n- How is the experiment set up? Are these results of actually applying VQA to a quantum computer? Or are these some numerical simulations?\n- Line 370: “…we compare our proposed method with… , Gaussian distribution $\\mathcal{N} (0, \\frac{1}{4S(L+2)})$ : is this variance taken from [1, Theorem 4.1]? It should really be cited clearly…\n- Can you say anything about the solution quality? (converged $\\theta$ after some iterations)." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper proposes and proves that the mixture of gaussians as an initialization scheme avoids barren plateau (at initialization) even when the cost function is global. Experiments, even though settings are a bit unclear, seem to support their claim." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes using mixture of gaussians as initialization for variational quantum algorithms using mixture of gaussians. Theoretical results show the expectation of the norm of the gradient following the assumed distribution is lower bounded." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My biggest concern is that the relation to the previous work [1] is hardly discussed. The authors should at least mention that [1] proposed for the first time the gaussian initialization precisely to prevent barren plateau at initialization—the exact setting this paper is addresses. I understand there are some differences discussed briefly starting from line 258, but even there the authors do not mention [1] uses Gaussian initialization. Such writing gives me the impression that the authors intentionally hide due to significant similarity with [1].\n- I believe some people refer to “barren plateau” not only at the initialization but also more generally, i.e., vanish exponentially with the size of the system, c.f. [2, 3]. 
The authors should clearly state that the “barren plateau” they mention is only with regard to initialization; this is only mentioned in line 52 as: “the phenomenon of the barren plateau is characterized by the *randomized initialization* of parameters $\\theta$ in VQAs,”…\n- Continuing the above point, in my opinion it is misleading for instance to write in line 87 as: \n$\\theta_{k+1} = \\theta_k - \\alpha \\nabla_\\alpha f(\\theta_k)$ … Therefore, typically $|| \\nabla_\\theta f (\\theta_k) ||^2$ is used to determine whether the cost function can be updated.” \nGiven that this paper is only about initialization, what it really shows is that $|| \\nabla_\\theta f (\\theta_0) ||^2$ has significant magnitude, but the result does not say *anything* about $|| \\nabla_\\theta f (\\theta_k) ||^2$ for $k > 1$.\n- Please use parenthesized citations correctly; it’s very hard to read, especially since the citation text colors are the same as the main text color\n- Typos:\n - line 214: “Then We expand…”\n - line 231: wrong quotation marks for “inactive parameters”, etc\n\n[1] Zhang, et al. (2022) “Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits” \n\n[2] Fontana et al. (2024) “Characterizing barren plateaus in quantum ansätze with the adjoint representation”\n\n[3] Larocca et al. (2024) “A Review of Barren Plateaus in Variational Quantum Computing”
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* You claim your method \"avoids barren plateau\". However, the theoretical results only guarantee barren plateau is avoided at initialization. Do you have any insight on how this method may help avoid this phenomenon during training? \n* Where does the eq for $f(\\theta_{k+1})$ on line 87 come from?\n* What is the importance of the result given by eq. (5)?\n* How do you achieve a bound that does not depend on the number of qubits $N$ for Theorem 1? This seems surprising to me.\n* You interchangeably use $O$ and $\\mathbf{O}$ to describe observables. You also index this $O$ sometimes. Is this notation you did not define or simply a typo?\n* In Figure 4, your method seems to reach the desired solution, but then as iterations continue, it diverges away before coming back. What explains this phenomenon you think?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. As far as I am concerned, this is a novel contribution in the field of quantum machine learning.\n2. Given it concerns only parameter initialization, the proposed initialization scheme is simple and easy to implement and compute efficient.\n3. The method is backed up by solid theoretical guarantees which are also validated empirically through experiments. Also, the theoretical guarantees hold for rather practical situations, not just an idealized case.\n4. The proofs in the appendix seem correct and are easy to follow." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel initialization scheme for parameterized quantum circuits optimized with variation quantum algorithms (VQA). The method employs a strategy based on Gaussian Mixture Models (GMMs) to initialize the parameters of an $L$ block $N$ qubit ansatz. The paper claims that for the considered ansatz, the initialization scheme avoids the barren plateau phenomenon (BP).\n\nTheoretically, the authors prove lower bounds on the expectation of the gradient norm under three different assumptions for the observable in the loss function. By showing the expected gradient norm is nonzero, they are able to theoretically guarantee absence of the barren plateau at initialization\n\nThe authors also validate their initialization strategy on synthetic experiments. These experiments confirm that this initialization scheme indeed avoids the BP in practice as well." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although your method is applied to a popular ansatz, it does not seem to generalize to other ansatz structures. Could you discuss potential for generalizing to different ansatz?\n2. I found the paper to be written in a style that hinders understanding. There are various errors/inconsistencies in the notation and long inline math sections (around lines 158, 236, 250 for instance) that I personally found difficult to read. Please see the minor comments section for examples. It would be constructive to break up long inline math sections and give more context around complex mathematical equations.\n3. It seems as though you did not validate your experiments using multiple runs/seeds. Number of runs/standard error/variance is not reported. 
If you indeed ran your experiments only once, this would be a major weakness of your experimental section; given your method is based on pseudorandom initialization, this would hinder the statistical validity of the results. Would it be possible for you to provide results from multiple runs along with error bars and confidence intervals?\n\nAlso, here are some minor comments you may want to address for the final version:\n* Generally, it would be clearer if you defined all variables present in a theorem in the theorem statement\n* In lines 86-100, you introduce the VQA problem and define the cost function. This should go in the notation/background section.\n* Line 54 typo: BP is underlined for citation\n* Line 73 typo: \"expressibilityRagone et al.\"\n* Line 227 typo: you have a citation in your big O\n* Line 1131: \"Theorem 1\" should be \"Lemma 1\", I believe\n* Generally inconsistent use of $cos$ and $\\cos$\n* In the proofs, inconsistent use of $I_S$ vs $I_s$
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper adresses a fundamental problem in the context of variational quantum algoritms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Variational Quantum Algorithms (VQAs) are important tools to exploit the capabilities of Noisy Intermediate Scale Quantum devices. The basic idea is to leverage parametrized quantum circuits (PQCs), whose parameters are often angles of rotational gates, to find the ground state of a given Hamiltonian, which plays the role of the cost function. \n\nVQAs, however, suffer from several shortcomings. One of the most sever one is known as Barren Plateaus (BPs). When dealing with many qubits and layers in the quantum circuit, the number of variational parameters rapidly increases thus making the optmization problem more challenging. Furthermore, the optimization landscape becomes extrmely difficult to navigate and this results in very small gradient signals (during the optimization of the parametrized quantum circuits) which prevents to converge to global minima and, therefore, to the desired ground state wave function. \n\nIn this paper the authors propose to use Gaussian Mixture Models (GMMs) for the initialization of the parameters in PQCs. The main contribution of the paper claims to **rigorously** solve the problem of **barren plateaus** by initializing the parameters of PQCs which in turn leads to higher gradient signals throughout the optimization of the parameters." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The major weakness of this paper lies in its claim. 
In the abstract the authors claim that \"*rigorously prove that, the proposed initialization method consistently avoids the barren plateaus problem for hardware-efficient ansatz*\". While this may even be true in theory, I find the numerical experiments and the PQCs considered in the paper not general enough for this claim to hold with its current phrasing. This very **strong** claim is reiterated several times throughout the paper. I strongly advise to mitigate such a claim to something more aligned with the results of the manuscript. \n- Another important weakness of the paper is that the code for reproducing the experiments is not provided, thus preventing reproducibility of the experiments and further investigation of the implementation. \n- Furthermore, I find the presentation of the paper hard to follow at times. The notation is often confusing and cluttered. I believe that recalling what each variable refers to, using more display math (instead of inline equations), and providing some intuitive sketches may help.\n- The ansatzes used in the paper are not general and do not explicitly account for entanglement. As is known in the field of quantum computing, a set of universal gates, e.g., a collection of gates able to represent any possible unitary transformation, consists of rotational gates and entangling gates such as CNOT. I believe this aspect substantially limits the generalizability of the proposed claims and theorems. Does the set of gates considered in this work represent a set of universal gates? If so, that should be made clear. \n- I find the numerical results shown in the main paper to be somewhat inconclusive. In particular, the authors omit many fundamental details, such as the type of algorithm used to optimize the PQC, whether it consists of global or local optimizations at each iteration, whether measurement noise and/or hardware noise are taken into account for these experiments, whether the results change with different algorithms, etc.
I believe that a thorough ablation study is necessary in order to support the strong claims of this work. \n- I furthermore find it puzzling that other results on quantum chemistry simulation are limited to the appendix, as I believe those may arguably be even more relevant (or at least complementary) compared to the Ising model.\n- About the layout of the paper, it looks to me as if Table 2 and Table 3, as well as Table 5 and Table 6, are duplicates of each other. Could it be, or am I missing something?\n- Another concern about the layout of the paper is the citation style. I strongly recommend to fix the citation style, e.g., substituting \\cite with \\citep where needed in order to wrap refs in parentheses where suitable. \n- The notion of \"observable\" may not be immediately clear to a general audience; I believe it would be useful to provide more concrete examples of real physical observables which could be mapped onto the generalized $\\mathbf{\\mathcal{O}}$ discussed in the paper. \n- I think the writing and the clarity of the paper can in general be improved. \n- Table 4: I think it would be good to mention that higher gradients at initialization are better. This may not be intuitive at first sight. Perhaps making the best results bold without editing the caption would already help. \n- Line 32: I find the claim \"*[...] VQAs provides a feasible approach to solving complex problems [...]*\" to be far too general. I strongly encourage the authors to be more specific about what VQAs can be good at. \n- Line 83: I recommend changing the wording *complex distribution* to complicated/non-trivial, as the former may be confused with a different meaning, e.g., a distribution of complex values.\n- I strongly recommend adding a \"Related work\" section to give more structure to the paper and streamline the reading. Furthermore, I'd recommend a more thorough review citing also other work using ML methods to enhance optimization of PQCs, such as Refs. [1-3] below.
Similarly, I recommend providing the list of contributions at the bottom of page 2 in bullet points so that they end up being more accessible and more evident to readers. \n- For people not familiar with the optimization of PQCs, I'd briefly introduce the parameter shift rule and how gradients are computed on quantum computers at the beginning of the last paragraph before section 2. I believe this would be useful to make the paper self-contained. \n- Line 158: Something seems wrong with the parenthesis in the last equation of the sentence. As mentioned above, I strongly recommend using more display math instead of inline equations, which are often cluttered and hard to parse. If the authors need space, I'd recommend removing one of the two figures Figure 1 and Figure 2, as I think the key messages therein can be merged into one figure. \n- On the other hand, I'd find it beneficial to have some intuitive sketch of the main results/theorems the authors claim in the paper. That might make the paper more accessible also to a general audience (more from the ML community). At the moment the paper seems very much suited to an audience of physicists. For instance, the authors never define what a pure state is. This cannot be assumed as common knowledge for the broader audience of this conference. \n- To streamline the reading of the paper, I'd find it useful to often recall what $q,n$ and $L,N,M$ are. That would help a lot to navigate both theorems and follow the sketch of proofs. \n- Why does the gradient norm shown in figure 4 (right panel) have this double-peak structure? Does this have any physical meaning? Is it intuitive why someone should expect such a steep increase in gradient norm during optimization? I believe this relates to the capability of the proposed algorithms to overcome barren plateaus, but this is not discussed explicitly in either the caption or the text.
This might make it hard for a reader not familiar with the problem of Barren Plateaus to immediately grasp this. \n- In line 470: \"*We validate our algorithm for diverse problems, [...]*\" I think this is not entirely correct. The paper only tackles (in the main part) the Transverse Field Ising Model with different setups. I think the authors should be clearer and more explicit here. This comment also applies to other parts of the paper, where it would be useful to revisit the text in order to ensure more precise claims. \n- In line 483: what does the HEA acronym mean?\n\n### References\n\n- [1] [Tamiya, Shiro, and Hayata Yamasaki. \"Stochastic gradient line Bayesian optimization for efficient noise-robust optimization of parameterized quantum circuits.\" npj Quantum Information 8.1 (2022): 90.](https://www.nature.com/articles/s41534-022-00592-6)\n- [2] [Nicoli, Kim, et al. \"Physics-informed bayesian optimization of variational quantum circuits.\" Advances in Neural Information Processing Systems 36 (2024).](https://proceedings.neurips.cc/paper_files/paper/2023/file/3adb85a348a18cdd74ce99fbbab20301-Paper-Conference.pdf)\n- [3] [Anders, Christopher J., et al. \"Adaptive Observation Cost Control for Variational Quantum Eigensolvers.\" Forty-first International Conference on Machine Learning.](https://openreview.net/pdf?id=dSrdnhLS2h)" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel parameter initialization strategy based on Gaussian Mixture Models which avoids the barren plateaus problem for hardware-efficient ansatz with arbitrary length and qubits and any given cost function."
}, "_bibtex": { "value": "@misc{\nshang2024avoiding,\ntitle={{AVOIDING} {BARREN} {PLATEAUS} {VIA} {GAUSSIAN} {MIXTURE} {MODEL}},\nauthor={Yun Shang},\nyear={2024},\nurl={https://openreview.net/forum?id=2XdRkRHBT9}\n}" }, "abstract": { "value": "Variational quantum algorithms is one of the most representative algorithms in\nquantum computing, which has a wide range of applications in quantum machine\nlearning, quantum simulation and other related fields. However, they face challenges\nassociated with the barren plateau phenomenon, especially when dealing\nwith large numbers of qubits, deep circuit layers, or global cost functions, making\nthem often untrainable. In this paper, we propose a novel parameter initialization\nstrategy based on Gaussian Mixture Models. We rigorously prove that, the\nproposed initialization method consistently avoids the barren plateaus problem\nfor hardware-efficient ansatz with arbitrary length and qubits and any given cost\nfunction. Specifically, we find that the gradient norm lower bound provided by the\nproposed method is independent of the number of qubits N and increases with the\ncircuit depth L. Our results strictly highlight the significance of Gaussian Mixture\nmodel initialization strategies in determining the trainability of quantum circuits,\nwhich provides valuable guidance for future theoretical investigations and practical\napplications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Yun_Shang1" ] }, "authors": { "value": [ "Yun Shang" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Barren plateaus", "Gaussian mixture model", "Quantum circuits", "Variational quantum algorithms" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "shang|avoiding_barren_plateaus_via_gaussian_mixture_model" }, "pdf": { "value": "/pdf/266ce4194f4eeceac1406f81c90f5805fa0285b6.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "AVOIDING BARREN PLATEAUS VIA GAUSSIAN MIXTURE MODEL" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2Y6xGE1K60
Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding
main
Active
Large language model; Knowledge fusion; Speculative decoding
foundation or frontier models, including LLMs
5;5;6
4;3;4
2;3;3
3;2;3
3;3;3
5.333333
3.666667
2.666667
2.666667
3
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In 3.2 Verification, for Tree-Based Verification, you claim to use benchmark datasets such as GSM8K to train the classifier, but then in your test, you incorporate the GSM8K dataset as well. Is there any information leakage in terms of that you are training your verifier on the test set so that it gains advantage over other models?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The authors have put effort in experimenting their proposed framework under different setups, including various draft and assistant models, different simulated scenarios, etc.\n- The proposed framework gains some advantage over the existing framework of Co-LLM in certain scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Collaborative Speculative Decoding (CoSD) that fuses LLM knowledge at test time. The algorithm employs a draft model to generate initial response sequences and a rule-based or decision tree to decide when to leverage an assistant model to improve the drafts.\nThe authors have conducted experiments using different pairs of LLMs and under various experimental setups." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In Table 2, I notice that in most cases, the fused model underperforms the draft model and the assistant model.\nFor instance, for Pair 1, none of the fusion methods outperform both draft and assistant model for GSM8K, HumanEval; for Pair 2, none of the fusion methods consistently outperform both draft and assistant model for GSM8K, and MMLU.\nThen I wonder what is the point of fusing knowledge in these cases if we can simply adopt one model instead of the other?\n\n- It seems that for Pair 3, CoSD-Rule performs exceptionally well on GSM8K, yielding 45.47 while the draft and assistant models yield 25.01 and 35.43, which is very different from the performance patterns for this same pair on other datasets such as MMLU and also other pairs. Could you give more insights into such a result? Could you present some examples that CoSD-Rule excel at under this situation that cannot be addressed by either the draft nor the assistant model?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is the proposed algorithm suitable for collaboration among multiple LLMs? What will be the potential challenges?\n2. Can you explain more about the limitations of the current method? I'm curious when it doesn't work well." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper provides an interesting perspective to fuse knowledge between LLMs using speculative decoding, which leverages the strengths of different LLMs while still keeping the efficiency.\n2. The experiment setting is interesting, which tries complementary knowledge fusion, catastrophic forgetting recovery, capacity imbalance and different tokenizers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel collaborative speculative decoding algorithm which can efficiently fuse the knowledge from different LLMs during inference. The experiment setting is quite interesting and includes different types: complementary knowledge fusion, catastrophic forgetting recovery, capacity imbalance and different tokenizers. The results are better than different baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper only does the experiment in each pair of the LLMs. It would be interesting to see more LLMs collaboratively fuse knowledge.\n2. It would be better to show more details about the limitations of the proposed method and show some error analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- \"the algorithm regenerate and re-verify iteratively until all tokens are accepted\" How many iterations does it take on average? \n- during the training process of the decision tree, if neither the draft model's generation nor the assistant model's generation match the target, you drop the sample and continue the loop with i &larr; i+1. Any ideas of improvement other than simply dropping these samples?\n- typos: line 287, \"tree\" to \"three\", \"drat\" to \"draft\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- interesting and inspiring idea on fusing knowledge at the decoding time\n- The algorithm is clearly presented in this paper through both a workflow diagram and mathematical expressions.\n- Both Rule-Based verification and Tree-Based verification are well-designed and both make sense to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion during decoding. Upon the idea of speculative decoding, the paper proposes two novel decision-making rules: Rule-based and Tree-Based. The method features 1) efficiency (parallel generation, no additional model training), 2) transferability across different domains and models with different tokenizers, and 3) interpretability. CoSD successfully improves baselines by up to 10%." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I'm not sure if the goal of this algorithm is to A) achieve performance comparable to the assistant model but in a more efficient way, or if it's aimed at B) outperforming both the draft model and the assistant model individually (1+1>2). How do these objectives apply to the four scenarios of knowledge fusion discussed in section 4.1? If the goal is A, since the draft models in complementary knowledge fusion and catastrophic forgetting recovery scenarios are about the same size as the assistant model, and the algorithm involves autoregressive generation of the draft model, I doubt the algorithm improves efficiency. If the goal is B, I can't see improvement based on Table 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024speculate,\ntitle={Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2Y6xGE1K60},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. 
CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large language model; Knowledge fusion; Speculative decoding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8c7bc724e9f1abc87d659e8c61bf52a690295352.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2YzeOOjvOi
DET: Learn to Solve the Tunnel Traveling Salesmen Problem using Double-Encoder Transformer
main
Active
Combinatorial Optimization; Transformer; Deep Reinforcement Learning; Tunnel TSP
reinforcement learning
3;3;5;5
3;4;3;2
3;2;3;3
2;2;2;2
3;2;3;2
4
3
2.75
2
2.5
-0.707107
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How does the proposed DET model perform in other TSP variations or combinatorial optimization tasks that have similar clustering constraints?\n\nCould further feature augmentation, beyond tunnel information, bring improvements, or would such modifications saturate the model's performance gains?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The DET model's architecture, with separate encoders for nodes and tunnels, is a practical adaptation that enables better handling of tunnel-specific constraints within TSP solutions. The authors demonstrate measurable performance improvements over existing solvers for this problem variant, validating DET’s utility in improving optimality gaps in tunnel TSP instances." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles a variation of the Traveling Salesman Problem (TSP) referred to as the tunnel TSP, introducing a model called the Double-Encoder Transformer (DET) to solve it. Unlike conventional TSPs, the tunnel TSP includes specific constraints for tunnel traversal, which traditional neural TSP solvers struggle to handle effectively. 
The proposed DET model enhances existing autoregressive neural TSP solvers by incorporating separate encoders for nodes and tunnels, allowing the model to more accurately process the unique interactions between these elements in the tunnel TSP. The authors demonstrate that integrating DET into established neural solvers (such as POMO) can reduce the optimality gap for tunnel TSP, enhancing solution quality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While DET shows practical utility, its novelty is limited due to its reliance on well-established architectures (POMO) and scoring techniques (regret). The approach primarily focuses on adapting feature encoding without introducing significant new concepts in neural TSP solving or reinforcement learning. Moreover, the evaluation lacks comparisons with a broader range of TSP variants and solvers, which would better contextualize DET’s relative efficacy. Lastly, the choice of DET may result in increased computational overhead due to the dual encoder, which the paper does not address in terms of efficiency or resource requirements." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see Weaknesses." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Separating node and tunnel information into different encoding pipelines is a reasonable approach to improve the overall performance.\n- The plug and play design of DET allows it to integrate smoothly with existing methods, as demonstrated in the experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the tunnel traveling salesman problem using a deep reinforcement learning approach. It introduces the Double Encoder Transformer (DET) module, which encodes node and tunnel information through two separate encoders. The DET is compatible with existing neural solvers, allowing it to be utilized in a plug and play manner. Experimental results indicate that the proposed DET generally improves the performance of existing neural solvers for tunnel TSP." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The technical contribution is somewhat limited to the DET module and the separation of the node and tunnel features.\n- More explanation on some topics could be beneficial, for example,\n - How are the baseline models trained? Do they explicitly receive tunnel information as inputs to their single-encoders? Or is it only implicitly incorporated through the cost/reward?\n - What is the size of the test samples used to evaluate the models in Table 1? Reporting the variations across multiple training/testing runs would strengthen the claims about DET effectiveness, especially for claims such as guaranteed improvements (Line 468-469)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How do the authors incorporate tunnel information for baselines such as POMO? \nCould the authors provide a detailed computational analysis, including inference times and parameter counts, to better understand the practical implications of the additional neural network modules? \nGiven that Tunnel TSP represents a specialized case of Clustered TSP, what are the technical challenges in extending the proposed framework to more general CTSP instances or related vehicle routing problems (e.g., pickup and delivery)? \nCould the authors elaborate on concrete real-world applications where their framework provides practical advantages over existing approaches?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors present a comprehensive evaluation of the model, demonstrating its effectiveness across diverse instances of the Tunnel TSP problem. The proposed approach shows versatility by successfully enhancing multiple neural solvers in addressing the Tunnel TSP, making the design plug-and-play. The paper is generally well-structured and clearly presented." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work aims at a Transformer to solve the Tunnel Traveling Salesmen Problem. Previous single-encoder models are general to distinct vehicle routing tasks but in this work the Transformer is applicable to a specified variant. The performance is incrementally improved since the average optimality gap is still large, and Transformer's applicability obviously weakens." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The technical novelty appears limited. The primary contribution centers on introducing a tunnel-specific encoder and corresponding decoding modifications to existing architectures, rather than presenting fundamentally new insights or methodologies. \nThe method's applicability appears narrowly focused on Tunnel TSP, with insufficient exploration of its potential generalizability to broader combinatorial optimization problems. \nThe evaluation relies exclusively on synthetic datasets, raising questions about the model's robustness to varying problem sizes and other data distributions. \nThe computational complexity of the tunnel encoder appears comparable to the node encoder, potentially introducing significant overhead." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Tunnel TSP looks like the simplest special form of CTSP. Then can the definition of CTSP easily cover or extend to the one of tunnel TSP too? The comparison between them needs to be specified for better understanding.\n- Please provide clear motivations for the target task and proposed approaches. It will help the readers to understand the novelty and importance of this work.\n- Is there any challenge when DRL is applied to CTSP?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- It clearly redefines tunnel TSP task with a notation TTSP-m-n, where there are a total of m nodes and n tunnels. In this setting, a node can be connected or standalone, and the cost is similar to the original version, but includes a fixed distance, D(S).\n- The model utilizes two separate transformer encoders, which encode different information like node and tunnel. It enhances the overall performance via distinctly encoding tunnel information from graphs.\n- The proposed method can effectively solve scale-variant tunnel TSP problems, which is hard for existing approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a challenging variant of Clustered Traveling Salesman Problem (CTSP), called tunnel TSP, which incorporates an important constraint requiring the traversal of a prescribed set of tunnels. The authors utilize deep reinforcement learning (DRL) for this problem, where the method is called Double Encoder Transformer (DET). 
It encodes node and tunnel information and can be applied to the existing method to solve tunnel TSP problems. The experimental results show the effectiveness of the DET model on various scaled problems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is unclear why the tunnel TSP task is important to systematically define and resolve. This task looks like a simple variation of CTSP. Please provide real-world examples to support its importance.\n- The explanation lacks clarity on why two encoders are necessary and what specific motivation supports this design choice. In addition, the overall method seems really simple and lacks a strong sense of novelty.\n- While this may be the first application of DRL to CTSP, its novelty is questionable. The proposed method appears to be a straightforward application, lacking clarity on any specific challenges or problems it addresses.\n- There are no experiments comparing costs. Additionally, it is unclear how the existing models would perform if the size of these models were increased." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024det,\ntitle={{DET}: Learn to Solve the Tunnel Traveling Salesmen Problem using Double-Encoder Transformer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2YzeOOjvOi},\nnote={under review}\n}" }, "abstract": { "value": "We delve into a challenging variant of the Traveling Salesman Problem (TSP), namely tunnel TSP, which incorporates a new important constraint requiring the traversal of a prescribed set of tunnels. 
While traditional deep reinforcement learning (DRL) based neural TSP algorithms excel in optimizing routes without tunnel restrictions, they often struggle to achieve optimal performance in tunnel TSP due to the neglect of the crucial role of tunnel attributes during solution generation. To address this challenge, we propose a simple but effective and flexible technique, called Double-Encoder Transformer (DET), which can be seamlessly integrated into various existing autoregressive neural TSP solvers. DET processes node and tunnel location information separately and encodes them in two distinct feature spaces. Following an efficient fusion strategy, DET then integrates the encoded information from nodes and tunnels, harnessing their intricate interactions. Experimental validation demonstrates that integrating DET into existing autoregressive neural solvers significantly improves performance, enabling us to reduce the average optimality gap for tunnel TSP from 12.58% (of the previous Single-Encoder model) to 7.35%." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Combinatorial Optimization; Transformer; Deep Reinforcement Learning; Tunnel TSP" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2ae0ec46e9647974a42cf4d0e17b6b514b65f582.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/9d471e5272361ea9e7b73bd46d6092767b033482.zip" }, "title": { "value": "DET: Learn to Solve the Tunnel Traveling Salesmen Problem using Double-Encoder Transformer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ZK8zyIt7o
Improving Long-Text Alignment for Text-to-Image Diffusion Models
main
Active
Long Text Alignment;Diffusion Models;Preference Optimization;Text-to-Image Generation
generative models
3;5;6;8
4;4;4;4
2;2;4;2
2;2;3;3
3;3;2;3
5.5
4
2.5
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "na" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see Weaknesses" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation for using preference models is well-founded, and the paper is well-written.\n2. It is interesting to identify two distinct focuses within preference models, and the analysis provided is both reasonable and thorough." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach to enhance the alignment between long text descriptions and generated images in text-to-image diffusion models, introducing segment-level encoding to overcome input length limitations and decomposed preference optimization to mitigate overfitting and improve text-relevant alignment during fine-tuning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "weakness\n1. 
I am unsure why multiple <sot> tokens are retained; regarding the retention or removal of tokens, a more detailed explanation or analysis is needed, as it currently leaves me confused.\n2. After reweighting, whether there will be a noticeable difference in the aesthetic quality of the generated results (due to text-irrelevant components) remains unclear. For Appendix B.1, it would be beneficial to provide some visualizations of the outcomes from the two loss functions.\n3. Segmenting to leverage CLIP's alignment effect is an intuitive innovation, but does this become irrelevant in light of the development of Vision-Language Models (VLMs)? Can the current innovation still contribute to VLMs?\n4. On line 363, it mentions mitigating the risk of overfitting to Denscore. Could you clarify where the potential source of this overfitting lies?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "[1] Does your proposed segment-level encoding strategy demonstrate significant effectiveness for texts of varying lengths? Specifically, how does the model perform with very short texts (fewer than 10 words) or very long texts (over 500 words)?
Could you provide additional experiments to show comparative results under different text length conditions to verify the generalizability of the segment-level encoding strategy?\n[2] You mentioned using a reweighting strategy to mitigate the model's overfitting issue, but the description of this process in the paper is rather brief. Could you provide detailed steps or pseudocode to explain the implementation of this strategy? Additionally, does this method have any quantitative results to demonstrate its effectiveness in reducing overfitting in specific scenarios? Could you include comparative data from the experiments to validate the impact of this strategy?\n[3] How were the 5k images in the test set specifically selected from datasets like SAM and COCO2017?\n[4] Could you briefly explain the selection of models like CLIP-H and HPSv2 in the experimental section of Chapter 5, as well as the chosen evaluation metrics?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "[1] It introduces a segment-level encoding strategy that effectively handles long text inputs by dividing and separately encoding segments, overcoming traditional model input limitations and enhancing text-to-image alignment. \n[2] The preference model is innovatively decomposed into text-relevant and text-irrelevant components, with a reweighting strategy to reduce overfitting and improve alignment precision.\n[3] The paper conducts extensive experiments, demonstrating significant improvements in long-text alignment over existing models like PixArt-α and Kandinsky v2.2, proving the method's effectiveness for complex text generation tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel method to improve text-to-image (T2I) diffusion models in handling long text inputs. 
Due to the input length limitations of existing encoders like CLIP, it becomes challenging to accurately align generated images with long texts. To address this issue, the authors propose a segment-level encoding strategy, which divides long texts into segments and encodes them separately, combined with a decomposed preference optimization method to reduce overfitting and enhance alignment. Experimental results show that the fine-tuned model surpasses several existing foundation models in long-text alignment, demonstrating significant improvements in handling long text inputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "[1] The paper proposes a segment-level encoding strategy to handle long texts but does not thoroughly validate the performance of this strategy under different text length conditions. For very short or very long texts, can the segment-level encoding still maintain the same alignment effectiveness? The lack of fine-grained comparative experiments makes it difficult to adequately demonstrate the applicability of segment-level encoding across a wide range of text lengths.\n[2] The paper proposes a reweighting strategy to address overfitting, but lacks detailed experimental data to demonstrate its effectiveness, failing to adequately prove its specific impact on reducing overfitting.\n[3] The segment-level encoding and preference optimization strategies proposed in this paper show excellent performance in the experiments, but lack an analysis of the method's limitations. It would be beneficial to discuss whether these segment-level encoding methods might lose part of their alignment effectiveness when dealing with texts that have complex contextual dependencies or require strong semantic understanding." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I apologize in advance if I missed it, but I do not really see clear details about the training of the Denscore model. B.1 has details on the training objectives and the fact that captions are generated by LLaVA-Next, but beyond this I do not see other implementation details (dataset, other choices etc.), so it would be great if the authors could point me to this." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper tackles the crucial challenge of long prompt following in a very effective manner. Using a text encoder that can take the entire long prompt is a sound idea, and the Denscore preference model looks like a useful contribution in general. \nApart from this, the reward fine-tuning with the orthogonal decomposition and the gradient reweighting looks like a good idea to deal with the \"reward-hacking\" problem.\nFinally, the results also appear quite strong from the evaluations presented in the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method for enhancing the prompt following of text-to-image models specifically in the case of long prompts. 
The key contribution for tackling this problem is twofold: a) using a combination of CLIP and T5 encoders (as is becoming increasingly common these days e.g. SD3, Flux) b) the introduction of a preference model tailored for long prompts (Denscore) and applying reward fine-tuning with this Denscore model to enhance the prompt following of SD1.5 models for long prompts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "An important paper that is missed here is ELLA[Hu et al. 2024] for a couple of reasons. The first is that they propose replacing the CLIP encoder of SD1.5 with a T5-XL model and get significantly improved results (far superior numbers to those reported by Lavi-Bridge whose MLP adapter is used here). Therefore, this model might be a valid comparison (although the training cost of ELLA is a bit higher: 7 days with 8 A100s for SD1.5). Alternatively, the adapter provided by ELLA would have probably been a better alternative to the one used in the paper (from Lavi-Bridge). \n\nApart from the comparison/use of adapter, there's also DPG-Bench introduced in the paper which is a good benchmark for long prompt following (as compared to existing benchmarks like T2I-Compbench, DSG, TIFA etc.). Evaluating on DPG-Bench would be a useful addition since the 5k evaluation set of this paper is not fully understood and only a few models have been evaluated here. Additionally, from an evaluation standpoint, even on this 5k evaluation set, VQAScore[1] might be a good option to consider, since it uses a T5-XXL model which can take long prompts, and has shown some promising results for text-to-image evaluations. \n\nAnother aspect which is missing here is that all the experiments in this paper are conducted on SD1.5 which is a relatively older model, and there have been newer models in the past 2 years (e.g. SDXL). 
Therefore, it would have been nicer to also have results with any of the newer, more performant models, but I can understand that this might be a bit more computationally expensive (especially if the training has to be done at 1024 resolution). \n\nOverall, I do like the paper, but I believe that incorporating these aspects (especially strengthening the paper with additional evaluations) could improve the paper significantly. \n\n[1] Lin et al. \"Evaluating Text-to-Visual Generation with Image-to-Text Generation\", ECCV 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "(1) Please better explain why the proposed split-and-merge approach can address the long-text alignment issue.\n(2) Please provide the ablation study clearly." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "(1) Comprehensive Survey of Related Work.\nThe paper presents a thorough and comprehensive survey of existing work in text-to-image (T2I) diffusion models, demonstrating an impressive grasp of the field. By delving deeply into previous approaches and their limitations, the authors effectively set the stage for their contributions, clarifying the gaps their method aims to fill. 
This background provides readers with valuable context and insight into the evolution of T2I models, particularly in handling longer, complex textual inputs. The comprehensive nature of this survey also reinforces the authors' understanding of the field's current challenges and strengths, building confidence in the relevance and timeliness of the proposed approach.\n\n(2) Importance of the Problem and a Reasonable, Well-Motivated Solution.\nThe authors tackle a critical issue in T2I diffusion models: the difficulty of aligning generated images with longer text prompts. As the demand for complex, high-fidelity image generation grows, the ability to handle longer text inputs accurately is essential. The segmentation approach, paired with decomposed preference optimization, offers a well-motivated solution to this problem. Segmenting long text into manageable parts allows for better processing within the confines of existing encoding models, while the decomposed preference optimization fine-tunes the alignment, addressing the unique challenges posed by long prompts. The design choices reflect a reasonable and methodical approach to tackling these limitations, and the paper articulates the rationale for each component clearly. This structured approach suggests the authors have carefully considered the problem’s nuances, offering a solution that is not only effective but also grounded in sound methodology.\n\n(3) Demonstrated Superiority over State-of-the-Art Models.\nOne of the paper's significant strengths is the demonstrated performance improvement over state-of-the-art models. Through rigorous experimentation, the authors show that their method surpasses current leading models like PixArt-α and Kandinsky v2.2 in T2I alignment, particularly for long-text prompts. By fine-tuning Stable Diffusion v1.5 with their approach, they achieve superior alignment, reducing overfitting while preserving text-relevant information in the generated images. 
This achievement underscores the potential of the proposed method to set a new benchmark for handling longer, more detailed textual inputs within T2I models. The improvement over established models validates the effectiveness of the segmentation and preference optimization strategy, indicating that this approach could meaningfully advance the state of the art in T2I diffusion modeling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new approach for long text inputs on text-to-image (T2I) alignment since the clip text encoding only allows 77 tokens. The authors address limitations of existing encoding methods like CLIP by proposing segment-level encoding, where long texts are divided and processed in parts to bypass input length constraints. They further introduce a decomposed preference optimization that separates alignment-related and non-alignment components of CLIP-based preference scores. By reweighting these components, the method reduces overfitting, achieving superior T2I alignment after fine-tuning Stable Diffusion v1.5, outperforming models such as PixArt-α and Kandinsky v2.2." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the proposed method solved an important issue, three major issues remain as listed below.\n(1) Limitations and Ambiguities in the Segmentation and Merging Methodology. The segmentation and merging technique proposed in this work introduces a unique approach to handling longer text inputs but raises questions regarding its effectiveness and generalizability. When text inputs exceed 77 tokens, this method still encounters limitations, as it is fundamentally restricted by the underlying model’s capacity to handle “long” sequences since the split-and-merge process does not solve the problem. 
This constraint is particularly concerning as longer text inputs are common in real-world applications and often essential to producing detailed and contextually accurate image generations. The current approach of segmenting and then merging these sections seems like a workaround rather than a robust solution to handling extended texts, which may inherently limit its scalability and versatility. Furthermore, the mechanics of how segmentation and merging affect the underlying model's cross-attention dynamics remain underexplored. Cross-attention is a critical component in the alignment process between text and image features, and segmenting inputs may disrupt this alignment, especially as certain semantic connections might be lost or diluted across segmented inputs. Investigating the cross-attention differences between the original, unsegmented approach and the segment-and-merge methodology could shed light on any distortions introduced by this technique. A more thorough analysis of cross-attention’s role here could help refine segmentation methods to better retain textual coherence and improve image alignment fidelity, ultimately benefiting downstream performance.\n\n(2) Dependency on an Outdated Baseline Model (Stable Diffusion v1.5):\nThe use of Stable Diffusion v1.5 as the primary evaluation model poses a significant limitation, given that the field has moved toward more advanced versions like SD-3 and SDXL. These newer versions incorporate improved architectures and training techniques, yielding enhanced performance, especially in terms of image quality and alignment with textual inputs. The reliance on an outdated model not only limits the relevance of the study’s results but also restricts the potential impact of the proposed method. 
Using v1.5 as the baseline reflects well on the approach’s applicability to older architectures, but it leaves unanswered questions about its efficacy on more sophisticated models that incorporate advancements in diffusion techniques, training scale, and multimodal alignment mechanisms. \nMoreover, maintaining SD-1.5 as a standard for comparison could inadvertently hold back progress within the research community. As models continue to evolve, it’s essential to align benchmark tests with the latest technologies to ensure that methods are relevant and that advancements reflect real-world capabilities. Preliminary results from newer models, such as SD-3, have demonstrated considerable improvements in T2I alignment, indicating that the proposed method may benefit even further from these architectural updates. Testing on newer models would better position the approach in the context of current technological standards, ensuring that it remains relevant and applicable as diffusion models evolve. Future work should include evaluations on SD-3 and SDXL to substantiate claims of superiority over other methods in a more current setting. The test of SD-3 with the prompt used in the first example of Fig. 1 is shown below.\nhttps://ibb.co/CWyKQTZ\n\n(3) Over-reliance on Long Prompt Training and Lack of Generalizability Testing.\nThe proposed method seems to rely heavily on training with long prompts, which could limit its flexibility and adaptability. While training on extended text inputs may enhance alignment for similar prompts, it raises concerns about the model's performance on shorter or more varied prompts. In real-world scenarios, prompt lengths and structures vary significantly, and a robust model should perform consistently across this spectrum. 
By focusing predominantly on long-prompt alignment, the current approach may overfit to specific input lengths, making it less effective for shorter or less detailed prompts where segmentation might not be necessary or where text segments are not sufficiently complex to benefit from this treatment.\nTo address this potential limitation, it would be valuable to conduct experiments that vary prompt lengths and structures systematically, assessing whether the model’s performance holds across different scenarios. Additionally, testing with alternative segmentation designs could reveal whether simpler or more complex methods yield better alignment. These experiments would enhance our understanding of how adaptable the proposed method is, providing insights into its generalizability and robustness. The community would benefit from such insights, as they could guide further development of segmentation-based approaches for T2I tasks." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper presents a segment-level encoding method and a decomposed preference optimization method to enhance the alignment of T2I diffusion models with long text inputs, outperforming existing models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024improving,\ntitle={Improving Long-Text Alignment for Text-to-Image Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ZK8zyIt7o},\nnote={under review}\n}" }, "abstract": { "value": "The rapid advancement of text-to-image (T2I) diffusion models has enabled them to generate unprecedented results from given texts. However, as text inputs become longer, existing encoding methods like CLIP face limitations, and aligning the generated images with long texts becomes challenging. 
To tackle these issues, we propose a segment-level encoding method for processing long texts and a decomposed preference optimization method for effective alignment training. For segment-level encoding, long texts are divided into multiple segments and processed separately. This method overcomes the maximum input length limits of pretrained encoding models. For preference optimization, we provide decomposed CLIP-based preference models to fine-tune diffusion models. Specifically, to utilize CLIP-based preference models for T2I alignment, we delve into their scoring mechanisms and find that the preference scores can be decomposed into two components: a text-relevant part that measures T2I alignment and a text-irrelevant part that assesses other visual aspects of human preference. Additionally, we find that the text-irrelevant part contributes to a common overfitting problem during fine-tuning. To address this, we propose a reweighting strategy that assigns different weights to these two components, thereby reducing overfitting and enhancing alignment. After fine-tuning $512 \\\\times 512$ Stable Diffusion (SD) v1.5 for about 20 hours using our method, the fine-tuned SD outperforms stronger foundation models in T2I alignment, such as PixArt-$\\\\alpha$ and Kandinsky v2.2." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Long Text Alignment", "Diffusion Models", "Preference Optimization", "Text-to-Image Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c762592774943e8346fc43f4d1e72c6e5d63a142.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/368526d15ccce25e3b427a4fb654b35c2e5be754.zip" }, "title": { "value": "Improving Long-Text Alignment for Text-to-Image Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2ZTnALzLyX
MotifExplainer: a Motif-based Graph Neural Network Explainer
main
Active
Instance-level explanation;Graph Neural Network;Motif
interpretability and explainable AI
3;3;3;5
4;4;4;4
2;2;1;2
2;2;1;3
1;3;1;3
3.5
4
1.75
2
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How are the most important motifs determined, and which motifs were defined and used as explanations in the experiments?\n- In Algorithm 1, where does h originate?\n- The proposed model includes motif extraction in the efficiency study, which is generally quite slow. How can it outperform existing models in speed? Could you also provide a time complexity analysis?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Providing post-hoc explanations is crucial for training trustworthy GNNs.\n - Using motifs can be a valuable approach for interpretability, offering substantial potential impact.\n - The proposed method’s utility is supported through experiments on a range of datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a GNN explainer that uses motifs as the unit of explanation. By decomposing representations based on extracted motifs, it produces subgraph explanations. The proposed approach demonstrates its effectiveness across various datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is unclear how this approach improves over existing subgraph-based explanation models, such as GLGExplainer [2].\n - The paper would benefit from comparisons with more recent XAI methods, such as D4Explainer [1] and MixupExplainer [3], along with subgraph-based explanation methods like GLGExplainer [2]. The most recent baseline in the experiment section of this paper was published in 2021. \n\nReferences:\n\n[1] Chen et al., \"D4Explainer: In-distribution Explanations of Graph Neural Network via Discrete Denoising Diffusion,\" NeurIPS 2023.\n[2] Azzolin, \"Global Explainability of GNNs via Logic Combination of Learned Concepts,\" ICLR 2023.\n[3] Zhang et al., \"MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation,\" KDD 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The questions are listed in paper weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. 
MotifExplainer focuses on statistically significant motifs rather than individual nodes or edges, providing more human-understandable explanations by highlighting recurring and functionally relevant substructures within graphs.\n\n2. By reducing the search space to motifs rather than all possible subgraphs, MotifExplainer is computationally more efficient, making it suitable for dense or large-scale graphs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces MotifExplainer, a novel method for explaining Graph Neural Networks (GNNs) by identifying important motifs within a graph. MotifExplainer utilizes domain-specific motif extraction rules to identify these recurring substructures, creating motif embeddings through a pre-trained GNN’s feature extractor.\nIn graph classification, MotifExplainer aggregates motif embeddings to create a new graph embedding, while in node classification, it focuses on motifs that affect a specific node’s embedding. An attention layer highlights the most relevant motifs for predictions, aiming for more interpretable, human-understandable explanations. The approach is more efficient than subgraph-based methods by reducing the search space, and experiments show it provides high-quality explanations with improved interpretability and computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motif-based explanation approach is already a well-known method, with other papers[1,2,3,4] actively utilizing motifs for explainability. This paper needs to demonstrate its unique advantages and the necessity of its approach compared to these previous works.\n\n- [1] Chen, Jialin, and Rex Ying. \"Tempme: Towards the explainability of temporal graph neural networks via motif discovery.\" Advances in Neural Information Processing Systems 36 (2023): 29005-29028.\n- [2] Ding, Feng, et al. 
\"MEGA: Explaining Graph Neural Networks with Network Motifs.\" 2023 International Joint Conference on Neural Networks (IJCNN). IEEE, 2023.\n- [3] Perotti, Alan, et al. \"Graphshap: Motif-based explanations for black-box graph classifiers.\" arXiv preprint arXiv:2202.08815 (2022).\n- [4] Zhang, Shichang, et al. \"Motif-driven contrastive learning of graph representations.\" arXiv preprint arXiv:2012.12533 (2020).\n\n2. In this model, cycles are used to extract motifs without domain knowledge. However, the paper needs to justify the validity of the statement \"We consider combining cycles with more than two coincident nodes into a motif.\" Since motifs are central to this model, the model's validity hinges on how motifs are defined. The justification for the effectiveness of this approach in extracting motifs across various domains is insufficient.\n\n3. The authors claim that their model addresses efficiency issues when generating explanations for dense or large-scale graphs. However, in Section G, they conducted experiments only on the simplest molecular dataset, the MUTAG dataset, without testing on large-scale data. To demonstrate the model's practical utility, efficiency experiments should also be performed on larger graph datasets, such as the IMDB dataset used by the authors, as well as on even larger datasets.\n\n4. The model's performance heavily relies on motif extraction, which plays a critical role in explainability. It is necessary to show how performance varies with different motif extraction methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Performance differences depending on the motif-extraction algorithm need to be shown.\n\n2. How does the proposed motif-extraction algorithm, which focuses on cycle structures, manage to extract $NO_2$ and $NH_2$ as motifs?\n\n3. Compared to PGIB, the current SOTA method that shares a similar motivation (i.e., considering motifs) with this paper, what are the strengths of this paper?\n\n4. The threshold $\\sigma / t$ appears to have a significant effect on the final explanation; however, it also seems heuristic without guidance on how to determine it. How can we set this threshold when working with real-world datasets, and how can we evaluate whether the threshold is properly set?\n\n5. Not all GNN prediction models may be explicitly divided into two parts: an embedder and a predictor. How can this method be applied in such cases?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Considering motifs within the original graph makes the explanation more human-understandable.\n\n2. The paper is well-written and easy to follow.\n\n3. The method is intuitive and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an explainable method for graph data by providing explanations using motifs, which represent subgraphs within the original graph that play a critical role in prediction. 
To generate explanations for a pretrained GNN model, they first extract motifs from the original graph using off-the-shelf extraction algorithms (e.g., BRICS, RECAP) or a proposed extraction method that generalizes by only considering cycles and edges as motifs. They then determine the importance of each motif by training an attention weight for each one. In experiments, they present both qualitative and quantitative results to demonstrate the superiority of their explanation method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method appears highly dependent on the motif-extraction algorithm, which is not a contribution of this paper. For example, in the case of the MUTAG dataset, without domain knowledge, if the proposed motif extraction algorithm (a cycle-based extraction method) is used, $NH_2$ and $NO_2$ are unlikely to be identified as motifs that play a critical role in prediction. I strongly recommend that the authors show which motifs are extracted depending on the motif-extraction algorithm and compare the performance of the method accordingly.\n\n2. PGIB [1], which has a closely related and similar motivation to this paper, should be included. PGIB also considers subgraphs (i.e., motifs) to provide explanations, sharing the same motivation of emphasizing the importance of motifs for explaining graph data. 
The paper should elaborate on its strengths compared to PGIB and include PGIB as a baseline in the experiments.\n\n[1] NeurIPS'23, Interpretable Prototype-based Graph Information Bottleneck" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What is the metric for the results shown in Table 5? Why for MUTAG, the smaller the better, but for the other two the larger the better?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Proposed method is simple but effective. \n2. Good empirical results on Fidelity$-$ and Accuracy metrics compared with some old methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed a simple but effective method to explain GNNs at the instance-level. It first identifies motifs by domain knowledge, then feeds each motif to the GNN to obtain the motif embedding. Finally, they build an attention-based network to obtain the attention weights of each motif in each graph instance, which are identified as the importance of the motifs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The baselines and related work are old. 
Recent works such as [r1,r2,r3,r4,r5] should be discussed and compared. \n\n2. Presentation is very poor. Citation style needs to be corrected. \\citep and \\citet should be properly used. Some tables are confusing. See questions. \n\n3. To extract motifs, domain knowledge is required. This makes it impossible to apply the method to a variety of real-world tasks where the domain knowledge is unknown. \n\n4. The fidelity evaluated in this paper is different from the one used in the paper of SubgraphX. Why not use their metric? We'd like to see how MotifExplainer performs on common metrics. \n\n5. Feeding the motifs to GNNs and training an additional attention network will result in more computational cost. Can you also provide an efficiency analysis?\n\n[r1] Zhang, et al. Gstarx: Explaining graph neural networks with structure-aware cooperative games. Advances in Neural Information Processing Systems, 35:19810–19823, 2022. \n\n[r2] Rong, et al. \"Efficient gnn explanation via learning removal-based attribution.\" ACM Transactions on Knowledge Discovery from Data (2023).\n\n[r3] Lu, et al. \"GOAt: Explaining Graph Neural Networks via Graph Output Attribution.\" The Twelfth International Conference on Learning Representations, 2023. \n\n[r4] Li, et al. DAG matters! GFlownets enhanced explainer for graph neural networks. In The Eleventh International Conference on Learning Representations, 2023. \n\n[r5] Pereira, et al. Distill n’explain: explaining graph neural networks using simple surrogates. In International Conference on Artificial Intelligence and Statistics, pp. 6199–6214. PMLR, 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We proposed an instance-level explainer to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024motifexplainer,\ntitle={MotifExplainer: a Motif-based Graph Neural Network Explainer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2ZTnALzLyX},\nnote={under review}\n}" }, "abstract": { "value": "We consider the explanation problem of Graph Neural Networks (GNNs). Most existing GNN explanation methods identify the most important edges or nodes but fail to consider substructures, which are more important for graph data. One method considering subgraphs tries to search all possible subgraphs and identifies the most significant ones. However, the subgraphs identified may not be recurrent or statistically important for interpretation. This work proposes a novel method, named MotifExplainer, to explain GNNs by identifying important motifs, which are recurrent and statistically significant patterns in graphs. Our proposed motif-based methods can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs. Given an instance graph and a pre-trained GNN model, our method first extracts motifs in the graph using domain-specific motif extraction rules. Then, a motif embedding is encoded by feeding motifs into the pre-trained GNN. Finally, we employ an attention-based method to identify the most influential motifs as explanations for the prediction results. The empirical studies on both synthetic and real-world datasets demonstrate the effectiveness of our method." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Instance-level explanation", "Graph Neural Network", "Motif" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/144875c0c9b7b55fe21573f1a6293b6dc9c475b0.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MotifExplainer: a Motif-based Graph Neural Network Explainer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2aL6gcFX7q
Understanding Data Poisoning Attacks for RAG: Insights and Algorithms
main
Active
Safety; Retrieval
other topics in machine learning (i.e., none of the above)
3;3;5;5
3;4;3;3
2;2;3;3
2;2;2;2
2;2;3;3
4
3.25
2.5
2
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The attacks and defenses to RAG are an active research topic, given RAG is used in many real-world applications. Additionally, existing attacks are summarized in the paper. \n\n2. Multiple attacks on RAG are considered.\n\n3. The analysis made in the paper is interesting. For instance, Figure 1 shows some empirical evidence to verify the developed theory." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies both defenses and attacks to retrieval-augmented generation, which has been used in many applications. The proposed attack and defense are based on the observation that poisoning attacks tend to occur along directions for which clean data distribution has small variances." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. One limitation of the method is that the assumption can be strong. For instance, it is assumed that adversarial query has a different distribution from normal query. However, in practice, an attacker may select normal queries as target queries. 
In this scenario, the distribution of the adversarial query would be the same as that of the target query. This assumption may hold for certain attacks. The authors may consider narrowing down the scope, i.e., focusing on the scenarios where the adversarial query has a different distribution from the target query. \n\n2. Assumption 1 is not very clear. How is the distance between two texts measured? The authors may consider adding more explanations to make it easier for readers to understand. Also, Assumption 1 states that the distance between two texts is bounded, which may not be informative, as it may hold for two arbitrary texts in practice. \n\n3. The proposed defense may influence the utility of RAG. For instance, if new knowledge is added for a query, it can be rejected if it is substantially different from clean texts in the clean data corpus. In the experiments, it is shown that the false positive rate is very high. Is it because the clean documents are irrelevant to the protected queries? It would be helpful to perform a comprehensive analysis of the proposed defense's influence on the utility of RAG systems. One naive defense is to reject all documents whose similarities with protected queries (e.g., embedding vector similarity) are high. The authors may consider comparing with some baselines to demonstrate the effectiveness of the proposed defenses. Additionally, the evaluation in Section 5.2 for the proposed attack is very limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Add more related references.\n2. compare their method to existing works like [3].\n3. Conduct more experiments regarding the proposed attacks.\n4. Explain the performance of the proposed attack.\n\nPlease find more details in the aforementioned 'Weaknesses' part.\n\nPS: I am willing to increase my score if the authors can (partly) address my concerns." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe authors attempt to give a deeper understanding and theoretical analysis of existing attacks. It should be encouraged.\n2.\tThis is a well-written paper. The definitions of symbols and the overall flow are clear.\n3.\tThe proposed defense is simple yet highly effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the vulnerability of Retrieval-Augmented Generation (RAG) systems to data poisoning attacks, where adversaries manipulate the retrieval corpus to influence model outputs. It reveals that effective poisoning occurs along low-variance directions in the clean data distribution, allowing attackers to insert poisoned data that stealthily alters retrieval results. The authors propose a new defense metric, Directional Relative Shifts (DRS), to detect these poisoned entries by examining shifts along susceptible directions. Additionally, they introduce an advanced attack algorithm that regularizes DRS values, making poisoned data harder to detect. Empirical tests confirm the effectiveness of DRS in various RAG applications, demonstrating the need for robust defenses." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing some references.\n- Line 65: The authors should provide references for perplexity-based filters (e.g., [1]).\n- Line 143-153: The authors should also mention existing attacks against (e.g., [2]).\n2. There has been some work discussing the characterization of poisoned samples. In particular, the proposed method (i.e., DRS) is similar to [3] to some extent. The authors should compare their method to existing works.\n3. The authors only use AgentPoison as an example to demonstrate the effectiveness of the proposed attack. The authors should conduct more extensive experiments on all discussed attacks to verify its generalizability.\n4. According to Section 5.2 (Table 5), the performance of the proposed attack is limited.\n5. The authors should directly place the appendix after the references in the main document.\n\n\nReferences\n1. Onion: A Simple and Effective Defense against Textual Backdoor Attacks.\n2. Targeted attack for deep hashing based retrieval.\n3. Spectral Signatures in Backdoor Attacks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. **Clarification on Theoretical Basis** -- Could you provide a more rigorous theoretical explanation for why certain low-variance directions are more susceptible to poisoning attacks in DRS? 
A deeper analysis would help clarify the underlying vulnerabilities exploited by attackers.\n2. **Defense Scope and Practicality** -- Given that the defense currently focuses on protecting a specific subset of pre-selected queries, how would DRS perform in scenarios where the entire query space needs protection? Have you considered evaluating DRS’s effectiveness without pre-selecting queries, to simulate more realistic defensive conditions?\n3. **Lack of Attack Success Rate Comparison** -- In the evaluation of the proposed “new” attack algorithm, the paper only presents its detection rate under the DRS defense. Could you provide a comparison of the attack success rates between the new algorithm and traditional attacks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Innovative Approach** -- The proposed DRS defense is novel in its focus on low-variance directions to detect adversarial data shifts. This approach, within the experimental settings of the paper, demonstrates defensive effectiveness against poisoning attacks.\n2. **Comprehensive Evaluation** -- This paper provides extensive experiments in multiple RAG setups, such as autonomous driving and medical Q&A, confirming the generalizability of DRS across diverse applications.\n3. **Insightful Theoretical Contributions** -- The theoretical analysis connecting attack effectiveness to data distribution characteristics (specifically low-variance directions) offers valuable insights, potentially influencing future defenses in retrieval systems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates vulnerabilities in RAG systems due to adversarial data poisoning attacks. 
The authors analyze how specific data characteristics affect attack success, proposing a new defense method, Directional Relative Shifts (DRS), which detects poisoned data by monitoring shifts in directions with low data variance. They also introduce a stealthier attack algorithm that minimizes DRS to evade detection. Experimental results indicate that DRS demonstrates strong defense performance, though its effectiveness is somewhat reduced against the proposed attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Sparse Theoretical Explanation** -- While DRS’s foundation on variance shifts is intuitive, a deeper theoretical analysis could further clarify why certain dimensional shifts are more vulnerable. This would strengthen the defense’s theoretical underpinnings.\n2. **Unrealistic Defense Assumptions** -- The defense method assumes prior knowledge of a specific subset of queries that need protection from poisoning attacks. In real-world applications, defenders typically do not have knowledge of which specific queries might be targeted, and a practical defense would need to offer broad protection across all possible queries. This limitation reduces the generalizability and practicality of the proposed DRS-based defense method.\n3. **Unrealistic Assumption** -- In Section 3.1, the authors illustrate their attack method with an example where, in a knowledge base about food, an adversarial query about mathematics is used to avoid retrieving clean documents. This assumption is unrealistic, as it does not reflect typical user behavior—users are unlikely to ask irrelevant questions, like mathematics queries, in a food-related knowledge base context. This reduces the practical applicability of the assumptions underpinning the theoretical insights.\n4. 
**Inaccurate Description of Experimental Results** -- In Figure 1, the authors claim that \"we can observe that the attack success rates of Ap are higher than BadChain and AutoDan.\" However, the figure only shows relative changes in certain dimensions and does not explicitly provide data on the actual success rates of each attack. This discrepancy between the description and the figure may mislead readers and reflect a lack of rigor in interpreting experimental results.\n5. **Limited Innovation in Attack Method** -- Although the paper claims to develop a new attack algorithm, it essentially modifies existing attack methods by adding a regularization term based on the proposed defense metric (DRS). This adjustment is an incremental improvement rather than a substantive innovation. Moreover, the effectiveness of this “new” attack is limited, as it only partially reduces the DRS defense success rate without significantly overcoming the defense." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The analysis and observations of current poisoning attacks on RAG are novel and interesting.\n\nThe paper considers four attack settings to demonstrate the effectiveness of the defense methods, offering a comprehensive and thorough evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors conduct a comprehensive analysis of data poisoning attacks on RAG. Specifically, they provide a framework to analyze attacker objectives. They observe that more effective attacks tend to result in larger relative shifts along directions with smaller variances. Based on this observation, the authors design a new filtering method to defend against poisoning attacks. Additionally, they introduce a regularizer to bypass the new detection method. Through experiments, they demonstrate the effectiveness of both the new defense and attack strategies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major concern: I am uncertain about the reliability of DRS. For example, if the question is, \"Who is the OpenAI CEO?\" I would expect the embedding of a clean document (\"The CEO of OpenAI is Sam Altman\") to be similar to that of a poisoned document (\"The CEO of OpenAI is Elon Musk\"). I am unsure whether DRS can effectively handle such an attack.\n\nThe clarity of this paper needs improvement.\nSome examples: \n1. In Figure 1, what is the Y-axis?\n2. 
In Section 2.1, the attacker’s capability is described as \"only injecting poisoned data (e.g., by creating a new Wikipedia page).\" However, in Section 5.1.2, the setting appears to change, with the retriever itself being backdoored.\n3. In Section 5.1.1, there is no description of the adversarial query.\n4. In Section 5.1.1, the statement \"For each attack method, we generate 300 poisoned data samples\" is unclear. Does \"poisoned data samples\" refer to poisoned documents?\n\nIf I understand correctly, DRS also requires a set of clean samples to compute the threshold, but it is unclear how large and diverse this dataset needs to be." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Data Poisoning Attacks for {RAG}: Insights and Algorithms},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2aL6gcFX7q},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) have achieved success across various domains but also exhibit problematic issues, such as hallucinations. Retrieval-Augmented Generation (RAG) effectively alleviates these problems by incorporating external information to improve the factual accuracy of LLM-generated content. However, recent studies reveal that RAG systems are vulnerable to adversarial poisoning attacks, where attackers manipulate retrieval systems by poisoning the data corpus used for retrieval. These attacks raise serious safety concerns, as they can easily bypass existing defenses. In this work, we address these safety issues by first providing insights into the factors contributing to successful attacks. In particular, we show that more effective poisoning attacks tend to occur along directions where the clean data distribution exhibits small variances. Based on these insights, we propose two strategies. 
First, we introduce a new defense, named DRS (Directional Relative Shifts), which examines shifts along those directions where effective attacks are likely to occur. Second, we develop a new attack algorithm to generate more stealthy poisoning data (i.e., less detectable) by regularizing the poisoning data’s DRS. We conducted extensive experiments across multiple application scenarios, including RAG Agent and dense passage retrieval for Q&A, to demonstrate the effectiveness of our proposed methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Safety; Retrieval" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c87c9dfc8484cd932943d922a67324c9d0046f54.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/eff7c7323ac56ed4c13a3f2e3efd1e08f2dbf6c6.zip" }, "title": { "value": "Understanding Data Poisoning Attacks for RAG: Insights and Algorithms" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2bEjhK2vYp
SSLA: A Generalized Attribution Method for Interpreting Self-Supervised Learning without Downstream Task Dependency
main
Active
Interpretability;Attribution;Self-Supervised Learning
interpretability and explainable AI
3;3;5;5
4;4;3;2
2;2;2;2
2;2;2;3
3;3;3;3
4
3.25
2
2.25
3
-0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Figure 2, why do we have a full R^n shape for a0, a1, ..., a_T?\n- There seems to be no reason to separate the snowflake and the light blue arrow in Figure 2." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The prerequisites are designed to resolve the problem caused by other factors, and the method is designed to reflect this spirit.\n- The diagrams are clear and help readers understand the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel attribution algorithm for feature-level attribution on self-supervised learning. Compared to other feature-level attribution methods, this method is designed to meet prerequisites that the interpretation should not rely on 1) downstream tasks, 2) other samples (other than the augmentation), and 3) model architectures. The authors present some experiments to justify that the new method (SSLA) is effective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am not sure if the prerequisites can be widely accepted by the community.
For example, what is the downside (empirically / theoretically) if a downstream task is considered during the attribution process?\n- Lack of comparison between different attribution methods. One interesting problem could be how SSLA's results differ from those of other methods that rely on downstream tasks.\n- Minor presentation suggestion:\n - Equations 1 and 2 seem to be a little redundant.\n- I am willing to raise my rating if the effectiveness of SSLA is shown (at least) to be correlated with some traditional evaluation methods (on downstream tasks)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* **Comparison with Existing Methods:** Have you considered adapting existing attribution methods like Integrated Gradients or Grad-CAM to SSL settings for comparison? How does SSLA perform relative to these methods?\n* **Validation of Evaluation Framework:** How have you validated the effectiveness of your proposed evaluation framework? Have you conducted any studies or experiments to show that it correlates with human intuition or ground truth attributions?\n* **Testing on Diverse Architectures:** Given that experiments are only conducted with ResNet-50, have you tested SSLA on other architectures?\n* **Handling SSL Randomness:** How does SSLA account for the randomness inherent in SSL methods, such as stochastic data augmentations?
Does this randomness affect the stability of the attribution results?\n* **Computational Overhead:** What is the computational cost of SSLA compared to standard inference? Is it feasible to apply SSLA to large-scale models and datasets?\n* **Generalization to Other Domains:** Can SSLA be applied to SSL models in domains other than computer vision?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* **Novel Focus on SSL Interpretability:** The paper addresses an important and under-explored area—the interpretability of SSL models without reliance on downstream tasks or specific architectures.\n* **Clear Prerequisites:** The authors clearly outline three prerequisites for SSL interpretability, providing a solid foundation for their approach.\n* **Architecture-Agnostic Approach:** SSLA is designed to be independent of specific neural network architectures, potentially making it broadly applicable across different SSL models.\n* **Recognition of Evaluation Challenges:** The authors recognize the limitations of traditional interpretability evaluation methods in the context of SSL and attempt to propose a new framework tailored to SSL tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the interpretability of SSL models, focusing on the challenge that existing interpretability methods often rely on downstream tasks or specific model architectures. To overcome these issues, the authors propose three fundamental prerequisites for SSL interpretability:\n1. The interpretation should not introduce information from downstream tasks.\n2. The interpretation process should not introduce samples other than the current sample.\n3.
The interpretation process should not be restricted to specific model architectures.\n\nBased on these prerequisites, they introduce the Self-Supervised Learning Attribution (SSLA) algorithm. SSLA redefines the interpretability objective by introducing a feature similarity measure. \nThey also propose a new evaluation framework tailored to SSL tasks, arguing that traditional interpretability evaluation methods are impractical due to the absence of explicit labels and suitable baselines in SSL settings. Experiments are conducted using five representative SSL methods (BYOL, SimCLR, SimSiam, MoCo-v3, MAE) on the ImageNet dataset with ResNet-50 as the backbone. They compare SSLA against a random masking baseline, demonstrating that SSLA can more effectively identify important features that influence the SSL model's representations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **Insufficient Empirical Evaluation:** The experimental evaluation is limited. The authors only compare SSLA to a random masking baseline. There are no comparisons with other existing attribution methods adapted to SSL, making it hard to gauge the effectiveness of SSLA.\n* **Limited Dataset and Model Diversity:** Although experiments are conducted on the ImageNet dataset using ResNet-50, the evaluation lacks diversity in both datasets and model architectures. The claim that SSLA is architecture-agnostic is not fully supported without experiments on different architectures.\n* **Evaluation Methodology Concerns:** The proposed evaluation framework is novel but not thoroughly validated. The authors argue that traditional evaluation methods are unsuitable for SSL interpretability but do not provide sufficient empirical evidence or theoretical justification. It is unclear whether the metrics used effectively measure interpretability in SSL contexts." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper proposed a novel feature attribution method for SSL model that does not rely on downstream tasks. \n2. The claim made in the paper is supported by both theoretical derivations and experiments. \n3. The paper is well written and easy to follow. The discussion of prerequisites of an attribution method for SSL may spark interesting discussions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes SSLA, a feature attribution method for self-supervised learning (SSL) tasks. In particular, the method is designed without dependency of downstream tasks. The method starts by defining the usefulness of SSL model as its ability to preserve representation of data after transformation. Then it addresses the significance of features by attributing this usefulness to features iteratively. The paper then conducts feature masking experiment to demonstrate the effectiveness of the method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The paper relies on the independence of downstream tasks, which makes the comparison between this method and existing methods difficult. Hence, it is difficult to assess the effectiveness of this method.\n2. The derivations of the theorems rely heavily on first-order approximation. Though it is common, the paper does not provide an analysis of error bounds, which downgrades the trustworthiness of the method.\n3. The two main components of the method lack motivation. The first one is using the cosine similarity of features before/after transformation as a measure of the usefulness of the SSL model. The correlation (or even causality) between this and \"SSL as learning representation\" is not clear. The second one is the iterative method. The authors may consider justifying why we need an iterative method to attribute the importance.\n4. Although the paper proposes the method to be independent of downstream tasks, its evaluations still rely on downstream tasks, which seems counter-intuitive (Lines 179-180). Moreover, since the evaluation is dependent on downstream tasks, the authors may consider comparing their method to other SSL attribution methods that rely on downstream tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**Reviewing summary**\n- As listed in the weaknesses, I think the authors did not incorporate divergence into their consideration, even though it can be regarded as the most critical component of contrastive self-supervised learning, which makes their criterion sound unreasonable. Despite their pointing out the problematic aspects of interpreting SSL through downstream tasks, this results in my score of 3." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- That the additional information introduced by the prediction head and downstream tasks may influence the interpretability of SSL is a crucial and challenging breakthrough point.\n- The three fundamental prerequisites for the interpretability of SSL proposed by this paper sound reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper indicates that the additional samples introduced from downstream tasks would impede the interpretability of Self-Supervised Learning (SSL). To tackle this issue, the authors try to propose a new interpretability objective by introducing a feature similarity measure, decoupling the interpretability process from the reliance on downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Despite the authors indicating a potential weakness for the interpretability of SSL, the alignment ability for data augmentation is just one side of current self-supervised learning.
The other component, which is often intuitively regarded as an extra design to prevent training collapse, is essentially there to ensure that the representation divergence is sharp enough to cluster the data distribution by latent categories. More details can be found in [1] and [2]. Therefore, evaluating the related influence of variables only through the extent of augmentation invariance is quite biased.\n\n - [1] Awasthi, Pranjal et al. “Do More Negative Samples Necessarily Hurt in Contrastive Learning?” International Conference on Machine Learning (2022).\n - [2] Huang, Weiran, Mingyang Yi, Xuyang Zhao, and Zihao Jiang. \"Towards the generalization of contrastive self-supervised learning.\" arXiv preprint arXiv:2111.00743 (2021)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024ssla,\ntitle={{SSLA}: A Generalized Attribution Method for Interpreting Self-Supervised Learning without Downstream Task Dependency},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2bEjhK2vYp},\nnote={under review}\n}" }, "abstract": { "value": "Self-Supervised Learning (SSL) is a crucial component of unsupervised tasks, enabling the learning of general feature representations without the need for labeled categories. However, our understanding of SSL tasks remains limited, and it is still unclear how SSL models extract key features from raw data. Existing interpretability methods are heavily reliant on downstream tasks, requiring information from these tasks to explain SSL models. This reliance blurs the line between interpreting the SSL model itself and the downstream task model.
Moreover, these methods often require additional samples beyond the target of interpretation, introducing extra information that complicates the interpretability process.\nIn this paper, we propose three fundamental prerequisites for the interpretability of SSL tasks and design the Self-Supervised Learning Attribution (SSLA) algorithm that adheres to these prerequisites. SSLA redefines the interpretability objective by introducing a feature similarity measure, reducing the impact of randomness inherent in SSL algorithms, and achieving more stable interpretability results. Additionally, SSLA abstracts the interpretability process, making it independent of specific neural network architectures. To the best of our knowledge, SSLA is the first SSL interpretability method that does not rely on downstream tasks. We also redesign a more reasonable evaluation framework and establish baselines for comparative assessment. The source code for our implementation is publicly available at https://anonymous.4open.science/r/SSLA-EF85." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Interpretability", "Attribution", "Self-Supervised Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/91aaf490498c88cc562109f3da84ab361eea97b0.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SSLA: A Generalized Attribution Method for Interpreting Self-Supervised Learning without Downstream Task Dependency" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2bIQBDSfRk
DenseAttention: No-Compromise Exact All $N \times N$ Interactions Algorithm with $O(N)$ Space and Time Complexity
main
Active
self-attention;deep learning;transformer architecture;nlp;efficient transformers;DenseAttention;long context;Long Range Arena
foundation or frontier models, including LLMs
3;3;3;3
4;4;4;3
2;2;2;2
2;1;2;2
2;2;2;3
3
3.75
2
1.75
2.25
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Given the popularity of Transformer models, the topic of their efficiency becomes more and more important. The proposed solution is also well-motivated.\n2. The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors propose a new architecture, DenseAttention Network, which could potentially replace Transformer. The motivation of this new design is to alleviate the quadratic time complexity in sequence length as well as the memory-bound operations in the vanilla Transformer (e.g. softmax and layer normalization). Specifically, they propose to linearize the original multi-head attention layer with naive matrix multiplications. To stabilize the forward pass, the inputs of each layer are scaled to have the same $\\ell_\\infty$ norm. The authors further propose a replacement for the rotary embedding and sliding window that is compatible with their approach.\n\n**Strengths**\n1. Given the popularity of Transformer models, the topic of their efficiency becomes more and more important. The proposed solution is also well-motivated.\n2. 
The paper is well-written and easy to follow.\n\n**Weaknesses**\n1. The major flaw of this paper is the thin experiments.\n2. The paper lacks several important previous papers.\n\nIn summary, this paper proposes a potential solution to accelerate Transformer models. However, the experiments are not convincing enough. Therefore, I would recommend a clear rejection unless there is further evidence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The major flaw of this paper is the thin experiments. The Transformer model is known to perform well on a wide range of tasks. In addition, it also demonstrates a promising scaling effect. On the other hand, this paper only contains limited experiments: (1) the testbeds are limited. Currently, the only benchmark is Long Range Arena (for causal LMs); (2) the baselines are limited. There is only one Transformer model that serves as the baseline without specifying how the model is trained; (3) the scaling effect is not studied. The authors do not analyze how the parameter number affects the results. It is unclear if the method could be scaled to larger-scale applications.\n2. The paper lacks several important previous papers. In fact, linearizing attention has been heavily studied before [1, 2, 3]. This paper has no comparisons or discussions. 
\n\n[1] Random Feature Attention, ICLR 2021 \\\n[2] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention, ICML 2020 \\\n[3] Transformer Dissection: A Unified Understanding of Transformer's Attention via the Lens of Kernel, EMNLP 2019" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1) The paper proposes an interesting approach to eliminate projection matrices in attention, considering that the multiplication $W_QW_K^{\\top}$ can be replaced with a single parameter, which I don't think exists in previous literature.\n2) The paper also proposes using local attention in conjunction with the proposed attention function." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a DenseAttention Network (DANet), which addresses inefficiencies in the Transformer architecture, especially its high memory and computational cost - O(N^2), with respect to sequence length - N. DANet uses a new MaxNormActivation and Cosine Relative Positional Embeddings, capturing N x N interactions at O(N) space and time complexity. 
Experimental results demonstrate that DANet outperforms FlashAttention on long sequences on the Long Range Arena benchmark." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The paper lacks comparison against mamba in experiments. Mamba-I and Mamba-II are fast approaches for long range sequence modeling. \n2) This is not the first paper which captures NXN correlations with O(N) complexity. Linear attention [1] uses linear approximations of attention. A fair comparison with this paper would be great.\n3) The mathematical writing in this paper is inconsistent. Here are some instances:\n\n\nBetter Notation:\n1. Standard operators max, var should be mentioned in times new roman using \\DeclareMathOperator\n2. Defined operators such as MaxNormActivation can be put in \\text{MaxNormActivation}, as done in 200-204. \n3. Line 240: has a typo open bracket.\n4. Line 284: << should be \\ll.\n5. Line 246: why is fp16 and bf16 bolded?\n\nMajor readability issues:\n1. Inconsistent definition of $X_i$ in line 247, 300 and 311.\n\nIf the above issues are resolved I am willing to increase my score.\n\n[1] Katharopoulos, Angelos, et al. \"Transformers are rnns: Fast autoregressive transformers with linear attention.\" International conference on machine learning. PMLR, 2020." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Why do you still consider DANet as Transformer-based? The only part of transformers that is left is the Feedforward layers, which are now inside the block.\n- You train your model with 4 stages, but the original BERT was trained on 2 stages. Could you also train the baseline in the same way?\n- On page 10, line 494 (key highlights) you hint at the fact that DANet outperforms the baseline due to a soft-capping of output logits that you use. Why did you not try this for the baseline as well? \n- L.400: The authors find that local attention is effective. Do you use the Transformer Self-Attention here? An ablation on this would be interesting.\n- Why do you use float16 in the experiments?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The usage of MaxNormActivation seems to be well motivated by a theoretical variance analysis. However, this could also be supported empirically with experiments.\n- Code provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel neural network architecture called DenseAttention Network as an alternative to Transformer networks with Self-Attention. \nThe core innovation of DANet is the novel DenseAttention mechanism, which removes Softmax and projection layers from the original Self-Attention.\nAdditionally, the authors modify the surrounding network block: They replace Layernorm or RMS with their novel “MaxNormActivation”, they remove some skip connections and modify the Rotary Positional Embeddings. 
\nThe paper performs experiments on the Long Range Arena Benchmark and masked language modeling with BERT-large sized models. \n\nThe paper claims to outperform a BERT baseline on masked language modeling. \nIt claims to set a new SOTA on “Transformer-based” models on LRA and to outperform 4 of 6 State Space model baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In general I believe this paper is not ready for publication as there are several weaknesses in terms of the new architecture, the experiments and the presentation in the paper. My main concerns are summarized below: \n\n- A related work section is missing in which the authors put DANet in relation to other Linear Attention variants (e.g. GLA https://arxiv.org/abs/2312.06635 ), State space models (e.g. Mamba https://arxiv.org/abs/2405.21060) or other RNNs variants (e.g. xLSTM (https://arxiv.org/abs/2405.04517 ) or RWKV (https://arxiv.org/abs/2305.13048 )). Also a relation to embedding models other than BERT is missing, e.g. Monarch Mixer (https://arxiv.org/abs/2310.12109). \n- Since DANet seems to be a hybrid architecture (Section 3.3), also a relation to hybrid architectures (e.g. https://arxiv.org/abs/2402.19427, https://arxiv.org/abs/2406.07522) is interesting.\n- There are so many architecture changes (e.g. Layernorm, Positional Encoding, Block structure, Attention mechanism, Block order / hybrid variants) that leave the reader unclear of what brings performance gains. A careful ablation study could help here.\n- While the paper demonstrates large throughput benefits in the long context regime compared to Transformers, it has not been shown in the paper that DANet performs well in the long context regime.\n- Regarding Cosine RelPE: It is not clear why the authors made the modification to the original Rotary Positional Embedding. It seems to be motivated by efficiency gains, but this claim is not supported sufficiently. 
An experiment on this could help.\n- A conclusion is missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What is the specific motivation of designing CosineRelPE? \n\n- Is there anything that suggests that any new block (DenseAttention, CosineRelPE, MaxNormActivation) can generalize to other architectures and improve either performance or efficiency (while not degrading the other)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The MaxNormActivation block is potentially useful as a method to stabilize LLM training.\n- The empirical results in tables 1-3 show that DenseAttention has some promise empirically on long-context sequence modeling tasks as compared to the standard transformer, S4-V1, and BERT.\n- The efficiency results in table 4 show that it's possible to get out-of-the-box performance increases at long context compared to the usual BERT model. Specifically, all changes seem to be architectural, and no specialized kernels are needed to get better performance, as a result of using `torch.compile` and potentially fusing linear operations together." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a new architecture which they call DenseAttention Network, which is a variation of the standard transformer architecture which is specifically tuned to perform well on long sequences. The changes include:\n1) Take the softmax away in the attention block, writing $QK^{\\top}V$ instead of $\\mathrm{softmax}(QK^{\\top})V$, and using associativity to compute the matrix product in linear time. (They name this as DenseAttention mechanism/block.)\n2) Use a MaxNormActivation block instead of LayerNorm, which scales each token feature by its maximum absolute value.\n3) Use a novel positional embedding called Cosine RelPE, which is claimed to perform similarly but more efficiently computable than RoPE.\n4) For very long contexts, use a hand-rolled local attention implementation suited for DenseAttention.\nThey show that this architecture has general improvements over a basic transformer + RoPE implementation in Long Range Arena, both in performance (across a few tasks) and efficiency (more broadly, as the usual attention mechanism does not have linear-in-$N$ time complexity). It also shows improvements against S4-V1 in Long Range Arena, and BERT in terms of Masked Language Modeling." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The main mechanism behind DenseAttention, i.e., removing the softmax and using associativity to compute the product in linear-in-$N$ time, has been studied before; see for example [1]. It is acknowledged that the paper cites [1], but the paper suggests that the mechanism in [1] has poor efficiency; however DenseAttention is strictly a special case of the mechanism in [1] (using the notation of [1], set $\\phi$ as the identity mapping). So this claim does not make sense, and the novelty of DenseAttention seems limited.\n\n- The reasoning behind using MaxNormActivation seems lacking. 
In particular, since all norms are equivalent (i.e., bounded by each other up to multiplicative constants, possibly dependent on dimension) in finite-dimensional vector space, the boundedness of the maximum norm is equivalent to the boundedness of the $\\ell^{2}$ norm. So if the argument in the paper goes through, it should mean that $\\ell^{2}$ normalization should also work (then why not LayerNorm? But LayerNorm doesn't work, as reported in the paper, so something else is going on.). Although the MaxNormActivation is an interesting and potentially useful contribution, it may not work for the reason explained in the paper. Also there's a potential typo in the equation defining MaxNormActivation: it should be $\\frac{X_{i}}{\\max_{j}|X_{ij}| + \\epsilon}$ on the RHS (note the absolute value).\n\n- Not much motivation is given for the two other modifications, e.g., CosineRelPE and the local attention proposal - they seem to have a flavor of \"we tried it and it works,\" potentially with some ablation, and without context of why such an approach may or may not make sense or generalize to other architectures.\n\n- The results in Long Range Arena are promising insofar as they match up against a standard transformer and an SSM, but this may not be a fair comparison. Given that the authors start with a regular transformer and apply modifications to show improvement on long-context, they could also compare against more recent models specialized for long context. For example, the authors omit comparison with S5, whose numbers are publicly available on [PapersWithCode](https://paperswithcode.com/dataset/lra), as well as S4 V2 and a long list of other models benchmarked on Long Range Arena but not necessarily added there.\n\n- The result on efficiency compared to BERT also may seem to not be a fair comparison. BERT is trained with an encoder-only architecture, while DenseAttention Network is trained with a decoder-only architecture. 
A fairer comparison would pit DenseAttention Network against a regular decoder-only transformer (as well as BERT if desired, along with, say, an SSM), under the same experimental setting, and allow readers to observe trends in the different approaches as different scaling parameters vary.\n\n\n[1] Katharopoulos, Angelos, et al. \"Transformers are RNNs: Fast autoregressive transformers with linear attention.\" International conference on machine learning. PMLR, 2020." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel DenseAttention architecture which achieves 1) favorable computational efficiency; 2) linear time & space complexity by simplification and reduction of standard Transformer architecture." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024denseattention,\ntitle={DenseAttention: No-Compromise Exact All \\$N {\\textbackslash}times N\\$ Interactions Algorithm with \\$O(N)\\$ Space and Time Complexity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2bIQBDSfRk},\nnote={under review}\n}" }, "abstract": { "value": "The ubiquitous Transformer architecture suffers from two main bottlenecks: 1) low computational and memory efficiency, leading to suboptimal hardware utilization, and 2) quadratic time complexity with respect to sequence length $N$, making it slow and costly for large data contexts. We propose a novel DenseAttention Network architecture, a straightforward simplification of the standard Transformer block that addresses these issues and serves as a drop-in replacement for language modeling tasks. We eliminate memory-bound components in DenseAttention, including Softmax, masking, one skip connection, and both LayerNorms, as well as key, value, and output projection matrices, as they become redundant. Despite these removals, it maintains exact $N \\times N$ pairwise interactions between tokens. 
By exploiting the associativity of matrix multiplications, DenseAttention can be computed with $O(N^2d)$ or $O(Nd^2)$ time and space complexity, depending on the context. To handle the absence of Softmax and prevent numerical instability, we introduce MaxNormActivation at both ends of the Transformer block. We also devise Cosine Relative Positional Embeddings as a computationally efficient replacement for RoPE, and simple LocalAttention variations of the block to help the model focus on details in extremely long contexts. \n\nDenseAttention competes with FlashAttention in speed on small sequences and outperforms it by orders of magnitude on large contexts. We pre-train encoder language models on sequences up to 16K in length, which perform similarly or better than baseline BERT-large, while significantly improving speed and efficiency. Finally, we achieve state-of-the-art on the LRA benchmark among the Transformer-based architectures." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "self-attention", "deep learning", "transformer architecture", "nlp", "efficient transformers", "DenseAttention", "long context", "Long Range Arena" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/64dca02fa7dc74d3d2de185add6e0a45144496fe.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/78e4b43b1d574d7e27aaf0271c25bf639f64491b.zip" }, "title": { "value": "DenseAttention: No-Compromise Exact All $N \\times N$ Interactions Algorithm with $O(N)$ Space and Time Complexity" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2bWf4M5tRo
Enhancing Hallucination Detection with Noise Injection
main
Active
Hallucination Detection; Robustness
foundation or frontier models, including LLMs
3;3;5;5;5
5;4;4;4;4
2;2;2;2;2
2;2;2;3;2
2;2;3;3;3
4.2
4.2
2
2.2
2.6
-0.612372
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "No specific question from me. But my concerns are majorly stated in the previous section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Good logical flow and storytelling.\n* Clear presentation of experimental results and straightforward mathematical formulations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the potential of injecting noise into the intermediate layer outputs of LLMs to induce greater uncertainty when they are prone to hallucination." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Lack of theoretical justification for the noise injection approach: Although the injection method is simplistic, the authors do not clarify why they chose to sample noise from a uniform distribution with fixed mean and variance across LLMs. 
This choice raises concerns about the generalizability of the results.\n* No evaluation of statistical significance: The reported performance improvements with noise injection are marginal, and the absence of confidence intervals weakens claims regarding these improvements.\n\nOverall, I feel that this paper is still not ready for publication." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How do you extract the final answers from the long answer? How do you make sure it is always at the end? Do you do some sort of prompt engineering or few-shot prompting for this?\n- What is the accuracy of the model in greedy decoding?\n- Why are the results on GSM8K different in Tables 2 and 3? What is the difference in the setting? \n- \"For each dataset, we select the temperature within T = {0.2, 0.5, 0.8, 1.0} which optimizes the model accuracy on this dataset\" - on the validation dataset?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper touches on a critical issue in current LLMs. Any progress in error detection is critical to the field." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of detecting \"hallucinations\" in Large Language Models (LLMs). The study proposes a novel technique to improve hallucination detection by adding \"noise injection\" to intermediate layers of the model, creating an additional source of randomness during response generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper presents some notable weaknesses in both the presentation of content and in aspects of the methodology and experimental design. Below are specific areas of concern:\n\n- The review of related work is somewhat shallow. There is substantial literature on detecting hallucinations in models, yet this paper does not adequately differentiate its approach or clarify how it builds upon existing insights.\n- All experiments are conducted on a single model, which limits the generalizability of the conclusions. Testing across multiple models would strengthen the claims.\n\n## Intro:\n- The term \"hallucinations\" is only briefly defined as instances where a model generates “plausible yet incorrect responses.” However, it remains unclear if this term includes all model errors or just those based on plausibility. The paper does talk about plausibility further, leaving the reader uncertain about what qualifies as a hallucination.\n- You refer to Figure 7, which is in the appendix. Core results should be presented in the main paper, and anything you talk about in the intro is definitely core. Note that reviewers are not required to read the appendix, but in your case it was fundamental to understanding your results. This note is relevant for the rest of the paper as well.\n- We empirically validate the hypothesis in Figure 7 (a) -> how exactly does the figure validate your hypothesis? 
Readers need a step-by-step walkthrough to see how Figure 7(a) substantiates the hypothesis.\n\n## Section 2:\n\n- The definition of $f$ is a bit vague and, as a result, so is the method. The model's output is not a function of all of its hidden states, because each hidden state $l$ is a function of the previous hidden state $l-1$. I think that maybe you could say that if you talk about the residual stream that sums all hidden states (because later you talk about mlp output), but it is not very clear at this point of reading.\n- Because of that, it's not clear what happens when you replace $h_t^l$ with a noised version. Do you recompute $h_t^{l+1}$ to get a noised version or do you just noise the clean version? This needs to be clearly explained. If you add the noise to the MLP output which in turn simply goes to the residual stream, and you don't recompute the following MLPs in higher layers after adding noise, then this is just equivalent to adding noise K times (where K is the number of layers you noised) to the residual stream, without significance to the specific layers that are noised, because the unembedding layer simply takes the residual stream after the final layer.\n\n## Section 3:\n\n- Table 2 lacks information on statistical significance, including standard deviations and the number of seeds used for experiments. Additionally, there is no indication of the dataset size.\n- The statement, “This supports our intuition that incorrect answers are less robust to noise injection…” appears without prior context. While there is mention of hallucinations having higher entropy, there is no discussion that wrong answers may appear less often after noise injection. 
Why does this happen?\n- It was not clear to me why you need a separate section for GSM8K as experiments are later conducted across multiple datasets, making this section feel repetitive.\n\n## Section 4:\n\nThe paper lacks a clear presentation of noise boundaries and statistical significance tests, which raises concerns about the reliability of findings. The difference between the proposed methods and baselines is small, and it is unclear how significant these differences are. Only Figure 4 provides such comparisons for GSM8K, while other datasets are not covered.\n\nSome other typos etc.:\n- Links to figures/equations are broken.\n- Line 118: \"**an** uncertainty metric\"\n- Line 122 sentence is not grammatically correct\n- Line 289 \".,\"\n- Figure 7 caption: \"Rest of setup up follows Figure 7 (b)\" -> typo?\n\n\nI believe that all of these issues could be fixed in a revision (though perhaps not in the short time of the rebuttal period), and then it will be a valuable research paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is there any explanation why the performance is more significant only when combined with Answer Entropy?\n2. I like the results shown in Table 4, but I would appreciate it if the authors could provide more experiments in other datasets, such as CSQA or TriviaQA.\n3. I would like to see more perturbation-based methods. 
For example, what will happen if we perturb the input query for those sampling-based methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation for introducing randomness in the hidden layer is intuitive and makes a lot of sense. The paper is well-written and easy to implement.\n2. The concept of perturbing intermediate representations to enhance the separability between hallucinated and non-hallucinated generation is overall innovative.\n3. Extensive experiments are provided to demonstrate the effectiveness of noise injection in enhancing hallucination detection across various datasets and uncertainty metrics." 
However, this claim is not strongly substantiated by the results shown in Figure 3. A Pearson correlation of 0.67 does not clearly indicate a complementary relationship between these two sources of randomness. Even without introducing noise, drawing entropy with temperatures T=0.5 and T=1.0 will show similar positive correlations.\n3. The author introduced additional hyperparameters $\\alpha$, $\\ell_1$ and $\\ell_2$ to adjust the randomness of sampling. However, this comparison may be unfair, as performance could also be enhanced by optimizing parameters such as temperature T, top_P, and top_K.\n4. Theoretical insight is limited in explaining why perturbations at the hidden layer are more effective than output layer sampling for self-consistency based hallucination detection methods. In my opinion, using a larger temperature is essentially the same as modifying the feature space to increase randomness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper flows well with detailed explanations.\n- Ablation experiments are thorough and extensive.\n- The problem of Hallucination detection is crucial in recent LLM studies." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to inject noise in the intermediate representations to enhance hallucination detection. The method is mainly tested on Llama2 on 4 different datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- My main concern is the soundness of the experimental results. Although the authors have shown the std of experiments in Figure 4, this is only shown for GSM8K, the dataset with the greatest improvement. However, considering that the gain on the other three datasets is relatively small, I would like to see the std values for the other datasets too. Also, please conduct a t-test on the improvements.\n- The authors tested their method mainly on Llama2-13B-chat. Although the experiment on Mistral has been provided in Table 6, this is only done on GSM8K. I would like to see a full table of experiments on other datasets.\n- The message of Figure 2 (b) is somewhat unclear to me. I don't think the figures demonstrate better separability between non-hallucination and hallucination. Maybe a more fine-grained histogram would show a better picture?\n- (minor) There are some grammatical issues in writing. I suggest using Grammarly or ChatGPT to refine the manuscript.\n- (minor) There is no Figure 7 while the manuscript keeps referring to it. I'm assuming it should have been Figure 2, but please correct this.\n\nOverall, the paper is well written. However, my main concern is the significance and generality of the approach. If my concerns are resolved, I would be happy to adjust my scores."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The authors refer to Figure 7 multiple times throughout the text. I believe this is a typo, as there is no Figure 7. Should this be Figure 2 instead?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Perturbing intermediate layers seems to increase the uncertainty gap between instances where the model is correct and where it is not.\n- The authors make an effort to ablate their results, in particular to distinguish the noise effect induced by the intermediate vs. the last layer." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work builds upon the idea that the variability of LLM answers to a question is most pronounced when the LLM does not know the correct answer. By perturbing the intermediate LLM layers, they show this gap in variability tends to increase, facilitating the detection of hallucinations.\n\nThe work is largely empirical. Most of the results are shown for the GSM8K dataset, where the method appears to work best. On three other datasets, results are still positive but much more contained. Table 3 would benefit from reporting standard deviations over the multiple runs.
Right now it is not clear if the difference in entropy over CSQA, TriviaQA and ProntoQA is significant.\n\nI appreciate the insight this work brings in terms of showing that the epistemic uncertainty induced by perturbing intermediate layers can provide complementary effects to the aleatoric uncertainty induced by the last layer for the purpose of detecting hallucinations. However, considering the complications introduced - the method needs access to the intermediate layers of the model, it may be sensitive to the noise magnitude (the Appendix in this direction is not particularly extensive) and to which layers are perturbed - I wonder if the improvements are in fact worth the effort. \n\nI'd suggest the authors provide a comprehensive evaluation across many datasets, including the standard deviation of the results, to show that the method works robustly in multiple instances." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Results seem significant on GSM8K, less so on the other datasets. Standard deviations are missing. \n- It may be worth extending the analysis on the sensitivity to the noise magnitude to better gauge the robustness of the algorithm. In the main paper, the authors only use either no noise or noise magnitudes 0.01 and 0.05, and only for one dataset. In the Appendix, results for another dataset are presented, but at different noise magnitudes. It would be good to provide results for a sufficient range of noise magnitudes and all datasets."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Hallucination Detection with Noise Injection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2bWf4M5tRo},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) are observed to generate plausible yet incorrect responses, known as hallucinations. Effectively detecting such hallucination instances is crucial for the safe deployment of LLMs. Recent research has linked hallucination to model uncertainty, suggesting to detect hallucinations by measuring dispersion over answer distributions obtained from a set of samples drawn from the model.\nWhile using the model's next token probabilities used during training is a natural way to obtain samples, in this work, we argue that for the purpose of hallucination detection, it is overly restrictive and hence sub-optimal. Motivated by this viewpoint, we perform an extensive empirical analysis showing that an alternative way to measure uncertainty - by perturbing hidden unit activations in intermediate layers of the model - is complementary to sampling, and can significantly improve detection accuracy over mere sampling." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Hallucination Detection; Robustness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c5db3e2c89bbc9e42e483b75ee843a333bf91bb.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f258578d37912108820f6132db7efabe3199a7fe.pdf" }, "title": { "value": "Enhancing Hallucination Detection with Noise Injection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2bn7gayfz9
CTBench: A Library and Benchmark for Certified Training
main
Active
certified training;benchmark;open-source library
alignment, fairness, safety, privacy, and societal considerations
3;5;5;6
4;4;4;3
2;4;4;3
2;2;2;2
3;4;4;4
4.75
3.75
3.25
2
3.75
-0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. It is already well established by previous works that robustness training increases local smoothness. What is unique about the findings presented in this paper?\n2. It is also previously established that adversarially robust training methods tend to have higher sample complexity, and therefore are more likely to overfit (less regularization). Other than the choice of metric, what is unique about the findings in Section 5.4?\n3. Is there an explanation for why the model performs worse for certain corruptions? How will these results be affected if we use different L_p norms? For example, I would expect a model trained to be robust in the L_2 space to be more resistant to Gaussian noise and less resistant to salt and pepper noise." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper proposes a new benchmark for certified robustness methods for image classifiers.\n2. Authors implement several prominent certified robustness methods in a unified framework, thereby standardizing the implementations to facilitate future research.\n3. Furthermore, authors correct implementation mistakes and perform systematic hyperparameter tuning to fully realize the potential of all methods.\n4.
Authors present several interesting findings regarding the properties of certified robustness methods. For example, models trained using distinct methods have a high overlap in the examples they succeed and fail on, uncovering a sample-specific inherent difficulty level that can be leveraged to improve training. Additionally, these methods can boost OOD generalization for specific corruptions, and hurt generalization for others." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents CTBENCH, a standardized library and benchmark designed to fairly evaluate certified training algorithms for neural networks, addressing the inconsistency in previous evaluations due to varied training schedules, certification methods, and under-optimized hyperparameters. By testing all algorithms under consistent conditions with tuned hyperparameters, CTBENCH reveals that most certified training methods perform better than previously reported, setting new benchmarks. Through CTBENCH, authors uncover several interesting properties of models trained with certified methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Authors incorrectly state that the benchmark from Li et al. is not up to date as \"it reports 89% and 51% best certified accuracy for MNIST epsilon = 0.3 and CIFAR-10 epsilon = 2/255 in its evaluation, respectively, while recent methods have achieved more than 93% and 62%\". However, at the time of this review, the numbers on Li et al.'s leaderboard (https://sokcertifiedrobustness.github.io/leaderboard/) are even higher than 93% and 62%: they are 94.02% and 68.2%. Furthermore, the leaderboard toppers are defenses from 2019/2021. It appears that the authors might have pulled their numbers from a stale source.\n2. In order to be an improvement over the existing benchmark (of Li et al.), one important requirement is comparable or improved comprehensiveness.
Based on the results in the paper, the proposed benchmark is significantly less comprehensive than Li et al.'s in two important directions: (i) the number of defenses evaluated, and (ii) the number of diverse models used during evaluation. While I understand that the proposed work can be made more comprehensive by running more experiments, this is not the case currently and so is worth pointing out.\n3. Furthermore, as stated in the limitations section, the proposed benchmark only focuses on deterministic certified robustness in the L_infinity space, whereas Li et al.'s benchmark uses both deterministic and probabilistic certified methods, and covers all the popularly used norms in the literature (i.e., L_1, L_2, L_infinity). This further hurts the comprehensiveness of the proposed benchmark.\n4. Some of the findings presented in this paper are expected and already established by prior works (see Questions).\n5. The main contribution of the paper is a unified codebase (and benchmark) for prominent certified robustness methods. Even though the authors uncover several interesting findings while reproducing and tuning SOTA methods, the contributions of this paper are heavily empirical (not enough technical novelty). As such, this paper is much better suited for venues like TMLR that put emphasis on contributions of such nature." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Some questions I had while reading:\n\n-- Why do methods like TAPS and MTL-IBP achieve better accuracy while deactivating more neurons?\n\n-- Is there a theoretical framework to explain the relationship between neuron deactivation and robustness? \n\n-- Is there a way to understand and leverage the shared mistake patterns to improve certified training? Or is it natural that mistakes would overlap (similar to how mistakes overlap in natural training)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-- The paper is well-written and typeset well\n\n-- Tackles an important problem in the field: the inconsistent evaluation of different certified training methods. I think the field needed this kind of paper. \n\n-- It's not only a benchmark paper but also provides some analysis of certified model behavior: loss fragmentation (showing certified models reduce fragmentation compared to adversarial training), shared mistake patterns, model utilization metrics, and generalization performance (showing certified training provides benefits for certain types of corruptions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a benchmark for certified robust training. The goal is to standardize the hyperparameters, training schedules (& other training configurations) between competing methods in certified training. The purported advantages of newer methods are smaller when older baselines are given equivalent optimization and testing conditions. The work covers several popular approaches like PGD, IBP, and CROWN-IBP."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-- The novelty of the paper is limited since it's just focused on benchmarking existing methods. Certified robustness is a relatively new field and the field needs methods as much as unifying benchmarks. I do believe the lack of novelty is mitigated to an extent by the analysis provided in Section 5.\n\n-- I wonder about the sustainability of the benchmark since there are other leaderboards for adversarial training (e.g. RobustBench). Others may want to submit their work to an existing leaderboard rather than standardize to adopt your settings.\n\n-- I'm a bit confused about the purpose of the fragmentation experiments. Robust models lead to fewer flipped neurons in the presence of noise, but why should we care? This is after all expected given they are more robust in general to input noise. I believe these experiments may be valuable but the authors should articulate why." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The main concerns about the experiments are raised in the weaknesses section. If these can be addressed, I would be happy to change my opinion." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper raises an important question of fairly assessing the algorithmic improvements of recent certified training methods compared to older IBP-based training. Since the evaluation depends on many factors and components, the paper proposes to fix some of them to the best-known ones and to properly tune the rest. \n- The writing is clear (except for the presentation of Table 1), and the code for benchmarking and the weights of pre-trained models are provided. \n- The analysis of training methods leads to interesting conclusions. Particularly, the relationship between propagation tightness and certified accuracy at larger epsilon, i.e. the absence of correlation, is surprising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a library for benchmarking certified training methods under unified settings. It uses the best practices for certified training from (Shi et al., 2021), such as the CNN7 architecture with batch normalization, IBP initialization, a warm-up schedule and warm-up regularizers. To improve generalization, it uses L1 regularization and stochastic weight averaging (Izmailov et al., 2018). From the implementation perspective, the authors propose to use full batch statistics to address problems with batch normalization when gradient accumulation or a PGD attack is performed. The paper claims that the improvements of recent methods in certified training drop significantly compared to the older IBP training method under the same settings with proper hyperparameter tuning. Further, the authors analyze different aspects of the training methods: regularization strength, model utilization, loss fragmentation, OOD generalization and shared mistakes."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe the **experiments are insufficient** to support the main claims of the paper. Particularly:\n\n1. **Accuracy-robustness tradeoffs are not considered**. Improvements in robustness can be due to decreased natural accuracy, and vice versa [a, b, c]. For example, in Table 1 for CIFAR-10 at 2/255, the implementations of the following methods choose a different point on the accuracy-robustness tradeoff curve compared to the one in the literature, getting higher robustness at the cost of reduced accuracy: CROWN-IBP, SABR, STAPS, MTL-IBP, making claims about technical improvements unsupported. In this regard, baselines such as ACERT [a] and ACE [b] are missing. Accuracy-robustness tradeoff curves and metrics such as ART-score [a] can be used to capture the improvements in the tradeoff.\n2. **Error bars are missing**. The presented improvements over the results in the literature could be statistically insignificant. For example, the experimental results for CIFAR-10 at 8/255 in the paper by Shi et al. (2021) show a standard deviation of $\pm0.3$ for certified accuracy and of $\pm0.4-0.7$ for natural accuracy, which makes the improvements in both accuracy and robustness in Table 1 for SABR and TAPS within the error of one standard deviation. \n3. **Training costs are not considered**. Different methods require different amounts of computational cost for training, which could be an important factor to consider in benchmarking.\n4. **Certification costs are not considered**. Since some certified training methods allow computing tight certified bounds using efficient "online" certification methods, such as IBP (Gowal et al., 2018, Mao et al., 2024), the IBP-based certified accuracy or IBP-based certified radius [a] could also be compared.
The cost of test-time verification might be an important factor in choosing a training method.\n\nSince this is a paper proposing a benchmark, it **lacks original** contributions. In terms of the evaluation setting, most of the components were already used consistently in previous works.\n\nSmaller comments:\n- The main results in Table 1 are hard to parse and analyze due to the large number of values to compare. Accuracy-robustness plots could help with improving clarity.\n- Due to shared mistakes, the paper claims that \"_... there could be an intrinsic difficulty score for each input_\". The certified radius of robustness of each point, described in [a, d], could serve as such a score. The average certified radius and/or the histogram of radii [d] can be compared in the benchmark. The adaptive training methods can be discussed in this regard.\n\n[a] Nurlanov, Z., Schmidt, F. R., Bernard, F. (2024). Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs. In: Bifet, A., et al. Machine Learning and Knowledge Discovery in Databases. Research Track and Demo Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14948. Springer, Cham. https://doi.org/10.1007/978-3-031-70371-3_8\n\n[b] Müller, M. N., Balunović, M., & Vechev, M. (2021). Certify or predict: Boosting certified robustness with compositional architectures. In International Conference on Learning Representations (ICLR 2021).\n\n[c] Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., & Madry, A. (2019). Robustness May Be at Odds with Accuracy. In International Conference on Learning Representations (ICLR 2019).\n\n[d] Bosman, A. W., Hoos, H. H., & van Rijn, J. N. (2023). A preliminary study of critical robustness distributions in neural network verification. In Proceedings of the 6th workshop on formal methods for ML-enabled autonomous systems.
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above; I would love to hear the authors' comments on each of the weaknesses above, along with a response to the general comment." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is well presented, well written, with clear goals and objectives.\n\n2- While this may not be obvious to a non-expert, the amount of experiments and computation required in this paper is beyond impressive.\n\n3- The insights of the paper are particularly helpful. I personally did not expect that current SOTA methods are underperforming. However, it was not that surprising that the improvements over IBP for larger epsilons are not that big.\n\n4- Paper sheds light on a relatively good problem." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CTBENCH, a unified library and benchmark for evaluating certified training methods for neural networks. It addresses the challenges in comparing existing certified training algorithms by standardizing training schedules, certification methods, and hyperparameter tuning.
The authors demonstrate that most algorithms in CTBENCH surpass previously reported results, revealing that much of the perceived advantage of newer methods diminishes when outdated baselines are properly tuned. The benchmark provides insights into certified training methods, encouraging future research by offering a consistent framework and re-establishing state-of-the-art performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper focuses solely on deterministic certified training, overlooking advancements in randomized certified robustness. I believe the paper should have cited works like Cohen et al., Matthias et al. (l1 certification with differential privacy -- early works from 2019), Greg Yang (\"All Shapes and Sizes\" paper), among many others.\n\n2. The paper only considers the L_infinity ball, neglecting other perturbation sets. While this is generally okay, some insights with a few experiments in other perturbation sets might be helpful. It is not clear whether the proposed tricks in the library as part of the unified certified training would work for other perturbation sets (e.g., L2). If they do not, it raises the question of whether we would need a separate library for each perturbation set. The next steps are unclear if that is the case.\n\n3. Some conclusions on the impact of tuning and modifications, while valid, lack formal decomposition, making it difficult to quantify individual contributions. There is no clarity on the contribution of each individual component (batch norm, etc.) towards the final performance. A small systematic study would be very helpful.\n\n4. The evaluation is based on a single model architecture (CNN7); the paper should demonstrate that the library and recommendations hold across different architectures.\n\n\nGeneral comment: Interest in certified models has significantly declined over the past two years.
At ECCV, for example, there were notably fewer submissions and accepted papers on adversarial attacks, even though this topic was previously very popular in vision conferences. One reason for this decline could be the uncertainty around where such certifications can be practically deployed, especially given the massive scale of current models, which are thousands of times larger than the CNNs discussed here. Furthermore, as models shift towards generative architectures, it's unclear who will find this domain relevant. While the paper makes valuable contributions, this direction feels somewhat outdated by about two years, and the question of its benefit is very unclear and vague, at least to me. I would love to hear the authors' take on this.\n\nMinor Comments:\n1. Cite \"is NP-complete\" line 321.\n2. Isn't the typical robust accuracy (adv. acc.) for PGD around 48% at 8/255 on CIFAR-10? Or is it because you use CNN7?\n3. Adversarial accuracy is not well defined in line 135. You need to say that it is empirical and serves as an upper bound to the robust accuracy.\n4. Certified accuracy defined in line 133 is not correct. It should be the portion of *correctly* classified samples that are certifiably robust." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a library unifying certified training algorithms, achieve SOTA universally by correcting existing implementation mistakes, gain new insights and point out future work directions." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024ctbench,\ntitle={{CTB}ench: A Library and Benchmark for Certified Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2bn7gayfz9},\nnote={under review}\n}" }, "abstract": { "value": "Training certifiably robust neural networks is an important but challenging task.
While many algorithms for (deterministic) certified training have been proposed, they are often evaluated on different training schedules, certification methods, and systematically under-tuned hyperparameters, making it difficult to compare their performance. To address this challenge, we introduce CTBench, a unified library and a high-quality benchmark for certified training that evaluates all algorithms under fair settings and systematically tuned hyperparameters. We show that (1) almost all algorithms in CTBench surpass the corresponding reported performance in literature in the magnitude of algorithmic improvements, thus establishing new state-of-the-art, and (2) the claimed advantage of recent algorithms drops significantly when we enhance the outdated baselines with a fair training schedule, a fair certification method and well-tuned hyperparameters. Based on CTBench, we provide insights into the current state of certified training and suggest future research directions. We are confident that CTBench will serve as a benchmark and testbed for future research in certified training." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "certified training", "benchmark", "open-source library" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/be2a75412cddcf491867fc5030f9007794d48c85.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0349d3b3138337cab1bc1e8ab9fb8c1d140f05c2.zip" }, "title": { "value": "CTBench: A Library and Benchmark for Certified Training" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
2c7pfOqu9k
DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference
main
Active
LLM inference;attention;memory-efficiency;tree-based decoding
infrastructure, software libraries, hardware, systems, etc.
6;8;8;8
3;4;4;4
3;4;3;3
3;3;3;3
2;3;3;2
7.5
3.75
3.25
3
2.5
1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Thank you for submitting the paper to ICLR 2025! I think this paper tries to tackle the important problem of improving GPU utilization for LLM serving under the scenario of tree-structured generation. The paper provides a good background of example tree-structured applications, how existing attention algorithms work and how attention could be calculated in a segmented way. The evaluation of the new proposed algorithm demonstrates solid speedup over existing baselines. I overall feel positive about the paper with a few comments and suggestions for improvements.\n\nThe current illustration of the main algorithm in Section 3 is hard to follow.\n\nThere are remarks and comparisons here and there.\n\nFigure 3 includes too many notations and details that make the reader hard to recognize which are the baselines and which are the new techniques proposed in the paper. Even after reading all the text, I could not clearly figure out how flattened tree KV splitting works in detail. There are tons of places where the descriptions refer to the Appendix.\nHowever, I think the reader should be able to grasp the intuition and how the algorithm works at a high level by just reading the main text of the paper.\n\nMy current understanding is that the core of the DeFT algorithm is to help create balanced and sharable QKV groups during the QKV Preparation Phase. 
It is probably better to clearly explain how KV-guided grouping and flattened tree KV splitting work in two separate subsections, as they are the two main techniques proposed in the paper.\n\nIn terms of questions, how do you define the node sizes in the tree KV? \n\nIf DeFT-Node-Chunk adds additional overhead due to imperfect splits after splitting by the nodes, could we first optimize the tree KV structure to ensure we have nodes of balanced sizes?\n\nIn the Attention Calculation Phase, how many techniques introduced in the paper are novel compared to previous works?\n\nIn addition, how does the proposed technique compare to the [cascade inference algorithm](https://flashinfer.ai/2024/02/02/cascade-inference.html)? The cascade inference algorithm also makes the observation that the KV caches could be shared when there are common prefixes between requests. It first uses a multi-query attention kernel to compute the attention between queries and the KV cache of the shared prefix, which goes through L1 cache and registers. Then it uses a batch decode attention kernel for the remaining suffixes, which accesses the global memory and L2 cache.\n\nIn terms of experiments, it seems all evaluations are currently completed on a single A100 GPU.\nHow would the performance be if the algorithm were applied in a multi-node distributed LLM inference setting?\nWould any of the parallelization techniques affect the effectiveness of the splitting algorithm?\nHow would the algorithm perform in a long-context LLM serving scenario?\n\nOther questions:\n\n1. For Table 5, why is there an even larger speedup for the case of upper-bound (no attention)? Isn't the proposed algorithm only optimizing the attention operation?\n\n2. How would different types of attention operation (e.g. multi-head, multi-query, or group-query attention) affect the performance of DeFT?\n\n3. For Figure 4, what would the latency breakdown be for DeFT-Node-Chunk?
Would unpaged versions of DeFT-Node-Chunk and DeFT-Flatten incur similar overhead for KV management?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Tries to solve the important problem that current LLM serving systems are inefficient in computation and IO for tree-based decoding applications.\n2. Provides good background on segmented attention and existing attention algorithms.\n3. Evaluation results show decent speedup over baselines." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Tree-structured decoding is gaining more popularity in LLM serving due to the presence of applications such as multi-step reasoning and speculative decoding. Existing inference systems are inefficient due to their failure to be prefix-aware: they either perform redundant recomputation of KV caches for shared prompts, or repeatedly load and store KV caches of shared prompts during attention calculation. This paper presents DeFT, an efficient attention calculation algorithm with prefix-awareness and load-balanced KV cache partitions. DeFT uses KV-guided grouping to group the prefix's KV cache with all shared queries. It then uses flattened tree KV splitting which splits the KV cache into balanced partitions to reduce overhead in computation. Evaluations show that DeFT has better wall-clock time speedup in multiple tree-structured decoding applications compared to state-of-the-art baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper is hard to follow. The design figures include too many details. Lack of clear explanation of the key techniques including KV-guided grouping and tree KV splitting.\n2. Lack of evaluation or discussion on multi-node settings and other baselines." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Authors introduce KV-Guided Grouping, which reuses memory for shared prefixes in the KV cache, minimizing redundant I/O operations.\n2. Authors' approach to balanced workload distribution via Flattened Tree KV Splitting leads to better GPU usage.\n3. Triton implementation provides strong empirical evidence of the efficacy of the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the DEFT (Decoding with Flash Tree-Attention) algorithm, aimed at enhancing efficiency in tree-structured language model (LLM) inference. Traditional approaches often fall short due to redundant memory access, inefficient handling of KV cache for shared prefixes, and poor GPU utilization. DEFT addresses these issues through two primary innovations: KV-Guided Grouping and Flattened Tree KV Splitting. Authors claim that these strategies optimize memory accesses and ensure balanced GPU utilization, leading to significant speed-ups in tree-based tasks like few-shot prompting, multi-step reasoning, and speculative decoding." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While single GPU performance is quite good, it is not clear how DeFT can scale to larger models requiring multiple GPUs.\n2. Though there is a one-liner on vLLM comparison, there is no numerical comparison with vLLM given that vLLM also implements prefix-based KV-cache sharing.\n3. The overhead of QKV PREPARATION PHASE is unclear from the empirical results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Reasoning has become a popular approach to enhance the performance of large language models (LLMs) on complex tasks. Are there any future plans to integrate this method within task pipelines to achieve end-to-end improvements?\n2. As noted in the weaknesses, tensor parallelism is widely used to scale large LLMs across multiple GPUs. Will this work be released as an open-source repository to help develop an infrastructure, similar to vLLM or DeepSpeed, that provides a usable framework for the public?\n3. The test on speculative decoding sets T from 32 to 256, which is much larger than usual settings (<10), have you test speculative decoding with smaller T value?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Efficient Memory Usage and Balanced Workload Distribution**: DEFT's KV-Guided Grouping minimizes redundant memory access by loading shared prefix data only once, reducing IO costs associated with repeatedly reloading the KV cache. Combined with the Flattened Tree KV Splitting strategy, which evenly distributes data across GPU units, DEFT maximizes GPU utilization by ensuring balanced workload distribution, thus avoiding bottlenecks and maintaining consistent processing speeds.\n2. **Enhanced End-to-End Processing Speed**: Compared to state-of-the-art methods, DEFT achieves up to a 2.5x speedup in end-to-end latency, making it highly effective for tasks that require complex, tree-based structures like few-shot prompting and multi-step reasoning.\n3. **Scalability Across Tasks**: DEFT demonstrates versatility by performing well across different tree-structured applications, such as speculative decoding, where shared prefix usage and efficient load balancing are particularly challenging." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents DEFT (Decoding with Flash Tree-Attention), an optimized algorithm for efficient inference in tree-structured large language model (LLM) tasks, such as few-shot prompting, multi-step reasoning, and speculative decoding. Existing methods face inefficiencies from redundant memory access of shared prefixes and unbalanced GPU workload distribution, which leads to low utilization and slower processing. DEFT addresses these issues with two key techniques: KV-Guided Grouping, which minimizes memory access by loading shared prefix data only once, and Flattened Tree KV Splitting, which enhances GPU utilization by evenly distributing workload across GPU units. 
Implemented on Triton, DEFT achieves up to 2.5x faster end-to-end speeds by significantly reducing memory operations, making it highly effective for complex, tree-based LLM applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Comparison with Shared Prefix Infrastructure**: While DEFT introduces novel techniques for memory efficiency and load balancing, it lacks a direct comparison with existing infrastructure solutions like vLLM and DeepSpeed-MII, which already support shared prefix KV cache across different batches. Such a comparison would clarify DEFT’s advantages and limitations relative to widely adopted methods that also aim to reduce redundancy in KV cache management.\n2. **Challenges with Distributed Memory and Tensor Parallelism**: DEFT’s current design primarily targets single-device GPU optimization and may not be directly compatible with distributed memory or tensor parallelism setups, which are commonly used to scale large language models across multiple GPUs. Adapting DEFT to work efficiently in distributed environments could require additional modifications to handle inter-device communication and memory sharing effectively, potentially limiting its scalability for very large models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper makes a significant contribution to optimizing LLM inference for tree-based decoding tasks, introducing novel methods that are both theoretically sound and empirically validated. The authors have addressed previous concerns through additional material, improving the clarity and robustness of the work. 
Therefore, I recommend acceptance." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Integration of Supplementary Material: Could the authors consider integrating key explanations and findings from the supplementary material into the main paper to improve readability and clarity for readers who may not delve into the appendix?\nEnergy Efficiency Metrics: While DEFT reduces IO operations, have the authors considered measuring the impact on energy consumption or providing an analysis of energy efficiency improvements?\nMinimal Shared Prefixes Scenarios: How does DEFT perform in scenarios where the shared prefix is minimal or the tree width is very small? Are there any overheads introduced in such cases compared to existing methods?\nRealistic Scalability: Do the authors foresee any limitations or challenges in extending DEFT to more common larger model sizes (e.g., 70B parameters, or 400B) or to different model architectures beyond those tested? These larger models generally excel at complex multi-step reasoning tasks compared to their <32B counterparts, which may reveal different patterns in inference and could affect the effectiveness or accuracy retention of your approach.
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Relevance: The paper tackles a timely and significant problem in optimizing LLM inference for tree-based decoding applications, which is highly relevant to current AI research and deployment.\nOriginality: Introduces a novel attention algorithm, DEFT, leveraging KV-Guided Grouping and Flattened Tree KV Splitting to address memory access inefficiencies and load balancing.\nTheoretical Justification: Provides solid theoretical analysis to justify the proposed methods, including IO complexity analysis and discussions on the correctness of the algorithm.\nEmpirical Validation: Demonstrates significant improvements in end-to-end latency and attention computation across multiple tasks (few-shot prompting, multi-step reasoning, speculative decoding) compared to state-of-the-art baselines. 
The supplementary material includes extensive experimental results and ablation studies, strengthening the empirical validation.\nComparison with Concurrent Works: The supplementary material provides detailed comparisons with concurrent works, clarifying the advantages of DEFT in handling multi-level tree decoding and addressing unbalanced workloads.\nScalability: The authors provide results demonstrating DEFT's scalability to larger models (up to 34B parameters) and different hardware setups.\nAccuracy Preservation: The paper includes analysis showing that DEFT maintains model accuracy, with negligible differences in attention scores and perplexity compared to baseline methods" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents DEFT (Decoding with Flash Tree-Attention), a hardware-efficient algorithm that optimizes large language model (LLM) inference for tree-based decoding tasks like few-shot prompting and multi-step reasoning. Current systems struggle with redundant Key-Value (KV) cache loading and poor load balancing, causing inefficient memory use and low GPU utilization. DEFT solves this with KV-Guided Grouping, which reuses shared prefixes to reduce KV cache access, and Flattened Tree KV Splitting, which improves GPU efficiency. Implemented with OpenAI Triton, DEFT achieves significant speedups in attention latency compared to existing methods" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Presentation Clarity: While the supplementary material improves clarity, some sections of the main paper remain dense, and the inclusion of key explanations from the supplementary material into the main text could further enhance understanding. 
Significant critical information is gained through the supplementary material, specifically regarding reproducibility and algorithm details, which would benefit from inclusion in the main text.\nLimited Discussion on Energy Efficiency: The paper still focuses primarily on speedup metrics, and while memory access reduction implies energy efficiency, an explicit discussion or measurement of energy consumption would strengthen the work.\nApplicability in Varying Scenarios: Although the authors include experiments with varying tree widths and prompt lengths, further exploration of scenarios with minimal shared prefixes or very small tree widths would provide a more comprehensive understanding of DEFT's applicability." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose DeFT, a hardware-efficient tree attention algorithm to improve tree-based decoding (e.g. multi-step reasoning, speculative decoding, etc.) efficiency with IO-awareness for shared prefixes and load-balancing." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024deft,\ntitle={De{FT}: Decoding with Flash Tree-attention for Efficient Tree-structured {LLM} Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=2c7pfOqu9k},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) are increasingly employed for complex tasks that process multiple generation calls in a tree structure with shared prefixes of tokens, including few-shot prompting, multi-step reasoning, speculative decoding, etc.
However, existing inference systems for tree-based applications are inefficient due to improper partitioning of queries and KV cache during attention calculation. This leads to two main issues: (1) a lack of memory access (IO) reuse for KV cache of shared prefixes, and (2) poor load balancing. As a result, there is redundant KV cache IO between GPU global memory and shared memory, along with low GPU utilization. To address these challenges, we propose DeFT (Decoding with Flash Tree-Attention), a hardware-efficient attention algorithm with prefix-aware and load-balanced KV cache partitions. DeFT reduces the number of read/write operations of KV cache during attention calculation through **KV-Guided Grouping**, a method that avoids repeatedly loading KV cache of shared prefixes in attention computation. Additionally, we propose **Flattened Tree KV Splitting**, a mechanism that ensures even distribution of the KV cache across partitions with little computation redundancy, enhancing GPU utilization during attention computations. By reducing 73-99$\\%$ KV cache IO and nearly 100$\\%$ IO for partial results during attention calculation, DeFT achieves up to 2.52/3.82$\\times$ speedup in the end-to-end/attention latency across three practical tree-based workloads compared to state-of-the-art attention algorithms.
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM inference", "attention", "memory-efficiency", "tree-based decoding" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/de2bf89880d839a8f0c0bb0cd10b2fc891f1684d.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]