Dataset columns: id (string, length 10) | title (string, 3–179 chars) | track (string, 1 class) | status (string, 3 classes) | keywords (string, 2–2.39k chars) | primary_area (string, 21 classes) | author (string, 501 classes) | authorids (string, 501 classes) | aff (string, 1 class) | aff_domain (string, 1 class) | position (string, 1 class) | rating (string, 355 classes) | confidence (string, 0–19 chars) | soundness (string, 642 classes) | contribution (string, 596 classes) | presentation (string, 782 classes) | rating_avg (float64, 0–9) | confidence_avg (float64, 0–5) | soundness_avg (float64, 0–4) | contribution_avg (float64, 0–4) | presentation_avg (float64, 0–4) | corr_rating_confidence (float64, −1 to 1) | project (string, 1 class) | github (string, 1 class) | Review (list, length 2–10)
ydw2l8zgUB | EEGTrans: Transformer-Driven Generative Models for EEG Synthesis | main | Active | LLM;EEG;BCI;transformer | applications to neuroscience & cognitive science | rating: 3;3;3;5 | confidence: 4;5;3;5 | soundness: 2;1;2;3 | contribution: 2;2;2;2 | presentation: 3;2;2;3 | rating_avg: 3.5 | confidence_avg: 4.25 | soundness_avg: 2 | contribution_avg: 2 | presentation_avg: 2.5 | corr_rating_confidence: 0.522233 | Review: [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "As written, I still do not understand how exactly the data is generated for the classification tasks. Can you try to explain it very concisely?\n\nAlso, what train/test splits did you use in the different training stages (generative model training, classifier training, etc.)?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* applies commonly used, yet fairly recent, novel deep learning methods from other fields to EEG, where they have not been used as much\n* supplies code\n* overall approach probably makes sense (even though I have not fully understood it)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This manuscript describes a vector-quantization-based autoregressive transformer as a generative model for EEG. The authors use their generative model to generate synthetic data for EEG classification tasks. Results show that using the real data and the synthetic data with an auxiliary loss slightly outperforms using only the real data on motor imagery decoding tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I found it hard to understand how the synthetic data is actually generated, and I may still not have fully understood it. This should be described more clearly early on; I am still confused about it.\n\nSome of the writing I found vague and therefore hard to read, e.g. the first sentences: \"Large language models (LLMs) have been extensively utilized across various scenarios due to their powerful model characteristic: the generative models. These models are not restricted to producing specific forms of output; instead, they can generate output in any form\" I found these sentences mostly confusing.\n\nIt is unclear to me how the authors split the data into training and test; I think they did not follow the official train/test splits of the datasets? I did not see this information, maybe I missed it.\n\nAlso, there are of course a lot of existing works on those data that should be compared to, and for this it is also necessary to check and align the train/test split with those works and to also mention their performances.\n\nAverage power spectra (e.g., for all trials of one subject) should be shown for real and generated data to assess the quality of the generated EEG."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "This paper will need a lot of work to perform all the experiments and evaluate the synthetic EEG data.\n\nCan the authors perform simple group-level beamformer source analysis for both real and synthetic data?\n\nAdditionally, can we see time-frequency plots for beta power changes in the case of motor imagery for both generated and real data?\n\nCan we get some loss plots for the training and fine-tuning?\n\nLine [318-320] \"We focus on three channels commonly used in motor imagery experiments, and two epochs are shown to allow us to confirm the robust performance of EEGTrans across different channels and multiple epochs.\" - which 3 channels: C3, C4, Cz?\n\nLine [416] \"As shown in Table 2, only EEGTrans closely matches the ground truth (real data) with minor differences, retaining high sample entropy and thus indicating high complexity and variation. However, it does not retain high-frequency components, which is evident in the spectral entropy.\" - Can we have some statistical comparison?\n\nLine [463] \"Table 3 shows the classification accuracy achieved through various approaches: using only real data, only synthetic data, combining real and synthetic data, and incorporating real data with synthetic data along with auxiliary loss. For a comprehensive analysis, we employ five-fold cross-validation instead of the original train-test split used in the competition.\" - How did you do 5-fold cross-validation? What was your distribution of the test and validation sets, or was it a simple 80-20 split? Since it's already a competition dataset, why did you choose to do cross-validation and not cross-session analysis?\n\nCan you also do inter-subject analysis?\n\nIs it justified to benchmark against CycleGAN for non-stationary time series data?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The paper used a Transformer architecture to generate synthetic EEG. The idea is not novel, as there have been previous attempts at this; however, the problem is not well studied, so utilising transformers seems a reasonable attempt at testing the capability.\nThey validated on multiple datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed EEGTrans, an encoder-decoder architecture for generating synthetic EEG datasets. This is useful, as collecting EEG data is a human-subject-dependent and time-consuming process. The authors evaluated on multiple motor-imagery datasets after downsampling them to 128 Hz."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There have been other methods where authors used a GPT-based architecture to claim a foundation model: \"Neuro-GPT: Towards A Foundation Model for EEG\". There is another paper, \"A Time Series is Worth 64 Words: Long-term Forecasting with Transformers\", for time series data - the authors should have evaluated that.\n\nThere should be more details on architecture and optimisation. The authors explained this for EEGTrans, but for the CycleGAN approach I couldn't find information on layers/number of parameters/etc. There is some information in the appendix but it is still not complete.\nI would encourage the authors to include a clear architecture diagram for both in the main text, so it is easier to compare and understand the implementation.\n\nAdditionally, there is an extreme weakness in the paper related to evaluating the synthetic data. The authors didn't perform standard evaluations like PSD, STFT, activation in alpha and beta bands, or source analysis. Averaging all the subjects doesn't distinguish activations.\n\nThere are 5 datasets for evaluation, and only 1 dataset has a single subject and 128 Hz sampling. Why use that dataset and bring the other 4 datasets, with higher resolution and more subjects, down to 128 Hz? Maybe let go of that dataset and work with 250 Hz, as we have tried over the years to achieve higher sampling and temporal resolution for these datasets.\n\nIn the paper, I have seen repetitive occurrences stating the same thing; the authors can reduce that to add more information in the main text, e.g.:\n\"Visual inspection indicates that EEGTrans produces higher-quality synthetic data compared to CycleGAN. It is evident from the visual comparison that EEGTrans's generated data is significantly superior\".\n\nAnother: Line [493] \"The BCI Competition IV Dataset 2a was collected some time ago\" - \"some time ago\", really? We can say 2a was collected in 2008 and HGD in 2017. However, the statement is speculative in nature, so the authors can point towards noise/interference due to the room/environment.\n\nLine [534]: \"This method produces high-quality synthetic data that enhances downstream classification tasks.\" - If you only intended classification (only one downstream task), why not just train a classification model utilizing transformers?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Does the next-token prediction converge well? I'm curious about how well the next-token prediction would work. Do you have a test set to verify the next-token prediction process? If there is a test set, how well can the model correctly predict the next token? Maybe reporting metrics like perplexity would help. \n2. Following Question 1, it's possible that the model is simply yielding EEG data from the training dataset (with some kind of bias on frequency, as described in 4.5). If so, the model is downgraded to merely replicating an existing dataset and applying it to other datasets.\n3. While the model is called EEGTrans, taking its name from the transformer, I feel the RVQ plays the most important role in synthesizing realistic EEG data. So what if we use a discretization method such as RVQ, but do not use the auto-regressive manner to train a transformer model? For example, one could actually combine VQ with CycleGAN to also generate good-quality EEG data.\n4. I'm not sure whether it is my problem, but when I try to visit the link to the code in the abstract, it tells me that \"The repository is not found.\""
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The general idea is taken from the language model, thus the overall methodology is reasonably backed up with empirical results, and is still novel in the EEG synthesis task. \n2. The experiments demonstrate superior performance compared to a classic method, CycleGAN. \n3. The overall presentation is good. The authors clearly convey their ideas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a transformer-based EEG generative model, namely EEGTrans, to synthesize EEG data based on multiple datasets. The model uses an RVQ autoencoder to discretize the EEG signal and build a codebook to represent EEG tokens. EEGTrans can then treat the generative task as next-token prediction in an auto-regressive manner, just as a language model does, and the RVQ decoder can generate EEG signals similar to the real data. Experiments confirm the similarity by comparing spectral entropy and sample entropy. Further experiments confirm that with these synthetic data as augmentation, a model can achieve better results on the target BCI dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "First of all, I have to mention that the key research problem of EEG synthesis is not \"what is the best method to generate synthetic EEG data\", but \"why do we need to generate synthetic EEG data\". The paper answers this question in the most common way, which is to use these data as augmented data to improve the performance on a specific downstream task (in this paper, the BCI motor imagery task). From this point of view, the scope of the contribution of this paper is limited to getting improved synthetic EEG data for data augmentation rather than extending our understanding of this area. \n\nThe above statement is actually not a weakness of this paper. I like papers that don't overclaim but focus more on concrete things. However, starting from it, there are the following concerns about the paper. \n1. It's unclear to me whether the label needs to be generated together with the EEG data. If so, the usage of the synthetic dataset seems to be limited to highly related tasks, and the claimed cross-dataset advantage is weakened.\n2. If the synthetic data is used to boost the performance of downstream tasks, then how does it perform compared to more common but simple data augmentation methods like cropping and concatenating EEG fragments, jittering, etc.? Such justification is lacking. \n3. If the synthetic data is claimed to integrate information from different datasets, then how should we compare this paradigm to a more common paradigm where a model such as LaBraM integrates information in the hidden representation space? This alternative paradigm can boost the performance of downstream tasks with unsupervised training on unlabeled EEG data, which can integrate information from more diverse datasets. \n\nThere are also some comments on the methodology: \n1. EEGTrans doesn't take the relationship among channels into consideration, and doesn't evaluate the cross-channel reconstruction quality.\n2. The role that next-token prediction plays in the proposed method is unclear and unverified. See the questions for more details."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. EEG signals have high variation between subjects and between datasets; how is this addressed in this work? Is the generation subject-dependent?\n\n2. The generation quality of the proposed method will also largely depend on the RVQ AE, which adds additional complexity to train and tune; any comment on this?\n\n3. What are the real-world applications of the proposed method? It seems that in order to generate synthetic data for a new target dataset, you still need to use real data as input. It is very rare that we only collect 'the first half of a trial' in BCI experiments.\n\n4. For auto-regressive generation, how does the 'prediction' performance decline with respect to length?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. the structure of the paper is good and clear\n2. the method section is clearly described and Figure 2 is informative\n3. the source code is provided to facilitate reproduction"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a transformer-based generative model, EEGTrans, to autoregressively generate EEG data based on the input sequence. The proposed model was evaluated against CycleGAN and achieved more accurate 'prediction', or generation, of future segments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The model was trained for next-code prediction, which means the proposed method is more of an EEG signal 'forecasting' or 'prediction' method rather than generation for data augmentation purposes. Therefore, the introduction and problem setup are misaligned.\n2. For synthetic data generation to facilitate training on a new dataset, both the realism and the diversity of generated signals are important. In a typical GAN or diffusion setting, new signals can be generated by sampling a random vector from the normal distribution during inference to achieve diversity and variety. However, in the proposed method, there seems to be no sampling process for diverse signal generation, so how can the proposed method augment the target dataset?\n3. There are GAN-based methods for EEG that can directly generate the raw signal and should be added to the comparison, for example EEG-GAN.\n4. The language used in Section 4.7 is very vague, for example, 'Dataset 2a was collected some time ago', 'the synthetic data likely does not retain subject-specific information', 'the classifier should yield very similar output'. These claims need to be justified better, for example by providing a t-SNE plot of the generated data vs the real data, and of the target classes vs the subjects, to demonstrate there is no subject-specific information. \n5. Why not use MSE as a performance metric, since the method is 'predicting' the future segment?\n6. The use of a GAN to replace the Transformer module as the baseline for the next-code prediction task should be justified better."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper introduces EEGTrans, a transformer-based generative model for sequentially generating synthetic EEG signals"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024eegtrans,\ntitle={{EEGT}rans: Transformer-Driven Generative Models for {EEG} Synthesis},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ydw2l8zgUB},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in Large Language Models (LLMs) have been significant, largely due to improvements in network architecture, particularly the transformer model. With access to large training datasets, LLMs can train in an unsupervised manner and still achieve impressive results in generating coherent output. This study introduces a transformer-based generative model, EEGTrans, designed for sequentially generating synthetic electroencephalogram (EEG) signals. Given the inherent noise in EEG data, we employ a quantized autoencoder that compresses these signals into discrete codes, effectively capturing their temporal features and enabling generalization across diverse datasets. The encoder of EEGTrans processes EEG signals as input, while its decoder autoregressively generates discrete codes. We evaluate our method in a motor imagery Brain-Computer Interface (BCI) application, where merging data across datasets is particularly challenging due to experimental differences. Our results demonstrate that the synthetic EEG data effectively captures temporal patterns while maintaining the complexity and power spectrum of the original signals. Moreover, classification results show that incorporating synthetic data improves performance and even surpasses that of models based on Generative Adversarial Networks. These findings highlight the potential of transformer-based generative models to generalize effectively across multiple datasets and produce high-quality synthetic EEG signals."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"EEG",
"BCI",
"transformer"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/eedb4842487c689adce0b844e0cc888d50b6032a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "EEGTrans: Transformer-Driven Generative Models for EEG Synthesis"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
ye1mxb79lw | BILBO: BILevel Bayesian Optimization | main | Active | bilevel;Bayesian optimization | probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.) | rating: 3;5;5;6;6 | confidence: 3;3;4;3;3 | soundness: 2;2;3;3;3 | contribution: 2;2;2;3;3 | presentation: 3;3;4;3;3 | rating_avg: 5 | confidence_avg: 3.2 | soundness_avg: 2.6 | contribution_avg: 2.4 | presentation_avg: 3.2 | corr_rating_confidence: 0 | Review: [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The writing is in general good and clear. I liked the motivation from bilevel optimization. \n\n- The paper performs extensive experiments on various datasets, including some real-world examples. This is clearly a strength, though it could have been much more convincing if the paper were positioned to claim contributions on the empirical side, rather than the theoretical side."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates Bayesian Bilevel Optimization through a sampling-based approach. For general bilevel optimization problems, it provides a bound on the regret of the proposed algorithm and demonstrates its effectiveness across various synthetic and real-world problems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's introduction is somewhat misleading. Unlike typical optimization studies focused on the computational complexity of local-search algorithms, this paper addresses statistical complexity, assuming exhaustive search over a function hypothesis space. Consequently, it diverges from the usual challenges associated with noisy, constrained, and derivative-free settings in continuous stochastic optimization, making direct comparisons with gradient-based methods inappropriate.\n\nBelow are a few detailed comments:\n\n- Line 59-60: What is \"information flow\"? This is not a well-defined term. \n\n- Line 68-69: Why are functions modelled using a Gaussian process? This assumption appears abrupt. \n\n- Gaussian process: the notation is messy and the discussion is hard to follow.\n\n- At the beginning of Section 3, it is not clear what exactly you get from querying points.\n\n- Line 155 \"trusted sets\": on my first read, this was very confusing. I think the paper should have been much clearer about the interaction protocol, the fact that exhaustive search over the hypothesis class is okay, and that it is not going to be about convergence of local-search methods. \n\n- Line 301-302: What is \"information gain\"? Define it precisely.\n\nWhile the extensive experiments in Section 4 are impressive, it remains unclear how to evaluate this section if the main contribution is claimed on the methodological side rather than real-world applicability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Optimization over the two constructed sets can be very hard; see Step 7 of Algorithm 1. How do you implement this step efficiently while maintaining the two sets in Step 17? I guess only approximate solutions can be given."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tBilevel Bayesian optimization is an important problem with many real-world applications in machine learning, economics, energy management, and investment.\n2.\tThe proposed algorithm also works in the decoupled setting, where the two level functions may come from different simulators or experiments.\n3.\tBoth theoretical and empirical studies are provided. Real-world experiments are reported that positively show the effectiveness of the proposed algorithm.\n4.\tOverall, the writing is good and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This submission proposes a novel algorithm for bilevel Bayesian optimization that optimizes both upper and lower level problems jointly in a sample-efficient manner. The key idea of this algorithm is using confidence bounds to construct trusted sets of feasible and lower level optimal solutions. Theoretical studies show that trusted sets guarantee points with instantaneous regret bounds and sublinear regret bound is proven. Empirical studies are also provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tI have concerns about the novelty of this submission, since all techniques in this paper are pretty standard. First, the functions at both levels are modelled by Gaussian processes, and trusted regions are then defined using confidence bounds. The theoretical results are then derived from Srinivas et al., 2010. I cannot find challenges that are unique to bilevel Bayesian optimization, or I might have missed something. Please clarify this point.\n2.\tIn Theorem 3.9, the theoretical results are given in terms of $|\\mathcal{F}|,|\\mathcal{X}|,|\\mathcal{Z}|$, so all of them must be finite, which puts more restrictions on the problem setting."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Can you explain more about the motivations for studying the decoupled setting? And the reasons to consider $\\mathcal{F}$ and the function query $h_t$ in the algorithm design and regret definition? For example, can we only focus on the upper- and lower-level functions in the regret definition? \n- For the regret bound in Theorem 3.9, the authors discussed the relationship to constrained Bayesian optimization in Nguyen et al. (2023); is there any existing bilevel BO regret result that can be compared with? \n- In the experiments, BILBO is compared with TrustedRandom and Nested. As described in Appendix C.1, the Nested baseline used a nested approach with SLSQP. But the related works mentioned, e.g. Kieffer et al. (2017) and Dogan & Prestwich (2023), are not included. Can you compare with the associated works directly? If not, can you explain the reasons BILBO cannot be compared with the related work? \n- In Figure 1a, can you explain why BILBO has a sudden drop at around 150 queries? And what is the regret of BILBO after 150 queries (the line cannot be observed afterwards)? The uncertainty level looks huge as well; I wonder what would happen if you increase the number of runs (or change random seeds/initialisations, etc.). \n- In the experimental results, we observe that Nested is generally similar to or worse than TrustedRand (which is only random sampling); can you explain the possible reason? Does this suggest Nested is not a strong baseline (as it is from a previous work)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper proposes a new algorithm, BILBO, specifically tailored for general bilevel optimization problems, where simultaneous optimization at both levels is essential.\n- The authors derive and prove a sublinear cumulative regret bound for the decoupled setting, enhancing the theoretical understanding of bilevel optimization.\n- Experiments are conducted on both synthetic and interesting real-world scenarios, demonstrating the practical applicability of BILBO.\n- The writing is generally clear and well-structured, making the paper accessible to readers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces BILBO, a novel bilevel Bayesian optimization framework that simultaneously optimizes functions at both upper and lower levels. The algorithm employs trusted sets based on confidence bounds at each level. The authors establish a sublinear cumulative regret bound in a decoupled setting and validate BILBO through experiments on simulated and real-world problems, using TrustedRand and Nested policies as baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the algorithm is well-explained, the motivations for certain design choices, particularly the decoupled setting and the specific function query mechanisms ($\\mathcal{F}$ and $h_t$), are not sufficiently discussed. A deeper exploration of these choices would help readers understand the broader implications and advantages of BILBO’s design.\n- The paper’s discussion of related work is somewhat limited. A more detailed comparison with existing bilevel optimization methods, particularly those offering theoretical guarantees or practical performance benchmarks, would provide a clearer picture of BILBO’s relative strengths and weaknesses.\n- The baselines used in the experiments—TrustedRandom and Nested—are not the strongest available. Including comparisons with state-of-the-art bilevel optimization algorithms, such as those from Kieffer et al. (2017) or Dogan & Prestwich (2023), would offer a more rigorous evaluation of BILBO’s performance.\n- No code is provided alongside the paper. This omission hampers the reproducibility of the results and limits the ability of the research community to build upon or validate the findings presented."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Have the authors considered a hybrid setting, where some parts of the bi-level problem are black-box while other parts are white-box with explicit expressions? I guess such problems also arise widely in practice."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problem addressed in this work is significant, and the proposed algorithm introduces a novel approach. Most existing bi-level Bayesian optimization algorithms tackle the problem using a nested framework, where each upper-level query requires separately optimizing the lower-level problem to convergence. In contrast, the proposed algorithm optimizes both levels concurrently, greatly enhancing sample efficiency, as shown in the experiments.\n2. An infeasibility declaration method is also included.\n3. Additionally, the algorithm achieves a theoretical sublinear regret bound, extending the classic bound from single-level to bi-level problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new algorithm designed to solve bi-level constrained Bayesian optimization problems. It demonstrates that the algorithm achieves a sublinear regret bound and provides experimental results on both synthetic and real-world datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This approach involves keeping track of a set of nearly optimal solutions for the lower-level problem. In low-dimensional cases, this can be managed through discretization. However, it likely becomes difficult to scale effectively to even moderately high-dimensional problems."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Are there any problems with extending the proposed method for an infinite input set? I believe that we can extend it by relying on the discretization arguments (as in [1]) under the four-times differentiability assumption of the kernel and the compactness of the input set.\n- Why does the author focus on the cumulative regret in Theorem 3.9? As with the existing works (e.g.,[2,3])\nthe upper bound of the simple regret (or the stopping time upper bound for finding $\\epsilon$-accurate solution) seems to be derived based on the pessimistic estimated solution. I believe that simple regret is a more suitable performance measure from the optimization perspective.\n\n[1] Srinivas, Niranjan, et al. \"Gaussian process optimization in the bandit setting: No regret and experimental design.\" arXiv preprint arXiv:0912.3995 (2009).\n\n[2] Bogunovic, Ilija, et al. \"Adversarially robust optimization with Gaussian processes.\" Advances in neural information processing systems 31 (2018).\n\n[3] Kirschner, Johannes, et al. \"Distributionally robust Bayesian optimization.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Overall, the writing of this paper is clear, well-structured, and easy to follow. \n- Comprehensive proofs are provided, and the proofs seem correct, as far as I can see.\n- As far as I know, the proposed algorithm is the first theoretically guaranteed approach for bilevel Bayesian optimization. \n- The practical behavior of the algorithm is well-described in Figures 1 and 2, which will be helpful for practitioners."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the black-box bilevel optimization problem based on the Gaussian process (GP) model.\nThe author proposes an algorithm that leverages the confidence set of the underlying objective function $f$ under the Bayesian assumption (i.e., $f$ is the sample path of GP.)\nThe proposed algorithm achieves $O(\\sqrt{\\gamma_T T})$ cumulative regret upper bound whose \ninstantaneous regrets are defined as the extension of existing work [1] for the formulation of the bilevel optimization. The numerical experiments are conducted, including two problems motivated by real-world problems.\n\n[1] Nguyen, Quoc Phong, et al. \"Optimistic Bayesian Optimization with Unknown Constraints.\" The Twelfth International Conference on Learning Representations. 2023."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is the lack of novelty. The confidence bound-based approach for \ncomplex structured optimization problems have already been extensively studied \nin the BO field (e.g., robust BO [1,2,3], constrained BO [4,5], and composite objectives [6, 7], etc.).\nSpecifically, it seems that the proposed algorithm can be constructed by combining the extended regret definition of [4] and the sampling for potential solutions based on a confidence set. As far as I see, the non-trivial points of the proposed algorithm construction are the following:\n\n1. The construction of $\\mathcal{P}_t$ with $\\overline{z}$ (Eq. 3.4, 3.5). Specifically, the natural construction of the potential solution set $\\mathcal{P}\\_t$ is based on maximum $\\max l\\_{f,t}(x, z)$; however, we cannot bound $r\\_{f}$ with such construction.\n2. Reassignment of $z_t$ based on Lemma 3.6.\n\nI believe that the other parts of the algorithm construction and analysis are not novel for the reader who studies related BO fields (e.g., robust BO, constrained BO, etc.) since they only use the basic, well-known result of the existing algorithm and are naturally derived. \nFurthermore, uncertainty-based input selection by fixing one input, such as the procedures of point 2, is also commonly leveraged to upper bound the instantaneous regret in the BO field (e.g., [2,3]). \n\nFor the above reasons, I slightly lean my score toward rejection.\n\n[1] Bogunovic, Ilija, et al. \"Adversarially robust optimization with Gaussian processes.\" Advances in neural information processing systems 31 (2018).\n\n[2] Kirschner, Johannes, et al. \"Distributionally robust Bayesian optimization.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\n\n[3] Iwazaki, Shogo, Yu Inatsu, and Ichiro Takeuchi. \"Mean-variance analysis in Bayesian optimization under uncertainty.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2021.\n\n[4] Nguyen, Quoc Phong, et al. 
\"Optimistic Bayesian Optimization with Unknown Constraints.\" The Twelfth International Conference on Learning Representations. 2023.\n\n[5] Xu, Wenjie, et al. \"Constrained efficient global optimization of expensive black-box functions.\" International Conference on Machine Learning. PMLR, 2023.\n\n[6] Li, Zihan, and Jonathan Scarlett. \"Regret bounds for noise-free cascaded kernelized bandits.\" arXiv preprint arXiv:2211.05430 (2022).\n\n[7] Xu, Wenjie, et al. \"Bayesian optimization of expensive nested grey-box functions.\" arXiv preprint arXiv:2306.05150 (2023).\n\n(Minor)\n- I believe that there is further room to enhance the paper's quality. For example:\n - The existing work [1] studies the particular case of the problem setting of this paper. The comparison with [1] will make the position and novelty of this paper clearer.\n - Finiteness assumptions of $\\mathcal{X}$ and $\\mathcal{Z}$ should be explicitly described in Section 2.\n - The references should be appropriately capitalized (e.g., bayesian -> Bayesian).\n - The notation $|\\mathcal{X}|$ usually denotes the cardinality of the set, not the dimension. I recommend using another notation.\n - Definition of the maximum information gain and its explicit upper bound for several commonly used kernels (SE or Mat\\'ern) is beneficial for the reader who are not familiar with theory.\n - The vector graphics figures (such as .pdf, .svg) are favorable.\n - The result of Section.4 seems yet unstable in $5$ trials.\n- (typo) Corollary 3.2: $[u\\_{h,t}(x, z), l\\_{h,t}(x, z)]$ -> $[l\\_{h,t}(x, z), u\\_{h,t}(x, z)]$\n- (typo) Line 107: The definition of the posterior mean misses the prior mean $m_h(\\cdot)$.\n- (typo) Line 108: \\sigma^2 I -> \\sigma^2 \\bm{I}"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bilbo,\ntitle={{BILBO}: {BIL}evel Bayesian Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ye1mxb79lw},\nnote={under review}\n}"
},
"abstract": {
"value": "Bilevel optimization, characterized by a two-level hierarchical optimization structure, is prevalent in real-world problems but poses significant challenges, especially in noisy, constrained, and derivative-free settings. To tackle these challenges, we present a novel algorithm for BILevel Bayesian Optimization (BILBO) that optimizes both upper- and lower-level problems jointly in a sample-efficient manner by using confidence bounds to construct trusted sets of feasible and lower-level optimal solutions. We show that sampling from our trusted sets guarantees points with instantaneous regret bounds. Moreover, BILBO selects only one function query per iteration, facilitating its use in decoupled settings where upper- and lower-level function evaluations may come from different simulators or experiments. We also show that this function query selection strategy leads to an instantaneous regret bound for the query point. The performance of BILBO is theoretically guaranteed with a sublinear regret bound and is empirically evaluated on several synthetic and real-world problems."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"bilevel",
"Bayesian optimization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/83ffc7ed4a89f7c18c147755d24cff788136f272.pdf"
},
"presentation": null,
"primary_area": {
"value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "BILBO: BILevel Bayesian Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yeEWZ8qvlS | Learning Interpretable and Influential Directions with Signal Vectors and Uncertainty Region Alignment | main | Active | latent space;interpretability;concepts;directions;signals;patterns;distractors | interpretability and explainable AI | 3;5;5;6;6 | 4;3;3;2;2 | 2;2;2;4;3 | 2;2;2;3;3 | 2;2;2;2;3 | 5 | 2.8 | 2.6 | 2.4 | 2.2 | -0.9759 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Is the proposed method applicable to other types of data beyond images, such as text or time series data? If so, what modifications, if any, would be necessary?\n\nHow does the method scale with larger models and datasets? Are there computational bottlenecks, and if so, how can they be mitigated?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper presents a novel unsupervised framework for discovering latent space directions that are both interpretable and influential. This is an advancement over prior work that often requires annotated datasets or predefined concepts.\n\nThe method is theoretically grounded, extending previous models to a multi-label setting and introducing new loss functions such as the Uncertainty Region Alignment loss. The experimental results on synthetic and real-world data support the efficacy of the approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an unsupervised method for discovering interpretable and influential directions in the latent space of deep neural networks, specifically for image classifiers. The proposed approach leverages signal vectors and uncertainty region alignment to identify latent space directions that significantly influence model predictions while maintaining high interpretability. Unlike previous methods that require annotated concept datasets and predefined concepts, this method does not rely on prior knowledge and instead learns from the inherent structure of the feature space. The authors validate their approach on both synthetic data and real-world benchmarks, demonstrating that the discovered directions effectively fulfill critical debugging criteria and outperform supervised methods in certain aspects."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the experiments are promising, the evaluation is limited to a synthetic dataset and a single real-world dataset (Places365 with ResNet18). A broader range of datasets and models would strengthen the claims and demonstrate the generalizability of the method.\n\nThe paper could provide a more thorough comparison with existing methods, including additional baselines in both interpretability and influence metrics. This would help in positioning the proposed method within the existing literature more effectively."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Are the learned directions always labeled by NetDissect? I would appreciate some clarity on how NetDissect is being applied to the proposed method and its necessity in generating explanations. If so, why is the proposed method preferable to simply using NetDissect on its own?\n\n- Please see above weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The problem of finding concepts or latent directions that are highly influential, which I take to mean causal, is an interesting and very active research area at the moment. Being able to use interpretability tools to exert influence over a model would be of high utility for future research and real-world applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method to learn interpretable and influential directions in a trained model’s latent space, where these directions are modeled as linear “signal” classifiers. The authors propose a method called Uncertainty Region Alignment that aligns the subspaces where the model is uncertain and the subspaces where the learned “signal” classifiers are also uncertain, with the claim that this enhances the interpretability and influence of the learned directions. They present results from a small synthetic setting and from a slightly larger, more realistic setting on a single model and dataset (Resnet18, Places265), where they benchmark against IBD and find that their method produces better concept detectors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- As it stands, it was difficult for me to understand the contribution of this work with respect to prior literature in this field. Further discussion of how this paper relates to the wide body of recent literature on generating concept-based explanations, including follow-up works to ACE/ICE/IBD such as CRAFT [1], work on Concept Bottleneck Model literature [2, 3, 4, 5], and recent work on sparse autoencoders and dictionary learning, would help clarify the gap in existing literature that this paper is trying to fill.\n- Furthermore, it seems like this paper is a close follow-up to the papers by Doumanoglou et al. (2023; 2024) cited in the paper, in that the majority of the method is the same with some minor adjustments. Can you provide further discussion of how this method and its contributions differs from the methods in those two papers?\n- The writing and presentation of this paper are not very clear to me - I struggled to understand the types of explanations given by the proposed method, and how those explanations could be used to gain a “deeper understanding of the model’s strategy, fostering trust, and enabling model correction and improvement” as stated by the authors in the abstract. Clarifying the intent of the proposed method and how the experiments validate its practical applications with respect to the stated goals from the abstract and introduction would significantly improve the paper.\n- Given that this paper is an explainability/interpretability paper, I believe it could strongly benefit from providing some example explanations yielded by the paper, and a more in-depth example or case study of how the explanations can be applied to the aforementioned end goals of model correction.\n\n[1] Fel, Thomas, et al. \"Craft: Concept recursive activation factorization for explainability.\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023.\n\n[2] Yuksekgonul, Mert, Maggie Wang, and James Zou. 
\"Post-hoc concept bottleneck models.\" *arXiv preprint arXiv:2205.15480* (2022).\n\n[3] Bhalla, Usha, et al. \"Interpreting clip with sparse linear concept embeddings (splice).\" *arXiv preprint arXiv:2402.10376* (2024).\n\n[4] Oikarinen, Tuomas, et al. \"Label-free concept bottleneck models.\" *arXiv preprint arXiv:2304.06129* (2023).\n\n[5] Gandelsman, Yossi, Alexei A. Efros, and Jacob Steinhardt. \"Interpreting CLIP's Image Representation via Text-Based Decomposition.\" *arXiv preprint arXiv:2310.05916* (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The \"M\" term used in the maximum margin loss does not appear in any other loss equations (except perhaps in equation 5, where it seems to be a different \"M\"). Can you please clarify how the margin loss \"M\" is calculated?\n\n- Line 451 mentions \"Significant Direction Count (SDC) and Significant Class-Direction Pairs (SCDP). SDC represents the number of learned signal vectors that significantly influence at least one of the model’s classes, while SCDP counts the total number of class-direction pairs in which the learned signal vector significantly affects the class.\" -> what is \"significantly influenced\" here?\n\n- Conceptual clarification: Is the assumed data generation process in equation 1, identifiable? For example, if s_1 and s_2 are two signal directions, then is s_1 + s_2 also a signal direction? \n\n- Conceptual clarification: given a candidate direction \"v\", what is a simple test to determine whether it is a signal or a distractor direction?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ The topic of extracting a set of contributing semantic concepts underlying a representation is an important topic, that can potentially help efforts in debugging and controlling model behaviour. \n\n+ The proposed method is somewhat empirically successful, in that it is able to recover ground truth signals in synthetic settings; and recover signal vectors that score high on numerical interpretability metrics."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to automatically learn the set of concept vectors underlying a given model representation in an unsupervised manner. Specifically, given a model representation and an unsupervised image dataset, the method aims to extract a set of \"signal\" vectors that explain the representation, and are semantically meaningful. Experimental results show that (1) in synthetic settings where the ground truth signals are known, the proposed method recovers the ground truth; and (2) when applied for an image classification setting, the extract signal vectors score high on numerical interpretability metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper is **hard to read**. At times, the issue is missing details, and at other times, it makes lots of references to text in the appendix and/or other papers. For instance, the method defines six heuristic loss functions, however it is unclear how they are combined and how the regularization parameters are set. The exact definitions and motivations for these are sometimes buried in the appendix. In addition, several terms defined in individual papers (Doumanoglou et al. (2023), Pfau et al. (2020), and Pahde et al. (2024)) are used without defining or explaining them. In particular, the contributions in these listed individual papers (S^1;S^2 interpretability metrics, RCAV, and Pattern CAV) are unexplained and used without context in the paper. I recommend that the authors rewrite these sections by clearly explaining the method, including being explicit about the exact loss functions used and the optimization problem being solved. In addition, it would help if terminology from other papers is first clearly defined before being used. \n\n**Missing Comparison with PCA / ICA / Dictionary learning**: One of the goals of this paper is to recover signal and distractor vectors that combine additively to generate the underlying inputs. However, this is precisely the setting in which classical methods such as PCA / ICA / dictionary learning are applicable (depending on the exact underlying assumptions re: orthogonality, etc), and the missing discussion and comparison with these methods in the context of this paper is a significant omission. An experiment comparing the proposed method with such classical baselines would help build the case for the proposed approach. 
In addition, it would be helpful to point out what is conceptually missing in such classical approaches that necessitates the present one.\n\nIt is **unclear whether recovered signals are interpretable**, in the sense that an unambiguous assignment exists from the signal vector to a semantic concept. In the interpretability literature, including network dissection (Bau et al., 2017), it has been discussed that not all nodes are interpretable in the sense that an assignment to a semantic concept is not always possible. If the authors can present visualizations of the set of recovered signal vectors and discuss what makes them interpretable, this would help claim the interpretability benefits of this approach. \n\n**Minor**: The paper claims that its method overcomes a common weakness of concept-based methods, i.e., the requirement of annotated concept datasets (line 88). However, even the proposed method uses the Broyden concept dataset to annotate concept labels (line 412), and without this, it is unclear how the method recovers interpretable signals."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The attempt to discover influential directions in a model's latent space without relying on labeled data is good. If it works, it will make the method more scalable compared to other techniques that require manual annotation or predefined concepts.\n\n2. The method has been tested not just on synthetic datasets but also on real-world tasks using state-of-the-art models like ResNet18. \n\n3. The writing can be improved by incorporating more motivations in introducing each loss."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method to identify interpretable and influential directions in latent spaces of deep learning models. Since the latent space directions play a crucial role in understanding and correcting deep models, the goal here is to identify directions in the latent space that influence model predictions while being interpretable. Traditional approaches for finding these directions rely on supervised methods and predefined concepts. This paper introduces an unsupervised framework that combines signal vectors with uncertainty region alignment to discover interpretable latent directions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I commend the authors for developing an unsupervised method to identify interpretable directions, the approach faces a key challenge. Supervised methods generally ensure that these directions align closely with actual concepts, providing a clearer sense of interpretability. But unsupervised methods can not guarantee that. \n\nThe paper lacks experiments that measure cosine similarity with \"ground truth concept vectors\" on real datasets, such as Places365. This raises doubts about the authenticity of the learned vectors, as evidenced by the significant performance gap between unsupervised and supervised methods in Table 4. \n\nPutting together, if the learned \"concept vector\" doesn’t reliably represent the true concept, the claim of interpretability becomes less compelling."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses W1 to W4."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The presented method is very well theoretically motivated and is a valuable extension to existing (supervised) discovery methods like CAV.\n- The experiments have been carried out accurately and with care. However, I would classify both experiments more as proofs of concept than real-world experiments, as the Places365 dataset is very limited in complexity. That said, this is totally valid in my opinion, as the focus of the paper is the theoretical contribution.\n- The ablation studies on the loss components are extensive and interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an unsupervised method for discovering interpretable and influential latent space directions in deep-learning models using signal vectors and uncertainty region alignment. The approach is validated on synthetic and proof-of-concept real-world data, showing its effectiveness in recovering key signal directions and improving model debugging."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1:** The paper is sometimes hard to follow and understand. While in the methods section, the equations help to understand what’s described in the text, the different experiments are sometimes hard to follow. Especially for a reader unfamiliar with latent space-based concept methods for interpretability, the different methods and the interpretation of their results can be confusing. For example, the experiment setup in 4.1 and how it connects to section 2 could be better explained. The experiment setup in 4.2 refers to several methods from related work (e.g. Network Dissection, Broden dataset, RCAV (which was never introduced)), which readers are possibly not familiar with. \n\nTo improve clarity, I suggest adding an overview of key concepts at the start of the experiments section, explicitly explaining how Section 4.1 connects to Section 2, and providing brief explanations for unfamiliar methods (including but not limited to Network Dissection, the Broden dataset, and RCAV) when they are first introduced.\n\n**W2:** The authors comment in Line 297 that experiments with the raw inclusion of the three loss terms do not converge, but do not further elaborate on how stable the final results are with their adaptations, i.e. if the learned directions are robust. \n\nPlease provide quantitative measures of the robustness of the learned directions, e.g. through the standard deviation or confidence intervals over several differently seeded runs for the tables in section 4 (if applicable).\n\n**W3:** Figures 1 and 2 are very helpful in communicating the concepts but are never referred to in the main manuscript. The paper would benefit from a general overview figure in the same style (in which these two figures would be a part), similar to a graphical abstract, to communicate the different steps and concepts visually.\n\nPlease provide an overview figure on how the presented method is composed (and maybe why certain selections were made) in section 3. If it would make sense for you, also integrate Figure 2 into this overview figure.\n\n**W4:** The paper lacks a published code repository, making it difficult for others to easily apply the method to their own models.\n\nPlease provide a link to an anonymous repository with a (simple) implementation of the method. You can use \"Anonymous GitHub\" or GitLab with an anonymous email. \n\nI am open to raising my score if the identified weaknesses are either adequately addressed through revisions to the manuscript or convincingly argued to be non-issues."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The proposed unsupervised method identifies a pair of latent space directions (filter and signal) with the first being able to answer questions of interpretability and the second to answer questions of concept influence on model's predictions"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Interpretable and Influential Directions with Signal Vectors and Uncertainty Region Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yeEWZ8qvlS},\nnote={under review}\n}"
},
"abstract": {
"value": "Latent space directions have played a key role in understanding, debugging, and fixing deep learning models. Concepts are often encoded in distinct feature space directions, and evaluating impact of these directions on the model's predictions, highlights their importance in the decision-making process. Additionally, recent studies have shown that penalizing directions associated with spurious artifacts during training can force models to unlearn features irrelevant to their prediction task. Identifying these directions, therefore, provides numerous benefits, including a deeper understanding of the model's strategy, fostering trust, and enabling model correction and improvement. We introduce a novel unsupervised approach utilizing signal vectors and uncertainty region alignment to discover latent space directions that meet two key debugging criteria: significant influence on model predictions and high level of interpretability. To our knowledge, this method is the first of its kind to uncover such directions, leveraging the inherent structure of the feature space and the knowledge encoded in the deep network. We validate our approach using both synthetic and real-world benchmarks, demonstrating that the discovered directions effectively fulfill the critical debugging criteria."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"latent space",
"interpretability",
"concepts",
"directions",
"signals",
"patterns",
"distractors"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1bac30e18f7715bcb1333484a7bedb4d4b6efd9c.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learning Interpretable and Influential Directions with Signal Vectors and Uncertainty Region Alignment"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yeeIGM3N6w | Retraining-Free Merging of Sparse Mixture-of-Experts via Hierarchical Clustering | main | Active | Sparse Mixture-of-Experts;Merging;Compression | other topics in machine learning (i.e., none of the above) | 5;5;6;6 | 3;4;4;3 | 3;3;3;3 | 2;3;3;3 | 3;3;3;3 | 5.5 | 3.5 | 3 | 2.75 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am wondering about the results or analysis for more extreme expert reduction scenarios, such as reducing to 25% or 10% of the original experts. This would give insight into how the method performs under more aggressive compression."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. An output-based similarity metric for expert clustering is proposed, which is more effective than those of previous works.\n2. The experimental results compared with the previous methods are very good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a retraining-free expert merging approach which employs a hierarchical clustering strategy. The authors claim that using expert outputs as the similarity metric for clustering is more effective than using the router logits or weights employed by prior works. The experimental results reveal that the proposed approach achieves greater performance improvements than existing methods across various benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is a lack of theoretical analysis on the performance of expert clustering using different similarity metrics."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- HC-SMoE offers a practical solution for reducing parameters without the need for retraining, simplifying the implementation process.\n\n- The task-agnostic nature of HC-SMoE allows for broader applicability across different language tasks, enhancing its versatility.\n\n- The comprehensive experiments conducted on eight zero-shot language tasks provide strong empirical evidence of HC-SMoE's effectiveness in large-scale models like Qwen and Mixtral."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies Sparse Mixture-of-Experts (SMoE) models, which improve large language model performance without significantly increasing inference costs by activating only a subset of parameters. However, their high memory requirements hinder deployment. To address this, the authors propose Hierarchical Clustering for Sparsely activated Mixture of Experts (HC-SMoE), a task-agnostic framework that reduces SMoE parameters without retraining."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While this work demonstrates competitive accuracy, it lacks a comprehensive assessment of efficiency metrics, such as speedup and memory usage. Given that efficiency is a key contribution, this aspect of the experimental results is essential.\n\n- A theoretical analysis of the effectiveness of expert merging and HC-SMoE would enhance the understanding of the method's performance.\n\n- Although HC-SMoE is validated on eight zero-shot language tasks, its effectiveness may vary in more complex tasks or domains, potentially limiting its broader applicability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to my question above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The proposed method is simple but effective.\n* The paper is very easy to follow.\n* The experiments are comprehensive and the results are very promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new expert merging framework, named Hierarchical Clustering for Sparsely activated Mixture of Experts (HC-SMoE), to reduce SMoE model parameters without retraining. The proposed method is simple but effective, and the experiments demonstrate its efficacy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The motivation for using \"hierarchical\" clustering is not clear to me. I cannot intuitively see why hierarchical clustering is better than simple K-means clustering, although the results confirmed that K-means clustering is less effective. Besides, the paper proposed to use \"hard\" hierarchical clustering, and I am wondering if it would be more effective to use \"soft\" hierarchical clustering or simply \"soft\" clustering without hierarchies. \n* The choice of the calibration dataset. I did not see any ablation study about the choice of the calibration dataset, and I think the performance of the proposed method should depend highly on the calibration dataset. If the calibration dataset is not comprehensive enough, e.g., not covering enough domain-specific data, the clustering may not be very informative, which may lead to poor performance. For example, if we want the LLM to perform well on law-related or medical-related tasks, can you still rely on the same calibration dataset used in the experiments?\n* Some minor issues:\n * In Fig. 1, why did you not compare the methods on the 14B model?\n * Section 3.2.1 presents the similarity metric but contains a lot of discussion about related work.\n * In line 299/300, is alpha_i fixed or not? If it is fixed, will it also suffer from the issue that you mentioned in lines 199-203 about the frequency-based method?\n * In Table 4, the best performance on 'ARC-c' should be the Average linkage using the Weight setting, right?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) Pruning experts in MoE models can indeed reduce the difficulty of deployment.\n2) The paper is easy to follow, and the ablation study is comprehensive.\n3) The experimental results on Qwen and Mixtral are convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Hierarchical Clustering for Sparsely Activated Mixture of Experts (HC-SMoE), a task-agnostic framework for merging experts within an SMoE model. HC-SMoE aims to reduce the model's parameters without requiring retraining. Experimental results on a series of benchmarks show its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) O-prune [1] requires enumerating all possible combinations of experts, resulting in significant time overhead. I would like to know how HC-SMoE compares to other approaches in terms of runtime and resource consumption.\n2) O-prune [1] also conducts experiments on domain-specific tasks (e.g., GSM8K, Math). I am interested in the performance of HC-SMoE on these datasets.\n\n[1] Lu, Xudong et al. “Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models.” Annual Meeting of the Association for Computational Linguistics (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Our method (HC-SMoE) offers an efficient method for merging experts of large Sparse Activated Mixture of Experts (SMoE) models without retraining under task-agnostic settings."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024retrainingfree,\ntitle={Retraining-Free Merging of Sparse Mixture-of-Experts via Hierarchical Clustering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yeeIGM3N6w},\nnote={under review}\n}"
},
"abstract": {
"value": "Sparse Mixture-of-Experts (SMoE) models represent a significant breakthrough in large language model development. These models enable performance improvements without a proportional increase in inference costs. By selectively activating a small set of parameters during task execution, SMoEs enhance model capacity. However, their deployment remains challenging due to the substantial memory footprint required to accommodate the growing number of experts. This constraint renders them less feasible in environments with limited hardware resources. To address this challenge, we propose Hierarchical Clustering for Sparsely activated Mixture of Experts (HC-SMoE), a task-agnostic expert merging framework that reduces SMoE model parameters without retraining. Unlike previous methods, HC-SMoE employs hierarchical clustering based on expert outputs. This approach ensures that the merging process remains unaffected by routing decisions. The output-based clustering strategy captures functional similarities between experts, offering an adaptable solution for models with numerous experts. We validate our approach through extensive experiments on eight zero-shot language tasks and demonstrate its effectiveness in large-scale SMoE models such as Qwen and Mixtral. Our comprehensive results demonstrate that HC-SMoE consistently achieves strong performance, which highlights its potential for real-world deployment."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Sparse Mixture-of-Experts",
"Merging",
"Compression"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a23868b6c05aeef8a7957e0731f262ca0a2934ce.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Retraining-Free Merging of Sparse Mixture-of-Experts via Hierarchical Clustering"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yf30Al57nu | CodeLutra: Boosting LLM Code Generation via Preference-Guided Refinement | main | Active | large language models; preference learning; code generation | foundation or frontier models, including LLMs | 3;3;5;6;8 | 4;4;4;3;4 | 2;2;4;3;3 | 2;1;3;3;3 | 3;2;4;3;3 | 5 | 3.8 | 2.8 | 2.4 | 3 | -0.263523 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How does CodeLutra perform on longer programs, e.g., competitive coding? (This is nice to have; the material in the paper is enough for publication.)\n- What is CodeLutra's performance vs. program length on the current datasets?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- CodeLutra is a simple yet effective method, with clear articulation of how it differs from related work.\n\n- Impressive results: with only 500 samples, CodeLutra achieves GPT-4-level performance on a base model with just 8 billion parameters. For the Spider benchmark, it improves base model performance from 59.3 to 74.4 in just four iterations, matching GPT-4’s 74.4. On BIRD, it increases performance from 22.3 to 42.6 in four iterations, approaching GPT-4’s 46.3.\n\n- Comprehensive evaluation, covering three coding benchmarks (Spider, BIRD, and DS-1000) and three models (Llama-3-8B, Gemma-7B, and StarCoder-7B), demonstrating the approach's generalizability.\n\n- Strong ablations that address key questions: (i) the dual loss significantly boosts performance, raising it from 17.2 (DPO) to 76.6 on Spider; (ii) negative samples are crucial, as performance increases from 20 to over 40 with their inclusion, while positive samples alone yield minimal improvement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents CodeLutra, a supervised fine-tuning (SFT) approach that demonstrates significant improvements on coding tasks. Specifically, CodeLutra achieves GPT-4-level performance by fine-tuning an open-source 8-billion-parameter model using as few as 500 samples (with ground-truth solutions). The key idea behind CodeLutra is to use both positive and negative examples to fine-tune the model, creating a hybrid of SFT and DPO techniques. CodeLutra assumes a ground truth to generate these examples: if a code sample produces the same outputs as the ground truth, it is a positive example; otherwise, it is a negative example."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The current evaluation focuses on SQL queries and data science problems, which are relatively short (from a few lines of code to several tens of lines of code). It would be interesting to see how this approach generalizes to longer programs.\n- Limited exploration of scenarios without ground truth. In such cases, CodeLutra relies on syntactic error detection, but the results are, as expected, less impressive."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- In line 150, \"if the model only predicts wrongly in the final token in a code snippet, the overall probability P (y|x) in the Equation 1 might still remain high as the preceding tokens are correct\". While the hypothesis makes sense, do you really observe this situation in real LLMs and datasets? I doubt it.\n- Equation 6 is studied in previous literature, \"Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer\", with theoretical support, but is not cited. This also limits the novelty contribution (at least for this part).\n- I'm confused by the experimental setup. What is the training dataset? It seems the experiment is using the test dataset for training. Could you clarify?\n- Could you explain the setting for SFT in Table 1? One baseline is an SFT model that uses only the ground-truth training solutions, or one that uses the synthetically generated correct solutions. Which one are you using?\n- I don't think Table 2 uses the right setting: 17.2 and 12.4 are extremely low for a DPO-only method. Normally we do DPO training on top of an SFT model, so the right setup should be training on top of an SFT model. What is the gap between SFT-then-DPO training and the SFT-regularized preference training?\n- \"500 samples\" might mean 500 prompts or problems. What is the overall size of the generated samples?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is well-written. The proposed method of iteratively training on correct and failed generations makes sense. Experiments show good improvements on benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method that iteratively generates successful and failed code and trains with preference optimization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Some of the experimental setup is not clear enough, such as the training data, the SFT setting, and the details of the synthetically generated dataset. One of the contributions, the combined DPO and SFT loss, has been studied in previous literature. More experiments might be needed to compare SFT-then-DPO training with the DPO+SFT loss."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "NA"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces CODELUTRA, a framework designed to enhance the performance of LLMs in code generation tasks. However, the method is almost the same as an existing method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method closely resembles that presented in [1]. Applying the same approach to a different scenario does not warrant publication, especially since this new scenario is simpler and benefits from execution feedback. \n\n[1] Iterative Reasoning Preference Optimization. https://arxiv.org/abs/2404.19733"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* Are the two answers in Figure 5 flipped? Given that Currency is in the table customers, I feel the first answer is correct, and the second is wrong."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Comprehensive evaluation, ablation, and analysis support the effectiveness of the proposed method. In particular, the necessity of negative training samples and of SFT loss are both well studied.\n* The paper is well written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes CODELUTRA, a preference-guided training framework to let code LLMs iteratively refine themselves based on execution signals from their own generations. Specifically, given a task-specific training set, at each iteration, the model generates answers which are then evaluated by unit tests. Each correct answer is paired with an incorrect answer to form a preference pair. The preference dataset is then used for DPO training. To address the issue that DPO may reduce the generation probability of both correct and incorrect answers, supervised finetuning loss is added to DPO loss for joint training. Experiments show that CODELUTRA significantly improves performance on SQL and data science tasks, and is much more effective than DPO alone."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The technical novelty of this paper is somewhat limited. L233-246 claimed two major points of novelty: refinement from execution feedback and dual loss mechanism. First, using feedbacks from program execution to iteratively refine code LLMs is a direction that has been extensively studied (e.g., CodeRL [1], and NExT [2]). However, these works are not discussed in the related work section. Second, the dual loss objective (i.e. adding SFT loss in DPO training) was proposed in [3], known as RPO, which is not cited.\n* I find DS-1000 Pass@1 results in Table 1 are inconsistent with the public leaderboard (https://ds1000-code-gen.github.io/model_DS1000.html). In particular, pass@1 of Codestral-22B and Llama-3-70B-Chat is 51.2 and 48.6 respectively in the leaderboard, but 35.8 and 36.4 respectively as reported in the paper.\n\n\nReference:\n\n[1] Le, Hung, et al. \"Coderl: Mastering code generation through pretrained models and deep reinforcement learning.\" Advances in Neural Information Processing Systems 35 (2022): 21314-21328.\n\n[2] Ni, Ansong, et al. \"NExT: Teaching Large Language Models to Reason about Code Execution.\" Forty-first International Conference on Machine Learning.\n\n[3] Liu, Zhihan, et al. \"Provably mitigating overoptimization in rlhf: Your sft loss is implicitly an adversarial regularizer.\" arXiv preprint arXiv:2405.16436 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. See weakness 1, 2\n2. I am curious that if this method can lead to a model that is generalizable. For example, the authors split DS-1000 for training and evaluation. I wonder how the resulting model would perform on other similar datasets, e.g. MBPP?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well-organized and easy to follow.\n2. The proposed method can lead to a fine-tuned LLAMA3-8B model which has comparable performance to GPT-4.\n3. The authors conduct comprehensive ablation studies in which the effect of every component involved in their method is clearly demonstrated.\n4. The method can still have good performance with limited annotations or training samples."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors proposed a new training framework called CODELUTRA which aims to fine-tune a small CodeLM to match or surpass closed-source LLMs like GPT-4. CODELUTRA adopts an iterative method to learn by comparing the correct generation and the failed generation. At each iteration, CODELUTRA constructs the preference dataset by classifying the code generated by the model from the last iteration and employs a dual-loss function that combines DPO with SFT for training. The authors show that their method can achieve a performance comparable to GPT-4 in the data query and data science tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Line 230 states that \"The refinement process continues until the improvement between consecutive iteration becomes marginal\". However, in the experiments, the authors seem to fix the iteration number to 4. In practice, how do you decide if the improvement between consecutive iterations is marginal?\n\n2. The baseline setup is not clear enough and may not be comprehensive.\na) For closed-source LLMs, it is unknown what prompting method is used. It is also not clearly stated what fine-tuning method is used. From the Appendix, I infer that the LoRA is used in CODELUTRA but is it also used in the fine-tuning baseline?\nb) In Table 1, since LLAMA-3 is used as the base model for CODELUTRA, the authors should apply more previous fine-tuning methods in the same setting and compare with them instead of comparing with different open-source CodeLLMs. For example, the related work section mentions other fine-tuning methods (Line 520), e.g. Self-debug and Codefort. The authors should apply them to fine-tune LLAMA3 and compare the results.\n\n3. Paper presentations can be further improved. Specifically, a) Line 142 $f$ is not defined. b) The notations in the legend of Figure 2 are not defined."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024codelutra,\ntitle={CodeLutra: Boosting {LLM} Code Generation via Preference-Guided Refinement},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yf30Al57nu},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) have significantly advanced code generation but often require substantial resources and tend to over-generalize, limiting their efficiency for specific tasks. Fine-tuning smaller, open-source LLMs presents a viable alternative; however, it typically lags behind cutting-edge models due to supervised fine-tuning's reliance solely on correct code examples, which restricts the model's ability to learn from its own mistakes and adapt to diverse programming challenges. To bridge this gap, we introduce CodeLutra, a novel framework that enhances low-performing LLMs by leveraging both successful and failed code generation attempts. Unlike conventional fine-tuning, CodeLutra employs an iterative preference learning mechanism to compare correct and incorrect solutions as well as maximize the likelihood of correct codes. Through continuous iterative refinement, CodeLutra enables smaller LLMs to match or surpass GPT-4’s performance in various code generation tasks without relying on vast external datasets or larger auxiliary models. On a challenging data analysis task, using just 500 samples improved Llama-3-8B's accuracy from 28.2\\% to 48.6\\%, approaching GPT-4's performance. These results highlight CodeLutra's potential to close the gap between open-source and closed-source models, making it a promising approach in the field of code generation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models; preference learning; code generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/48d18bdcda59e138b9ceb3f05bbc37c0cc2b29aa.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CodeLutra: Boosting LLM Code Generation via Preference-Guided Refinement"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yfW1x7uBS5 | Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI | main | Active | security;adversarial;style mimicry;generative ai | alignment, fairness, safety, privacy, and societal considerations | 3;8;8;8 | 4;4;4;4 | 2;3;3;4 | 1;3;3;3 | 4;3;4;3 | 6.75 | 4 | 3 | 2.5 | 3.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The paper is well-written and easy to follow.\n- It works on an important problem and provides critical insights: all protection methods today cannot protect artworks from diffusion-based mimicry. Though works like Glaze have been widely accepted by artists, they actually perform bad.\n- The proposed robust mimicry methods are simple but effective.\n- The authors did extensive experiments to support the claims."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors critically revisit the current efforts towards protecting artists' work from diffusion model mimicry. The author proposes that most protection nowadays cannot really protect artists, since the protection can by easily bypassed using some tricks of purifications e.g. Gaussian Noising, Diff-Pure and Up-scalers. Extensive experiments are done to support the claim in this paper. Overall, the paper is well-written, easy to follow and the proposed method is simple but effective."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Clarification:\n- The title of this paper is 'ADVERSARIAL PERTURBATIONS CANNOT RELIABLY PROTECT ARTISTS FROM GENERATIVE AI'. While style-based mimicry is one part of diffusion-based mimicry, there are more basic mimicry methods, e.g. inpainting and style transfer by diffusion models, which are also tested in previous protection papers, e.g. Mist. Fine-tuning a diffusion model seems to have a more complicated mechanism compared with image-to-image applications of diffusion models. \n- I wonder whether the proposed method also works for inpainting/image-to-image SDEdit; if it works, the proposed method becomes more general.\n\nMethods:\n- I noticed that the perturbation used in this paper is quite small; if the noise is scaled up, will the purification be worse?\n- While Glaze and Mist are popular, there are many other protection methods that can be studied to get a safer conclusion, e.g. MetaCloak [3] and SDS [4].\n\nRelated Papers:\n[1, 2] are highly related to this paper: [1] also finds that the current attacks are vulnerable to purifications, and [2] proposes that latent diffusion models can be easily purified by pixel-space diffusion models. \n\n[1] Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?\n\n[2] Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think.\n\n[3] MetaCloak: Preventing Unauthorized Subject-driven Text-to-image Diffusion-based Synthesis via Meta-learning\n\n[4] Toward effective protection against diffusion-based mimicry through score distillation"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. It would be interesting to see the results on Glaze 2.1 (the newest version) for completeness’s sake. Does this induce greater quality degradation compared to its previous versions?\n\n2. The “Best-of-4” method is not really a practical method since it depends on the human ratings and the same human ratings are used to evaluate the method. For fair comparison, the raters need to be split into validation and test raters, and the method selection should utilize only the validation raters and be evaluated by the test raters, with averaging across splits via cross-validation."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. While it was anticipated that prominent style protection techniques like Glaze might have weaknesses, the extent of their fragility is striking. The paper demonstrates that these methods fail even against rudimentary attacks, with Glaze unable to withstand a mere change in the fine-tuning script. This revelation underscores a concerning lack of “security mindset” within the style protection research community, which is particularly alarming given the recognition Glaze has received, including multiple awards from USENIX Security.\n\n2. The MTurk study is well constructed and the authors have taken pains to ensure that their work adheres to commonly accepted ethical standards."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines the effectiveness of existing style protection methods (such as Glaze, Mist, and Anti-DreamBooth) against both black-box and white-box attacks. The authors demonstrate that these protections can be easily circumvented using simple techniques like DiffPure, upscaling, and adding Gaussian noise, as well as more sophisticated methods like IMPRESS. The findings are supported by an MTurk study, where participants were asked to distinguish between images generated using style mimicry on unprotected vs protected artworks. The paper highlights the inherent asymmetry between attackers (style mimickers) and defenders (artists), cautioning against relying on adversarial perturbations for style protection due to the potential false sense of security they may provide."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The work has limited novelty since it was predictable that style protection would be vulnerable to DiffPure-like purification methods. It would be good if the purification methods evaluated in the paper could be packaged into a standard baseline, say on the lines of AutoAttack.\n\n2. In my opinion, the paper overstates the general case against style protection techniques based on adversarial perturbation. The presented argument makes an assumption that artists have the choice to release their artworks on the internet, and they may choose to withhold their artworks if they believe that AI models may be trained on their artworks. However, digital artists are extremely dependent on the internet to grow their customer pool and advertise their works, so this is likely not a feasible option. The appropriate counterfactual to artists using Glaze-like methods would be not using any protections at all. Also, since these methods seem to be improving against simple attacks at least, it may be enough to use them as a deterrent rather than as fool-proof security.\n\n\nI struggled to decide whether to rate this paper as borderline accept or clear accept due to the stated weaknesses. Ultimately, I decided to rate this paper as clear accept as the argument may indeed be persuasive to a small minority of artists who may decide to not publish any of their works if there is no secure style protection mechanism."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weaknesses above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is clear and easy to follow.\n2. The topic is interesting and important."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a user study evaluating pre-processing techniques against current adversarial perturbations, finding that such methods can significantly reduce their effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main weakness is the unreliable evaluation and whether the experimental results reflect real-world cases.\n\n1. The study heavily relies on MTurk workers, who may not be suitable for this task. They might lack the expertise to classify artistic styles or assess whether artwork quality has been degraded. Artists themselves, such as those mentioned in [1], should be included in the main evaluation. Additionally, these workers may not represent potential consumers of artists' works, contrary to what the authors propose (L 338). A more thorough study should report the workers' relevance to the art domain (e.g., how often they purchase art or visit museums).\n2. The effectiveness of the fine-tuning procedure is unclear. As shown in Fig. 3 (especially the bottom row) and Fig. 5, the generated images, even without protection, do not closely resemble the style of the training images. In contrast, previous work like Glaze [1] uses a fine-tuning setting with much stronger resemblance between generated images and training data (see Fig. 2 in [1]). The focus should be on whether adversarial perturbations effectively defend against mimicry once it has been successfully achieved. Even in Appendix K, where the limitations of using MTurk for evaluation are acknowledged, fewer than 50% of cases were rated as better or equal to “successful” mimicry.\n \n In an extreme case, if fine-tuning involved only one step, adversarial perturbations would likely fail to defend against mimicry. The core issue seems to be underestimating how closely the model needs to fit training images, including adversarial perturbations, to learn the style, which may result in an underestimation of their impact.\n \n3. Even if the above weaknesses are overlooked, the paper provides no new insights on solving the problem of mimicry. Both pre-processing and adversarial perturbations degrade image quality, raising the question of whether removing adversarial perturbations is worth the cost, given that pre-processing might degrade the image quality in ways that hinder style recognition.\n4. The paper overlooks broader ethical implications, such as the removal of adversarial perturbations used as a watermark. Similar to traditional watermarks (such as a simple icon on the corner) that can be removed but whose removal is illegal in many countries, stronger watermarks complicate removal and leave evidence. A discussion on these ethical issues would contribute more meaningfully to the community.\n5. The proposed methods lack novelty, largely offering improved hyper-parameter tuning of existing approaches.\n\n[1] Shan S, Cryan J, Wenger E, et al. Glaze: Protecting artists from style mimicry by {Text-to-Image} models[C]//32nd USENIX Security Symposium (USENIX Security 23). 2023: 2187-2204."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Overall, the technical details in the paper are clearly described, so I have no questions about the technical aspects. However, I do have some open questions for the authors that I would like to discuss:\n\n+ Q1: Practical and scalable evaluations? The core claim of this paper is to demonstrate the ineffectiveness of existing art mimicry protection methods. For this purpose, using a user study as an evaluation method is sufficient. However, for future papers proposing alternative protective solutions, what could be a scalable way to evaluate their effectiveness?\n\n+ Q2: Is it technically possible to protect artists from mimicry? The conclusions drawn from this work appear to be somewhat pessimistic. Given that generative AI has a strong capability to fit any content, it seems that the potential for mimicry might be inevitable. What could be potential solutions to mitigate this issue?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ S1: The motivation is clear and compelling, effectively serving the paper's purpose: to caution researchers about the limitations of using adversarial perturbations for protection against art mimicry.\n\n+ S2: The paper is well-organized and easy to follow. The experiments conducted are solid.\n\n+ S3: The conclusions drawn from this work are potentially significant for the community and could reshape the landscape of this research field. Researchers are encouraged to reconsider the current paradigms of artwork / copyright data protection in light of their practical effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the practicality of \"using adversarial perturbations to protect art works from mimicry\". The paper argues that existing research works which use adversarial noise for art copyright protection -- although published in top ML/Security conferences -- do not robustly achieve their claimed goals. Simple technical means are enough to bypass these existing protections. The work argues that researchers and practitioners should rethink the solution to the art mimicry problem and develop alternative protections."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "+ Typos\n 1. Line 219, fare -> fail?\n 2. Figures 11, 13, 14, 15 seem to have the wrong label - now they are all labelled as \"gaussian noising\""
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We show that adversarial perturbations are not a reliable strategy to protect artists' images from being used to train generative models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024adversarial,\ntitle={Adversarial Perturbations Cannot Reliably Protect Artists From Generative {AI}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yfW1x7uBS5},\nnote={under review}\n}"
},
"abstract": {
"value": "Artists are increasingly concerned about advancements in image generation models that can closely replicate their unique artistic styles.\nIn response, several protection tools against style mimicry have been developed that incorporate small adversarial perturbations into artworks published online. In this work, we evaluate the effectiveness of popular protections---with millions of downloads---and show they only provide a false sense of security. We find that low-effort and \"off-the-shelf\" techniques, such as image upscaling, are sufficient to create robust mimicry methods that significantly degrade existing protections. Through a user study, we demonstrate that **all existing protections can be easily bypassed**, leaving artists vulnerable to style mimicry. We caution that tools based on adversarial perturbations cannot reliably protect artists from the misuse of generative AI, and urge the development of alternative protective solutions."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"security",
"adversarial",
"style mimicry",
"generative ai"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e93f7276629aae4a7b12fb8eec882fac1b217397.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yfZJdCijo6 | Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures | main | Active | maximum coverage;turnstile streams;sketching | optimization | 5;5;5;6 | 3;3;4;2 | 2;2;2;3 | 3;2;2;3 | 1;2;1;3 | 5.25 | 3 | 2.25 | 2.5 | 1.75 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "What is the actual dependency on k?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The algorithms presented in the paper are interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper the authors consider the problem of choosing at most k subsets from a stream such that the number of distinct items covered by the subsets is maximized. This is an interesting problem with applications in other areas - as demonstrated in the paper. The authors give an O~(d/\\epsilon^2) algorithm where d is the number of sets in the stream and \\epsilon is an approximation parameter."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "It is clear that the most important parameter is k. And when analyzing the complexity of the algorithms, the authors have avoided discussing the dependency of the space complexity on k. This makes it very hard to properly judge the true contribution of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Is the model a strict-turnstile model?\n2. Line 200: L_1 sketches are trivial. Do you mean L_0 sketches?\n3. The description of the algorithm is confusing and not precise. For example\n a. Line 9. All rows are concatenated to obtain v. However, each entry in the matrix is 0-1. So $v$ is a binary vector? I assume $v$ contains row numbers/elements of the universe? (For example, if element $i$ is in sets 3, 4, 8, then the vector $v$ contains the number $i$ at 3 positions.)\n b. When you are keeping an L_0 sample of $v$, what do they contain? It seems to me that they contain a sample of rows (excluding all-zero rows, and hashed into the same bucket) from $A'_m$? Is this correct?\n c. Line 23: I do not understand what it means to \"if $r$ has less then ..... edges among $L_0$ samplers\". Clarifying the above question will help understand this line.\n\n4. Claim 3.1: Consider the instance where $m = 1$. $A'_1$ contains approximately half the rows. Shouldn't this need $n/2$ memory? What am I missing?\n5. Line 276-277: $k \\log d/\\epsilon^2$ is a fixed number. How can this be OPT?\n6. Claim 3.2: McGregor & Vu's proof relies on the (set) insertion-only model? Is it easy to see that it translates into a turnstile model?\n7. Line 294: What are $c_1, c_2, \\cdots$? They are not defined earlier.\n8. As defined, a linear sketch is a matrix drawn from a family of matrices. The algorithm is implicitly defining a family of matrices. Can you define these matrices more explicitly? For the sketch to be linear, the $L_0$ sampler needs to be linear. This should be clarified.\n\nI will revise my score after the discussion period."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This is the first algorithm for the maximum coverage problem in the turnstile model."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies the maximum coverage problem in a streaming setting: Given $d$ sets over an universe $[n]$ and an integer $k$. Find $k$ sets whose union is maximized. The input (represented as $n \\times d$ matrix) arrives as a stream. Earlier works studied this problem in the insertion-only streaming model. This work studies the problem in the turnstile model, where deletions are allowed. The main contribution is the design of a sketch-based algorithm that uses $\\tilde{O}(d/\\epsilon^3)$ space."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing could be improved to enhance the readability of the paper. I am not able to completely understand the proposed algorithm and verify the claims. Please see the Questions for details.\n2. There is a large body of work on streaming submodular maximization. A discussion on the relationships of those works to the current work is missing. For example, \n [1]. https://proceedings.neurips.cc/paper_files/paper/2020/file/6fbd841e2e4b2938351a4f9b68f12e6b-Paper.pdf\n [2]. https://dl.acm.org/doi/10.1145/3519935.3519951\n [3]. https://proceedings.neurips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-Paper.pdf\n3. Experimental results are not evaluated on turnstile streams."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tLine 209, $|x_i| \\ge \\epsilon^2 || x ||_p$. What is the value of $p$, or does this apply to an arbitrary value of $p$?\n2.\tLine 222, $\\epsilon$ is missing in the $\\tilde{O}$ notation, whereas in line 224, the $\\epsilon$ is not omitted in the $\\tilde{O}$ notation.\n3.\tLine 226, $\\alpha - \\epsilon$ --> $1 - 1/e - \\epsilon$."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper constructs a linear sketch that supports input updates for the maximum $k$ coverage problem.\n2. Experimental evaluations demonstrate a significant speedup compared to prior work.\n3. Overall, I find the paper to be well-written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of constructing a linear sketch for the maximum $k$ coverage problem. Given a set of $n$ items and $d$ subsets, the objective of the maximum $k$ coverage problem is to select $k$ subsets that maximize the number of items covered. The problem can be represented using a matrix $A \\in \\{0, 1\\}^{n \\times d}$, and the goal of the linear sketch is to find a matrix $S$ such that $SA$ is significantly smaller than $A$ while still enabling an approximate solution to the original $k$ coverage problem using $SA$. \nSince the sketch is linear, it naturally extends to the turnstile model, where the entries of $A$ can be updated over time. \n\nThe paper also demonstrates the application of this sketching technique to the problem of fingerprinting for risk management, with empirical studies indicating substantial speed improvements over previous methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tSome explanation is missing for the algorithm. E.g., the paper claims that algorithm 1 construct a linear sketch. Does this imply that the $L_0$ sampler used in Algorithm 1 is also a linear sketch? The same question applies to the $L_1$ sketch. It would be helpful to explicitly clarify whether these sketches are linear, and if they are, to provide brief explanations or references that detail how they function as linear sketches. This additional context would aid readers in understanding the overall linearity of Algorithm 1."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Some questions are mentioned in the other parts of the review."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Extending the known results for the turnstile model is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper provides streaming algorithms for maximum coverage and fingerprinting for risk measurement problems. The streaming algorithms are in the turnstile model (input elements can be inserted or deleted). Previous results on streaming algorithms were known only for the insertion-only model. Let us discuss the main results for both problems:\n\nMax-coverage:\nIn the maximum coverage problem, we are given subsets S1, ..., Sd of a universal set U (of size n) and an integer k, and the task is to find the k subsets that together cover the largest size subset of U. The problem is NP-hard. There is a poly-time (1-1/e)-approximation algorithm that is known to be tight. The input for this problem can be seen as a 0/1 matrix A of size nxd, where A(i, j)=1 iff i is in the subset S_j. In a previous work, McGregor and Vu (2018) gave a (1-1/e-\\eps)-approximation streaming algorithm in the insertion-only set-arrival model using O(d/\\eps^2) space. In the set-arrival model, the entire column of the matrix A is seen in one step. Bateni et al. (2017) gave a (1-1/e-\\eps)-approximation algorithm in the insertion-only edge-arrival model using O(d/\\eps^2) space. In the edge-arrival insertion-only model, a single matrix entry gets updated from 0 to 1. This paper explores the edge-arrival turnstile model, where in every step, one matrix entry may get updated from 0 to 1 or from 1 to 0.\n\nFingerprinting for Risk Management:\nIn targeted fingerprinting, the input is an n×d matrix A with n users and d features. The goal is to identify at most k features {f1,f2,...,fk} such that the number of users who share identical values at positions {f1,f2,...,fk} is minimized."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is significant scope for improving the write-up. There is a lack of clarity in many of the statements that leaves the reader confused: \n- What is F_k in the abstract?\n- The introduction assumes the knowledge about the definition of a sketch. Writing 1-2 sentences defining a sketch before using it in the discussion would be good.\n- The abstract states the space usage to be O(d/\\eps^2), but the main theorem (Theorem 1) gives the space-bound as O(d/\\eps^3).\n- Remark 1 is unclear and confusing. It starts talking about 'sampling rates', l_0-sampler, etc. without discussing any random process or algorithm. I had no option but to move on without understanding Remark 1.\n- Lines (141-143): It is unclear what the estimation problem is. Are x_i values given in the streaming setting, or is it (i, \\pm 1)? Unless this is made clear, I am not sure how to interpret Theorem 3.\n- Understanding the sketch algorithm (Algorithm-1) is extremely challenging given that the format of the sketch is not defined. Is H_{\\leq d} a matrix or a subset of (element, subset) pairs? How is this sketch updated in the stream on an insertion and deletion? This does not come out clearly from the pseudocode. In line 14 of Algorithm 1, it is said that \"sketches and samplers handle updates.\" Which sketches are these?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024maximum,\ntitle={Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yfZJdCijo6},\nnote={under review}\n}"
},
"abstract": {
"value": "In the maximum coverage problem we aim to choose at most $k$ subsets such that the number of distinct items covered by the subsets is maximized. The input can be formalized by an $n \\times d$ matrix $A$ where there are $n$ items in the universe and $d$ input subsets. $A_{ij}$ is nonzero if item $i$ is in subset $j$ and is $0$ otherwise. To our knowledge, we are the first to create a linear sketch to solve maximum coverage which can lead to large runtime improvements and allow for implementation in distributed and streaming environments. We specifically focus on the application to turnstile streams which allows deletions. Here, the updates are of the form $(i,j,\\pm 1)$ which performs $A_{ij} = A_{ij} \\pm 1$. Previous work mainly considers the more restrictive set-arrival model where each update reveals an entire column of $A$ or the insertion-only model which does not allow deletions. We design an algorithm with an $\\tilde{O}(d/\\epsilon^3)$ space bound, which is nearly optimal for constant $k$. We then turn to fingerprinting for risk measurement where the aim is to monitor which $k$ columns of an input $n \\times d$ dataset pose the highest re-identification risk. Our maximum coverage sketch directly enables a solution of targeted fingerprinting for risk measurement. Furthermore, we give a result of independent interest: a sketch related to the complement of $F_k$ for $k \\geq 2$. We use this sketch to create a streaming algorithm for general fingerprinting for risk management. Empirical evaluation confirms the practicality of our fingerprinting algorithms and shows a speedup of up to $210$x over prior work. We also demonstrate the use of our general fingerprinting algorithm as a dimensionality reduction technique, facilitating enhanced feature selection efficiency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"maximum coverage",
"turnstile streams",
"sketching"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e548ead2a1696e9718b37570417c7c26c2dc5269.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/4880ef4eb1afc05d2fdf3b159e3caaaf6484831b.zip"
},
"title": {
"value": "Maximum Coverage in Turnstile Streams with Applications to Fingerprinting Measures"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yfkvUJEY6i | Learning Disease Progression Models That Capture Health Disparities | main | Active | fairness;equity;bias;health disparities;disease progression;bayesian model | alignment, fairness, safety, privacy, and societal considerations | 3;3;3;8 | 4;4;3;3 | 1;3;1;3 | 2;3;2;3 | 3;3;2;3 | 4.25 | 3.5 | 2 | 2.5 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Regarding function $f$ in line 135, is it a stationary function over the progression of severity -- that is, is it the same for every $Z_t$? If so, can you provide a justification for this assumption?\n2. There are hundreds, if not more, types of disparities among patients. Why do you believe that the three disparities used in your paper are the most significant or important?\n3. In the methodology section, the author mentions the need to pin a group $a_0$ first. How did you choose this particular group, and does selecting a different $a_0$ affect the performance of your model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "This paper studies the important topic of predicting disease progression by capturing the disparities between patients."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a disease progression model that uses observed symptoms to model the progression of a patient's latent severity. Compared with previous research, it accounts for three types of health disparities: initial severity, disease progression rate, and visit frequency. The proposed method is identifiable and shows good performance on a private dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method appears to be a variant of a hidden Markov model (HMM). Instead of using transition probabilities in HMM, it employs simple functions to describe transitions between states and outcomes. This simplification might limit the model's ability to capture the complex dynamics of disease progression.\n2. The directed acyclic graph, the selection of functions between observed and hidden variables, and the specific types of disparities incorporated in Section 3 seem overly simplistic and heuristic. The paper lacks detailed reasoning or motivation for these design choices. Providing justifications or empirical evidence supporting these decisions would enhance the credibility of the model.\n3. The paper adopts a linear representation solely because \"it provides an interpretable characterization of the trajectory.\" However, real-world disease progression often involves intricate, nonlinear relationships between variables. Relying solely on a linear model may lead to suboptimal performance and may not capture the true underlying patterns, potentially undermining the trustworthiness of the explanations. Exploring nonlinear models could offer better performance and more reliable interpretations.\n4. The empirical results are based on a private dataset with a relatively small number of subjects (n=2,942) and only four features. This raises concerns about the model's generalizability to other datasets. I recommend validating the model on public EHR datasets such as MIMIC or UK Biobank to assess its broader applicability. Furthermore, the comparisons are limited to simple baselines like linear regression, quadratic regression, principal component analysis, and factor analysis. Evaluating the model against state-of-the-art methods, including neural networks, RNNs, and transformers, would provide a more comprehensive assessment.\n5. Insufficient Evidence of Superior Performance:\n - The model's explainability is only qualitatively assessed using medical knowledge. Incorporating quantitative evaluations or user studies could strengthen the claims about interpretability.\n - The model does not show (significant) improvements over the baselines in reconstruction and predictive tasks. Additionally, the absence of error bars or statistical significance tests makes it challenging to determine if the differences are meaningful. If the authors can include the code, my confidence in the results will be higher."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- can the authors add stronger baselines, both from classical machine learning and from the literature of disease progression with health disparities? With a better description of the feature engineering and alternative feature engineering strategies. As it is, the paper shows measurable health disparities of several types, but it is not clear that this is really important for the overall disease understanding\n- can the authors extend the ablation study to the real dataset? or to datasets simulated with alternative data generation processes?\n- can you describe more clearly the metrics used? there are several variables in Xt; how are the RMSE and MAPE aggregated over all of them?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper has several interesting ideas\n- the addressed problem is very important\n- the model is well explained, and the theoretical analysis seems strong\n- the authors provide an extensive empirical analysis, with interesting insights"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors propose a Bayesian disease progression model that explicitly accounts for 3 types of disparities concerning health. The model contains several subgroup-specific parameters to account for inequalities in health. The authors provide a strong theoretical analysis of the model, as well as analyses of simulated data and a real application for heart failure patients."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "However, the paper suffers from several flaws. I am really willing to increase my grade if those points are addressed, but as it is, the contribution of the model compared to existing strategies, in terms of performance, is really unclear.\n- the baselines seem quite weak. The authors report that several indicators are important, like the visit frequency. I am not sure I understand from the manuscript which features the baselines include; in particular, do they include demographic information? I am also not sure that PCA or FA are the best tools for feature engineering. In particular, all those approaches are linear, so they cannot model interactions between demographics and other features. Maybe considering tree-based methods would be relevant here, with an appropriate feature engineering process. I know that prediction is not the end goal of the model, but this constitutes the only \"measurable\" performance indicator.\n- the ablation studies are interesting, but they should also be conducted on the real-data application, to assess the variation in predictive performance, because it is not very convincing that data simulated with a model would be less accurately represented by a different model.\n- additional baselines to consider: both on simulated and real data, it would be interesting to compare the performances with models that account for demographic characteristics differently (with one type of disparity, for instance)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- For a continuous A, does there have to exist a reference value a_0 for which $\\mu(a_0) = 0, \\sigma(a_0) = 1$?\n- Is the second disparity mentioned a disparity because there might be worse clinical care for each ancestry group? Or is this supposed to capture biological differences? Wouldn't the point of any disease progression model with ancestry as a covariate be to infer disparities in disease progression rates?\n- What happens if some of the disparities are unmeasured? How well can the model infer the correct disease progression, e.g., in simulations where some of the disparity variables are not shown to the model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Alleviating health disparities and fairness w.r.t. clinical algorithms is an important aspect of machine learning for health and the impact of disparities on disease progression modeling is important\n- Interpretability and estimating effects of disparities and covariates on disease progression is clinically meaningful"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a Bayesian hierarchical, linear model of disease progression that captures well known disparities in an interpretable way, while maintaining identifiability. The authors show on a simulated data set that the model can correctly identify the relevant parameters and that accounting for disparities is important to correctly estimate progression. They then demonstrate the utility on a real world data set, showing predictive ability, reconstruction compared to factor analysis and PCA and consistency with medical knowledge."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- While the model is interesting from a clinical perspective, I am not sure this is the right venue for this publication due to limited technical novelty\n- The disease progression trajectory is very limited due to the linearity-in-time assumption: a patient therefore cannot experience both worsening and improvement on their disease trajectory\n- The proofs show that not taking into account disparities will bias the result; however, other (non-linear) disease progression models can take \"baseline\" covariates like ancestry, effectively conditioning the probability of the latent progression $z$ on $A$ as well. Comparisons to models like these are necessary to underline the claims.\n- The authors need to compare the disease progression modeling with other state-of-the-art methods of inferring latent disease trajectories. It is not clear how well more realistic disease trajectories are captured and how the model compares to SOTA models on this task.\n- For the synthetic data it would be helpful to also use data not sampled from the same architecture as the model under consideration, but more complex disease progressions, to examine how well the model can infer these despite the relatively simple assumptions"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well written, easy to read, and tackles an important problem: improving modelling when disparities are present in the data. The paper presents theoretical justification and thoroughly evaluates the proposed methodology on both synthetic and real-world data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a Bayesian modelling strategy to model disease evolution while accounting for three sources of bias present in medical data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The model makes assumptions about how disparities are expressed while remaining identifiable. The underlying process must satisfy these assumptions. It would be beneficial to discuss further how realistic and/or common these assumptions are. An analysis of a misspecified model, when the underlying generating process does not meet these assumptions, would be valuable."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an interpretable Bayesian model that captures and accounts for three types of health disparities, and we prove that failing to account for disparities leads to biased estimates of disease severity."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Disease Progression Models That Capture Health Disparities},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yfkvUJEY6i},\nnote={under review}\n}"
},
"abstract": {
"value": "Disease progression models are widely used to inform the diagnosis and treatment of many progressive diseases. However, a significant limitation of existing models is that they do not account for health disparities that can bias the observed data. To address this, we develop an interpretable Bayesian disease progression model that captures three key health disparities: certain patient populations may (1) start receiving care only when their disease is more severe, (2) experience faster disease progression even while receiving care, or (3) receive follow-up care less frequently conditional on disease severity. We show theoretically and empirically that failing to account for disparities produces biased estimates of severity (underestimating severity for disadvantaged groups, for example). On a dataset of heart failure patients, we show that our model can identify groups that face each type of health disparity, and that accounting for these disparities meaningfully shifts which patients are considered high-risk."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"fairness",
"equity",
"bias",
"health disparities",
"disease progression",
"bayesian model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/484cd25d66e920844aff5a395577a91717b88212.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e8b7d2c86bcbf33ff6a30d04dbd994e456fa6cb2.pdf"
},
"title": {
"value": "Learning Disease Progression Models That Capture Health Disparities"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ygtmPu0xZy | Scalable Exploration via Ensemble++ | main | Active | Bandit;Scalable Exploration;Function Approximation | reinforcement learning | 5;5;5;5 | 3;4;4;3 | 3;3;3;2 | 2;3;2;3 | 2;2;3;3 | 5 | 3.5 | 2.75 | 2.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "(1) The authors may include additional analysis or experiments on neural network models to demonstrate how Ensemble++ addresses the gradient coupling issue in that setting."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "(1) The paper proposes the Ensemble++ architecture, which achieves scalability with O(d³log T) per-step computation. This closes a long-standing gap in scalable exploration theory.\n\n(2) The paper empirically validates the scalability and efficiency of Ensemble++ through experiments in bandit tasks, including language-input contextual bandits using a GPT backbone."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Scalable exploration in sequential decision-making is challenging, especially in high-dimensional environments. Ensemble sampling, an approximation of Thompson sampling, is widely used but can suffer from performance degradation due to ensemble coupling in shared-layer networks. The Ensemble++ architecture is proposed to overcome this limitation by introducing decoupled optimization and lifted index sampling. Empirical results show that Ensemble++ outperforms existing methods in regret minimization across various tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I appreciate the motivation and approach presented. However, I find that the methods, analysis, and experiments are inconsistent. The motivation of this method is primarily based on ensemble neural networks, which suffer from gradient coupling issues. Yet, the analysis and experiments focus on linear contextual bandits, where gradient descent is not commonly used. Additionally, an important related work, \"neural contextual bandits [1,2,3],\" has been overlooked. I believe that the analysis from neural contextual bandits could be adapted to this method.\n\n\n[1] Neural Contextual Bandits with UCB-based Exploration\n[2] Ee-net: Exploitation-exploration neural networks in contextual bandits.\n[3] Federated Neural Bandits"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* In the abstract (and elsewhere in the paper), how is ensemble sampling a computationally efficient approximation of Thompson sampling? It seems that it is in specific settings, e.g. the Gaussian setting in Lemma 3 in [1], and the linear setting in this paper where $P_\\xi$ is zero-mean and isotropic. If I am understanding correctly, then this limitation in the connection between ensembles and posteriors needs to be specified. \n* In line 195-196, it is mentioned that there are $M$ ensemble components, but $A$ is simply a matrix of size $d\\times M$; are these $M$ ensemble components separate in any way? \n* How is the variance of the $Z_{s,m}$ perturbations (139-140) chosen? Are these heuristics, or is there a principled reason for their existence? \n* The abstract / intro mention computational complexity but I don't see it derived anywhere. Is this supposed to be in the paper? \n\n[1] Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. Advances in Neural Information Processing Systems, 31, 2018."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The motivation seems strong, the method is largely straightforward and well-motivated (see questions for exceptions), the paper seems to be overall well-written and well-organized, and the experiments seem promising."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "Scalable exploration is challenging in high-dimensional bandit settings. Ensembling is one (computationally expensive) way to try to approximate Thompson sampling; ensembling can be made computationally cheaper by sharing one feature extractor between ensemble members, but can underestimate uncertainty due to the ensemble coupling issue, where ensemble members are too similar. \n\nThe authors propose Ensemble++, which uses stop gradients and a high-dimensional ensemble index. Empirically, the authors find Ensemble++ outperforms existing methods in bandit settings. The authors analyze regret and computation for Ensemble++ in linear contextual bandit settings; Ensemble++ achieves the same regret bounds as exact TS, and has $\\tilde O(\\log T)$ per-step computation complexity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "* **Insufficient discussion of related work** While there is a mention and empirical comparison to epinet in the paper, I felt there was insufficient discussion of how the proposed method is methodologically different from epinets, as they also use index functions.\n\n* **Unclear references to posterior approximation** There are references made to things approximating a posterior, but it was not clear what kind of an approximation that is (e.g. the way MCMC sampling is a posterior approximation, vs VI is a posterior approximation, vs a closed-form posterior for a different but similar problem is an approximation). \n\n\n * **Proposition 1** Proposition 1 is described as a closed-form update; while an earlier sentence mentions posterior approximation, from the Appendix, Proposition 1 is deriving the closed-form minimizers of the loss (2) with respect to $A$ and $\\mu$. Is this supposed to also be a posterior update? If so, what are the assumptions on the Bayesian model? \n\n * **Lemma 1** This result assumes $\\Sigma_t$ is the true posterior variance, but it is not clear why that should be the posterior variance. It seems like there are some assumptions and/or definitions missing. \n\n * **Experiments** There are no experiments that compare Ensemble++ vs other methods to a ground truth posterior, despite claims about Ensemble++ better approximating the posterior. (The first setting in 5.1 does appear to compare with ground-truth TS, if I am understanding correctly, but this is the only example.)\n\n* **Sequential dependency** I still don't understand what exactly is the problem in regular ensembles and how it is mitigated in Ensemble++.\n\n* There are claims that the stop gradient prevents ensemble coupling; it would help if this could be empirically measured (beyond better bandit regret). \n\n* I think it could help for readability to have an algorithm box to summarize the method. \n\nAlso see questions for areas that were confusing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "More details should be provided on the stop-gradient operator in the network. Is it solely intended to block gradient flow? If so, how is the ensemble process carried out? I think the current explanation may not be sufficient for readers to fully understand this aspect."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) As demonstrated in the paper, Ensemble++ is computationally efficient, which is highly beneficial in practice. \n\n(2) Ensemble++ addresses the ensemble coupling issue by utilizing the stop-gradient operator."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an improved ensemble exploration method called Ensemble++, which incorporates decoupled optimization and lifted index sampling for more efficient exploration and uncertainty estimation. The architecture outperforms existing methods and is scalable in terms of computational cost."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In my understanding, the primary weakness of the paper lies in the theoretical analysis in Section 4. Specifically, the impact of the stop-gradient operator on regret performance is unclear, which raises concerns about whether the regret analysis fully supports the main idea. However, if I am missing something, please let me know, and I will reconsider my evaluation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "1. The two key innovations in the proposed model are noted as (1) a variance-aware discretization method that prevents the exponential growth in ensemble size and (2) a reduction to sequential random projection techniques. However, the current presentation of the paper does not clearly explain how these innovations are applied. It is not clear how the two proposed changes improve upon the shortcomings of the existing approaches. Additionally, the solution approach lacks clarity without the algorithm.\n\n a) Can you provide a high-level description of how the variance-aware discretization method works and how it prevents exponential growth in ensemble size?\n\n b) Explain the key steps in applying sequential random projection techniques to their problem.\n\n c) Include a pseudocode description of the Ensemble++ algorithm to clarify the solution approach.\n\n d) Explain how the proposed modifications address the shortcomings of existing approaches like Ensemble+.\n\n2. The total computational complexity is $O(d^3 \\log T)$. However, it is not clear from the main paper how this result was derived. Can you provide a proof sketch and key insights into how the specific approach resulted in this computational complexity and how it overcame the bottleneck in the existing approaches?\n\n3. The primary contribution is the computational advantage in high-dimensional settings. To validate the effectiveness of the proposed approach, experiments showcasing the variation of regret with respect to computational time would be beneficial. \n\n a) Can you include a plot showing regret vs. computation time for Ensemble++, Ensemble+, and the Langevin Monte Carlo approach?\n\n b) Provide a detailed comparison with the Langevin Monte Carlo approach in Xu et al., highlighting both performance and computational efficiency.\n\n c) Discuss any trade-offs between regret and computation time for each method."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper tackles an important issue: scalable exploration poses a significant challenge in sequential decision-making tasks, including reinforcement learning (RL) and contextual bandits, especially relevant in high-dimensional practical environments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper presents Ensemble++, an ensemble sampling method that enhances the computational efficiency of Thompson sampling. Recently, Ensemble+ was developed to tackle this challenge; however, it faces an issue with ensemble coupling that adversely affects its performance. To address this problem, Xu et al. later introduced Langevin Monte Carlo Thompson Sampling, which, unfortunately, incurs high computational costs. The aim of this paper is to improve the approach to ensemble coupling while minimizing computational expenses. The proposed approach, Ensemble++, addresses this by implementing decoupled optimization and lifted index sampling, enhancing exploration and uncertainty estimation. Theoretically, the paper shows that it achieves the same regret bounds as exact Thompson sampling in linear contextual bandits, with a per-step computation complexity of $\\tilde{O}(\\log T)$.\nEmpirical results on the performance of Ensemble++ are presented."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper lacks details in many instances, which hinders understanding the contributions of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024scalable,\ntitle={Scalable Exploration via Ensemble++},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ygtmPu0xZy},\nnote={under review}\n}"
},
"abstract": {
"value": "Scalable exploration is a persistent challenge in sequential decision-making, especially in high-dimensional environments with neural networks. Ensemble sampling, a computationally efficient approximation of Thompson sampling, is widely used but suffers from performance degradation in shared-layer ensemble networks due to ensemble coupling. To overcome this limitation, we propose the Ensemble++ architecture, which introduces decoupled optimization and lifted index sampling for efficient exploration and uncertainty estimation. \nEmpirical results show that Ensemble++ outperforms existing methods in regret minimization while maintaining bounded per-step computation costs across a variety of tasks, including nonlinear bandits and language-based contextual bandits using a GPT backbone. Theoretically, we prove that Ensemble++ achieves the same regret bounds as exact Thompson sampling in linear contextual bandits, with $\\tilde{O}(\\log T)$ per-step computation complexity. This provides the first rigorous analysis demonstrating ensemble sampling as a scalable and effective approximation to Thompson sampling, closing a key theoretical gap in exploration efficiency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Bandit",
"Scalable Exploration",
"Function Approximation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/01c64623f8a7ff1cacfa405bbb31c28c22dbcef9.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Scalable Exploration via Ensemble++"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yhKNCvYlCr | Transfering Knowledge into Efficient Tiny Models for Object Detection with Dual Prompt Distillation | main | Withdraw | knowledge distillation;object detection | unsupervised, self-supervised, semi-supervised, and supervised representation learning | Feng Zhao;Yukun Qi;Jiahao Chang;Lin Chen;Kun Li;Tianyou Song;Zehui Chen | ~Feng_Zhao6;~Yukun_Qi1;~Jiahao_Chang2;~Lin_Chen18;~Kun_Li13;~Tianyou_Song1;~Zehui_Chen1 | 3;3;3;6 | 4;5;4;4 | 2;3;2;2 | 1;3;2;2 | 3;3;2;2 | 3.75 | 4.25 | 2.25 | 2 | 2.5 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "+ This work attempts to explore efficient knowledge transfer from super large teachers to tiny students for object detection, which is interesting to the community.\n\n+ The method, which integrates visual prompts with the knowledge distillation framework to bridge the gap between teacher and student models, is somewhat novel.\n\n+ The experiments across diverse knowledge distillation settings show some promising results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper presents a knowledge distillation method to transfer knowledge from super large teacher object detectors to tiny student detectors. The proposed method incorporates three key distillation components: feature distillation, alongside external and internal prompt distillation mechanisms. The authors make use of the learnable prompts as bridges to mitigate the knowledge gap between teacher and student architectures by transferring the crucial information via the cross-attention modules. To validate their approach, experiments were conducted across various teacher-student detector pairs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "However, there are several weaknesses as follows:\n\n1. The presentation of the method section is not well-organized. The authors fail to provide a formulation of the total loss functions employed during distillation, and the optimization process of learnable components remains unclear. Moreover, the mathematical framework fails to establish clear connections between individual equations and their roles in the final losses. It would be better to show comprehensive formulations linking the learnable modules to their corresponding loss terms. \n\n2. Figure 2 is notably limited to comparing feature maps between only the largest teacher and smallest student models, while omitting features from models of intermediate scales. Thus, it is hard to understand the crucial relationship between model size differences and feature representation gaps.\n\n3. Tables 1, 2, 3 are missing essential metrics, such as the number of parameters and inference latency. The proposed KD method will introduce additional parameters and computational overhead during inference compared to other KD methods, which is potentially not fair. A thorough analysis of the additional computational burden and its impact on inference time should be discussed.\n\n4. The experiments are limited to CNN-based detectors, which raises questions about the method's generalizability to transformer-based detectors. \n\n5. Moreover, the \"tiny\" students used in this paper are relatively large compared to some truly lightweight detectors (e.g., Tiny-YOLO, SSDLite, EfficientDet, TinyDet, etc.), leaving the method's effectiveness on extremely compressed models unexplored."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- In line 241~242, the author states that **the gradient of KD signals is prone to disappear in shallow stages, making it difficult to optimise effectively**. it is not a consensus as for me. It's better to cite some previous works or do some experiments to prove this statement.\n- What does $M$ denote in formula (3)?\n- How model select the top-N pixels in $Init$?\n- The inputs and outputs of most formulas in this paper use the same symbol, which makes it harder to understand. It's better to use different symbols to distinguish output from input."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
        "value": "- This paper attempts to use prompt learning methods in object detection KD. The experiments are sufficient to demonstrate the effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper proposes a simple prompt-based object detection distillation framework, termed DualPromptKD, which aims to improve knowledge transfer efficiency from both teacher and student perspectives. The authors enable the student model to fully leverage proficient teacher knowledge by distilling teacher representations into compact external prompts. To address the limited learning ability of the student model, the authors introduce lightweight internal prompts tailored to bolster the feature imitation capability of the target model. Comprehensive experiments on the MS COCO dataset demonstrate the effectiveness of the proposed DualPromptKD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- Additional prompt learning modules and ConvLoRA bring extra parameters to the student, which goes against the model-compression purpose of KD."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "Overall, I think that investigating KD methods for lightweight object detection is meaningful. However, the current version cannot fully support the claims. I expect more experiments in the rebuttal phase, including:\n\n- experimental comparisons with the most recent works to support the claims and demonstrate the superiority of this work;\n\n- more experiments with efficient detection pipelines, which can strengthen the core contribution of this work;\n\n- more experiments with general backbones to show the wide applications, which is a bonus of this work.\n\nI will consider improving my rating if these concerns are addressed."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The topic is meaningful. The authors find the existing object detection KD methods are hard to apply on the light-weight setting.\n\n- Good performance. The authors provide extensive experimental results to show the effectiveness of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "In this work, the authors focus on designing an effective knowledge distillation (KD) method for object detection. Specifically, they first showed that simply applying existing KD methods on (lightweight) object detection is not feasible, and proposed a new method, named DualPromptKD, for this topic. The authors proposed two designs in their model, i.e. external prompt KD and internal prompt KD. Finally, they conducted extensive experiments on the COCO dataset, which demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "- The authors claim that they first apply existing KD methods [1,2,3,4] on the object detection task [5,6] but find limited improvement (e.g. line 14-17, line 50-53), which is the main motivation of this work. However, these baseline KD methods are somewhat outdated and are not the SOTAs (line 52), and more recent works should be investigated. Also, the experimental results should compare the most recent KD methods.\n\n- This work mainly focuses on KD for lightweight object detection. With this scope, I believe the latency of the detectors should be listed. Besides, the authors mainly conducted the experiments with light-weight backbones, which is okay for this paper, but I still expect two further investigations: 1. Can the proposed KD method also work for the general backbones? 2. More experiments with efficient object detectors (such as EfficientDet, YOLOv4/7 etc.) should be conducted.\n\n\n[1] Focal and global knowledge distillation for detectors, CVPR'22\n\n[2] Pkd: General distillation framework for object detectors via pearson correlation coefficient, NeurIPS'22\n\n[3] Knowledge distillation from a stronger teacher, arXiv'22\n\n[4] Masked distillation with receptive tokens, arXiv'22\n\n[5] Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection, NeurIPS'20\n\n[6] Ghostnet: More features from cheap operations, CVPR'20"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is clear.\n\n2. \"Distillation for tiny models\" should be a good research question to discuss."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces DualPromptKD, a knowledge distillation framework designed to transfer knowledge from large teacher models to tiny student models for object detection. DualPromptKD employs a dual-prompt distillation strategy, including external prompts that capture teacher model knowledge and internal prompts that enhance the feature extraction capabilities of student model. Experiments on the COCO benchmark demonstrate the effectiveness of DualPromptKD."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My main concern lies in the unfair comparison with existing methods. \n\nThis paper introduces prompts into the student model, which are additional parameters, while the methods compared in this paper do not add these extra parameters to the student model. Of course, the parameter number of prompt is small, as shown in Figure 4, usually about 20 M, but it is important to note that the GhostNet [1] network used in this paper also typically has only 20 M! This means that the student network parameters may have been doubled, so I think the current experimental comparison is very unfair.\n\n[1] Ghostnet: More features from cheap operations, CVPR 2020\n\n2. Lack of effective new insights.\n\nAs Lines 48-49 stated, the main difference between this paper's setting and the existing setting is \"much smaller and faster models.\" Therefore, the authors use the GhostNet network as the student network in their experiments. However, facing this new problem, this paper does not offer new insights. The insight in Lines 85-87 was already mentioned in [2]; the logic in Lines 89-90 is a bit strange - if distilling the output features on ResNet-50 does not constrain the consistency of shallow features, wouldn't the consistency of shallow features on small models be better constrained because there are fewer layers? The insight in Lines 91-95 lead to the design of prompts, but as previously stated, the effectiveness of prompts may lie in the introduction of additional parameters, rather than solving specific challenges when \"much smaller and faster models\" are used as student models. 
Therefore, I believe this paper lacks effective new insights, especially in-depth analysis of the specific challenges of the new scenario.\n\n[2] Pkd: General distillation framework for object detectors via pearson correlation coefficient, NeurIPS 2022\n\nDue to the unfairness of the experimental comparison and the lack of effective insights into the distillation task when using tiny models as student networks, I believe this paper does not meet the ICLR standard, and therefore I give a reject."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nzhao2024transfering,\ntitle={Transfering Knowledge into Efficient Tiny Models for Object Detection with Dual Prompt Distillation},\nauthor={Feng Zhao and Yukun Qi and Jiahao Chang and Lin Chen and Kun Li and Tianyou Song and Zehui Chen},\nyear={2024},\nurl={https://openreview.net/forum?id=yhKNCvYlCr}\n}"
},
"abstract": {
"value": "Knowledge Distillation (KD) has demonstrated significant benefits for learning compact models for object detection. Most current work focuses on general distillation settings, where student models are relatively large and learnable, then compete with the distillation performance. However, due to the model scale and inference speed, these models are seldom deployed in real-world applications. In this paper, we dive into a challenging but more applicable setting: how to distill rich teacher knowledge into tiny, faster models for object detection? We first show that simply applying previous KD strategies under such settings cannot achieve satisfying results, due to the extremely large model capacity gap between the teacher-student pairs. To this end, we propose a simple prompt-based object detection distillation framework, namely DualPromptKD, which aims to improve knowledge transfer efficiency from both teacher and student perspectives. Specifically, by distilling teacher representations into compact external prompts, we enable the student model to fully leverage proficient teacher knowledge even at inference time. In terms of the limited learning ability of the student model, we introduce lightweight internal prompts tailored to bolster the feature imitation capability for the target model. Extensive experimental results on the COCO benchmarks validate the effectiveness and generalization of our approach, including different image backbones and detector types. Notably, our DualPromptKD surpasses the previous best distillation strategies by more than 2.0 mAP under various experimental settings. The code will be available."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Feng_Zhao6",
"~Yukun_Qi1",
"~Jiahao_Chang2",
"~Lin_Chen18",
"~Kun_Li13",
"~Tianyou_Song1",
"~Zehui_Chen1"
]
},
"authors": {
"value": [
"Feng Zhao",
"Yukun Qi",
"Jiahao Chang",
"Lin Chen",
"Kun Li",
"Tianyou Song",
"Zehui Chen"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"knowledge distillation",
"object detection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "zhao|transfering_knowledge_into_efficient_tiny_models_for_object_detection_with_dual_prompt_distillation"
},
"pdf": {
"value": "/pdf/15330ba2aeadb5d8d03e265eb15c3138ce479de6.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Transfering Knowledge into Efficient Tiny Models for Object Detection with Dual Prompt Distillation"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
yheQRc5xWB | Effective and Efficient Time-Varying Counterfactual Prediction with State-Space Models | main | Active | Time Series; State-space Models; Treatment Effect Estimation | causal reasoning | 5;5;5;6;6 | 4;4;3;3;2 | 2;3;2;3;2 | 3;2;3;3;3 | 3;3;3;3;2 | 5.4 | 3.2 | 2.4 | 2.8 | 2.8 | -0.763763 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please list up and carefully describe any questions and suggestions for the authors. Think of the things where a response from the author can change your opinion, clarify a confusion or address a limitation. This is important for a productive rebuttal and discussion phase with the authors.\n\n\n\nBesides RMSE, it would be good to add other ablation study such as distribution analysis of the counterfactual prediction from utilizing the proposed method vs. baselines, which would provide more evidence to validate the the effectiveness of introducing the shared latent factor as illustrated in Figure 1\nIn table1 and table2, could the author elaborate on more details of the baseline THLTS(v)? Why author think this would be fair baseline to justify the rationality of learning shared part of latent factors compared to the proposed method\nIn Algorithm 1, what is the difference between forecast model pρ() and gρ()?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "A substantive assessment of the strengths of the paper, touching on each of the following dimensions: originality, quality, clarity, and significance. We encourage reviewers to be broad in their definitions of originality and significance. For example, originality may arise from a new definition or problem formulation, creative combinations of existing ideas, application to a new domain, or removing limitations from prior results. You can incorporate Markdown and Latex into your review. See https://openreview.net/faq.\n\nOriginality\nThe paper proposes a novel approach to capturing hidden heterogeneity in time series based counterfactual prediction, which is a significant domain problem in causal learning. The proposed Time-shared Heterogeneity Learning from Time Series method is a novel method that addresses this specific challenge by encoding the shared time-aware latent confounder and then utilizing them for counterfactual outcome forecasting.\n\nQuality\nThe paper provides a clear and well-structured presentation of the proposed method, including a detailed explanation of the shared latent confounder variable encoding process via VAE and how to adapt to time series data.\nThe experimental results basically demonstrate the effectiveness of the proposed method in improving the performance of mainstream models. \n\nClarity\nThe paper is well-written and easy to follow, with clear explanations of technical concepts and methods. The authors provide an informative context in each section that effectively organizes the story and summarizes the paper contributions.\n\nSignificance\nThe proposed THLTS method has the potential to improve the accuracy of counterfactual outcome in time-series data scenarios. The capture of hidden heterogeneity across time domains is a common challenge in many fields. The proposed method is flexible and can be easily inserted with arbitrary causal modeling framework, making it a valuable contribution to the field."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a Time-shared Heterogeneity Learning from Time Series (THLTS) method which infers the shared part of latent factor across time steps with a variational auto-encoders (VAE), the method could capture the hidden heterogeneity by recovering the hidden factors and incorporate it into the outcome prediction. This method can be a flexible component to be easily inserted into arbitrary counterfactual outcome forecast models. The authors demonstrate the effectiveness of THLTS on (semi-)synthetic data in capturing shared patterns by combining several existing counterfactual outcome forecast methods to improve their performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Lack of Novelty in Methodology\nThe proposed THLTS method is based on the use of variational encoders (VAE) to improve counterfactual prediction in time-series data, which has been explored in other works such as\n1. https://doi.org/10.1145/3637528.3671950, \n2. https://doi.org/10.48550/arXiv.2310.18615 . \nWhile the shared latent factor encoding part is new, the underlying methodology is not entirely novel. To strengthen the contribution, the authors could provide a more detailed comparison with existing methods and highlight the specific advantages of their approach.\n\n\nLimited Experimental Evaluation\nThe paper only presents experimental results on (semi) synthetic datasets, which may not accurately reflect real-world scenarios. To demonstrate the practical applicability of the proposed method, it would be beneficial to include experiments on (or the connection to) real-world datasets ."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "(1) New problem on TCP on state-space model\n\n(2) Design of novel de-correlation mechanism to reduce confounding bias."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper works with a time varying counterfactual prediction method using STATE-SPACE model. It introduces methods that de-correlate between current treatment and historical covariates. They claimed that their model is effective and lightweight. Finally, they performed experiments on several datasets to highlight the efficacy of their method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) My major concern is notation and presentation of the paper: The paper has too many overloading of notations-- for example, \"a\" or the actions are giving variable A_t but the system parameter is also A. This has been quite confusing to me for sometimes. \n\n(2) Re. experiments: I am not sure, results of Table 2 are statistically significant: I was looking for paired t test to see how well their method is effective with respect to baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How sensitive is the CDSP mechanism to the choice of decorrelation threshold?\n2. Could the authors provide more insight into the computational complexity trade-offs between CDSP and traditional balancing methods?\n3. How does the method perform on extremely long sequences (e.g., >1000 timesteps)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Technical Innovation:\n\n-Novel combination of Mamba architecture with TCP\n\n-Well-designed CDSP mechanism that addresses known limitations\n\n-Efficient implementation with linear time complexity\n\n2. Practical Value:\n\n-Better handling of long sequences\n\n-Improved computational efficiency\n\n-Real-world applicability demonstrated on MIMIC-III dataset\n\n\n3.Experimental Design:\n\n-Comprehensive ablation studies\n\n-Multiple evaluation scenarios\n\n-Reasonable baseline comparisons"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Mamba-CDSP, a novel approach for time-varying counterfactual prediction (TCP) using state-space models. The key innovation lies in combining the Mamba architecture (a recent advance in state-space modeling) with a new Covariate-based Decorrelation towards Selective Parameters (CDSP) mechanism. The method addresses two major challenges in TCP: computational efficiency and the over-balancing problem. The authors demonstrate superior performance over existing methods like Causal Transformer and G-Net on both synthetic and real-world datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Theoretical Analysis:\n\n-Limited theoretical justification for why CDSP works better than traditional balancing\n\n-Could benefit from more formal analysis of the bias-variance trade-off\n\n2.Empirical Validation:\n\n-Could benefit from more diverse real-world datasets\n\n-Limited discussion of failure cases\n\n-More detailed hyperparameter sensitivity analysis needed"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Forecasing counterfactual prediction is highly applicable in real-world scenarios.\n2. The time-shared heterogeneity based learning method is easy to implement with VAE.\n3. This paper first utilizes longitudinal method to find the latent factor of each sample, which is intuitive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper tackles the challenge of forecasting counterfactual outcomes in longitudinal settings. Previous methods using LSTM networks and transformers often neglect hidden heterogeneity caused by unobserved factors, which complicates predictions. The authors propose the Time-shared Heterogeneity Learning from Time Series method, which captures shared hidden factors using variational encoders. This approach enhances any counterfactual forecasting method and demonstrates improved performance in experiments with synthetic datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Proposition 4.1, it would be helpful for the authors to explain more about when the prediction model $g$ is Lipschitz with respect to $e$, as this is critical for ensuring the model's effectiveness in identifying the latent factor.\n2. Since the latent factor is not directly observed, how can you guarantee that the latent factor identified by your method is the one you intend to find? It would be beneficial to provide some analysis regarding the identifiability of your method.\n3. Why did you choose VAE to implement your method? Could other structures, such as deterministic models, serve as the backbone? If so, is it possible to test different models as backbones in the experimental section?\n4. The compared baselines are not state-of-the-art methods. It would be better to select more recent methods as baselines to demonstrate the effectiveness of your approach, such as [1].\n\n\n\n[1] Estimating Counterfactual Treatment Outcomes over Time through Adversarially Balanced Representations. ICLR 2020."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Is CPSD on line 316 a typo?\n\n2. Which dataset was used for Table 3?\n\n3. How sensitive is the method to the choice of decorrelation hyperparameters?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Novel application of SSMs (specifically Mamba) to TCP, showing promising results in both effectiveness and efficiency. The paper leverages state-space models for counterfactual prediction, achieving significant improvements in both prediction accuracy and computational speed compared to existing methods.\n\n2. Well-motivated decorrelation approach that addresses key limitations of existing balancing methods. The proposed CDSP mechanism offers a novel solution to the over-balancing problem in sequential settings, effectively balancing between confounding bias correction and preservation of important covariate information.\n\n3. Comprehensive empirical evaluation across multiple datasets and settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Mamba-CDSP, a novel approach for time-varying counterfactual prediction (TCP) based on state-space models (SSMs). The key contribution is adapting the Mamba architecture with a covariate-based decorrelation mechanism to handle sequential confounding bias while preserving covariate information. The authors demonstrate superior performance compared to existing methods on synthetic and real datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited theoretical analysis of why covariance decorrelation works better than traditional balancing approaches.\n\n2. While performance improvements are shown, deeper analysis of where/why the improvements come from would strengthen the paper. For instance, Table 2 shows substantial gains from CDSP on the MIMIC-III real-world dataset, but this is puzzling since we cannot observe counterfactuals in this data and thus confounding bias should have minimal impact on evaluation. The authors should explain why CDSP shows such dramatic improvements if the test metrics don't actually measure counterfactual prediction ability. This suggests the gains might come from other aspects of the method beyond bias correction, which deserves further investigation.\n\n3. A more thorough literature review on temporal counterfactual estimation would enhance the paper by incorporating recent works like,\n - Chen et al, A Multi-Task Gaussian Process Model for Inferring Time-Varying Treatment Effects in Panel Data\n - Wu et al, Counterfactual Generative Models for Time-Varying Treatment\n - Wang et al, A Dual-module Framework for Counterfactual Estimation over Time\n - Berrevoets et al, Disentangled counterfactual recurrent networks for treatment effect inference over time\n\n4. The paper lacks sufficient implementation details for reproducibility. While the model architecture is described, key details such as hyperparameters (hidden dimensions, number of layers), the decorrelation coefficient, and dropout rates are not specified. These details are crucial for reproducing the reported results.\n\nThe reference for domain adversarial learning on line 269 is incorrect, for example Lim 2018 did not use domain adversarial learning strategy"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents a novel method for counterfactual prediction over time series with state-space models."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024effective,\ntitle={Effective and Efficient Time-Varying Counterfactual Prediction with State-Space Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yheQRc5xWB},\nnote={under review}\n}"
},
"abstract": {
"value": "Time-varying counterfactual prediction (TCP) from observational data supports the answer of when and how to assign multiple sequential treatments, yielding importance in various applications. Despite the progress achieved by recent advances, e.g., LSTM or Transformer based causal approaches, their capability of capturing interactions in long sequences remains to be improved in both prediction performance and running efficiency. In parallel with the development of TCP, the success of the state-space models (SSMs) has achieved remarkable progress toward long-sequence modeling with saved running time. Consequently, studying how Mamba simultaneously benefits the effectiveness and efficiency of TCP becomes a compelling research direction. In this paper, we propose to exploit advantages of the SSMs to tackle the TCP task, by introducing a counterfactual Mamba model with Covariate-based Decorrelation towards Selective Parameters (Mamba-CDSP). Motivated by the over-balancing problem in TCP of the direct covariate balancing methods, we propose to de-correlate between the current treatment and the representation of historical covariates, treatments, and outcomes, which can mitigate the confounding bias while preserve more covariate information. In addition, we show that the overall de-correlation in TCP is equivalent to regularizing the selective parameters of Mamba over each time step, which leads our approach to be effective and lightweight. We conducted extensive experiments on both synthetic and real-world datasets, demonstrating that Mamba-CDSP not only outperforms baselines by a large margin, but also exhibits prominent running efficiency."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time Series; State-space Models; Treatment Effect Estimation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/64411c7773960df01aedb09aa6a39b5a41b838f1.pdf"
},
"presentation": null,
"primary_area": {
"value": "causal reasoning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Effective and Efficient Time-Varying Counterfactual Prediction with State-Space Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yhmVrA8W0v | The Convergence of Second-Order Sampling Methods for Diffusion Models | main | Active | diffusion models;reserve SDE | generative models | 3;3;5;6;6 | 4;5;4;4;3 | 3;2;2;3;4 | 2;1;2;3;3 | 2;3;2;2;2 | 4.6 | 4 | 2.8 | 2.2 | 2.2 | -0.699379 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- The paper presents the KL convergence results about the VP- and VE-types of diffusions models separately. Could you briefly explain how different types of forward processes affect your proof?\n- I find panel (b) of Figure 1 quite helpful as an illustration between theory and practice, but the paper also presents convergence results with respect to RK-2. What would the theoretical bounds of RK-2 look like on that graph?\n- Are equations 11 and 13 identical? If so, why?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper presents convergence results of second-order SDE solvers for diffusion models, which is very relevant to current research in diffusion modeling given the empirical usefulness of SDE-based simulation of the backward process, and the open question of suitable discretization techniques in this context. The paper gives a theoretical foundation on the application of high-order SDE solvers in diffusion modeling, which motivates further research on suitable solvers for diffusion generative modeling. \n\n- Within the scope of the paper, it presents a compelling argument in favor of SDE-DPM-2 over RK-2 or first-order discretization methods for the practical simulation of samples. I find the insight of \"not second-order solvers are equal\" overall interesting and helpful. \n- The paper also illustrates that the convergence bounds empirically with Gaussian mixture examples. \n- The theoretical results are quite general as they apply to both VP and VE diffusion models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Diffusion models (DMs) learn the score functions associated with a diffusion process, and use the learned scores to simulate an SDE corresponding to the backward process. While samples can be simulated by either an ODE or an SDE, SDE samplers are practically superior in terms of sample diversity and quality. This paper sets out to investigate second-order SDE solvers for the backward SDE, and concludes that second-order solver is preferable to the standard first-order discretization methods in terms of convergence with respect to the Kullback-Leibler divergence. \n\nThe paper mainly investigates two (approximate) second-order SDE solvers, SDE-DPM-2 and Runge-Kutta 2 methods, and compares the convergence results to first order SDE solvers such as EI. The paper presents theorems that suggest that SDE-DPM 2 is more preferable to RK-2 from the perspective of KL-divergence, mainly due to the added discretization error. \n\nWhile the paper mainly focuses on the VP-DMs based on the Ornstein-Uhlenbeck forward process, the main result also applies to the variance-exploding forward process as well, shedding light on the applicability of solvers on other forward processes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I have an overall positive outlook on the paper, I think the paper's overall organization seems confusing: the paper presents the main theorems and some empirical results, then jumps back to a sketch of the proof and how the theory works for the VE-type diffusion models. In my opinion, presenting the paper as theorems on SDE-DPM-2 and RK-2, proof sketch, discussion on VE and then experiments seems like more logical progression of the narrative. \n\nThere are a number minor issues in terms of the paper's presentation. Here is a list I have found: \n- Many discretization methods mentioned in the paper are known only as acronyms without mentioning what the acronyms are. \n- The mentions of $x_k$ in assumption should be $x_{t_k}$ in equations such as the one in Assumption 2, eqs. 11 and 13.\n- The use of partial derivatives w.r.t. $x_{t_k}$ seems confusing. I assume it means the Jacobian matrix. Perhaps the authors can explicitly denote a notation to describe the Jacobian matrix for clarity. \n- While it is useful to see that second-order SDE solvers makes improvements empirically, Table 1 presents quite little added information other than a somewhat vague empirical confirmation that SDE-DPM-2 does have empirical value, which has already been demonstrated by Lu et al. (2022b)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- I think the authors might have a mistake in assumption 4, because I don't see them using the operator $\\nabla^3$ anywhere. \n- Can the authors add error bars to the table of the FID scores? At present, I don't feel that these illustrate their point particularly well."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors address a question of significant interest in the diffusion model literature, namely which discretization schemes are most sample efficient at inference time.\n- The paper gives some additional theoretical support to the observation that higher order schemes can be important for sample complexity and differentiates between subtleties, such as the additional approximation in the linear term of the SDE. \n- Experiments show a modest improvement of CIFAR-10 FID with small numbers of sampling steps and improved convergence with discretization fineness using SDE-DPM-2 over RK-2."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the convergence properties of score-based diffusion models with a second-order discretization scheme called SDE-DPM-2, which improves the complexity over a first order exponential intergation scheme. Interestingly the result for SDE-DPM-2 is also stronger than the more widely used RK-2 scheme."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- There is no comparison of the computational cost of RK-2 vs DPM-SDE-2 vs EI\n- I felt the authors should have more clearly delineated their contributions relative to Chen 2023, which they follow closely.\n- A number of the assumptions are quite strong. For example, the expectation of the second time derivative of the score is assumed to have a magnitude upper bounded by some time-independent constant. In practice, it is often the case that the score changes in quite a singular fashion near $t=0$."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No question."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper analyzes the convergence of the higher-order discretization method (SDE-DPM-2). Under some smoothness condition as well as score estimation error and high oder estimation error, a sampling complexity at the order of O(1/epsilon) is established to ensure the KL divergence smaller than epsilon^2. In comparison, the complexity of second-order Runge–Kutta method (RK-2) scales as O(1/epsilon^2)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper analyzes the convergence of the higher-order discretization method (SDE-DPM-2). Under some smoothness condition as well as score estimation error and high oder estimation error, a sampling complexity at the order of O(1/epsilon) is established to ensure the KL divergence smaller than epsilon^2. In comparison, the complexity of second-order Runge–Kutta method (RK-2) scales as O(1/epsilon^2)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the following paper is posted after your submission, there maybe exist some conflict messages between your paper and this work: you said that RK-2 is less efficient, while this work claimed that RK-2 is provably fast.\nWu, Y., Chen, Y., and Wei, Y. Stochastic runge-kutta methods: Provable acceleration of diffusion models.\n\nLi et al. (2024) also provided a sampling complexity of O(1/epsilon) under KL divergence and a better complexity of O(1/sqrt(epsilon)) for TV, which may reduce the theoretical contribution of this work and was not discussed here. \n\nThere exists some other convergence analysis for high-order sampling of diffusion models. It seems that their rates are better than yours, but such comparisons are missed here.\nHuang, D. Z., Huang, J., and Lin, Z. Convergence analysis of probability flow ODE for score-based generative models.\nHuang, X., Zou, D., Dong, H., Zhang, Y., Ma, Y.-A., and Zhang, T. Reverse transition kernel: A flexible framework to accelerate diffusion inference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Assumptions 3 and 4 are both bounds for the third-order derivative of $\\log p_t$. However, I firmly believe that temporal derivatives can be represented as spatial derivatives, thereby revealing fundamental properties of the data distribution, as shown in Equation (22) in [2]. Could you please clarify why Assumptions 3 and 4 are considered separate?\n- If Assumption 2 is replaced with the corresponding assumption from [1], is the result for SDE-DPM-2 still valid? Is there any method to ensure the validity of this assumption during the training process?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper studies the SDE-DPM-2 scheme for the inference of diffusion models and improves the sample complexity from $O(1/\\epsilon^2)$ to $O(1/\\epsilon)$.\n- The mathematical proof looks sound to me.\n- Several experiments are conducted to validate the theoretical findings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the convergence of the second-order discretization method (SDE-DPM-2). Given an $O(\\epsilon^2)$ $L^2$-accurate score estimation, the paper demonstrates that the sampling complexity of SDE-DPM-2 is $O(1/\\epsilon)$ instead of that of the exponential integrator scheme, which is $O(1/\\epsilon^2)$. Furthermore, the paper extends the analysis to the Runge-Kutta-2 (RK-2) method, proving that SDE-DPM-2 exhibits superior efficiency compared to RK-2."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The assumptions appear overly strong and artificial to me. Unlike the conventional assumption that the neural network score function $s(t, \\cdot)$ is approximately $\\epsilon^2$ close to the true score function $\\nabla \\log p_t$, Assumption 2 is, to my understanding, contingent upon the loss function employed in training diffusion models. Consequently, it is not feasible to guarantee or even evaluate this assumption for diffusion models.\n- I recommend redrawing Figure 1 in logarithmic scale to corroborate the theoretical findings.\n- The proof appears to follow the approach outlined in [1]. I believe it is possible to enhance the sample complexity in the data dimension from $O(d^{3/2})$ to $O(d)$ by drawing techniques inspired by the state-of-the-art results presented in [2].\n- I believe this paper lacks a comprehensive literature review. It fails to cite closely related empirical studies [3] and theoretical studies [4, 5], as well as the recent advancements in accelerating diffusion models, such as knowledge distillation [6], consistency models [7], adaptive stepsizes [8], parallel sampling [9], randomized midpoint [10], among others.\n\n[1] Chen, Hongrui, Holden Lee, and Jianfeng Lu. “Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions.” International Conference on Machine Learning. PMLR, 2023.\n\n[2] Benton, Joe, et al. “Nearly d-linear convergence bounds for diffusion models via stochastic localization.” (2024).\n\n[3] Dockhorn, Tim, Arash Vahdat, and Karsten Kreis. \"Genie: Higher-order denoising diffusion solvers.\" Advances in Neural Information Processing Systems 35 (2022): 30150-30166.\n\n[4] Wu, Yuchen, Yuxin Chen, and Yuting Wei. “Stochastic Runge-Kutta Methods: Provable Acceleration of Diffusion Models.” arXiv preprint arXiv:2410.04760 (2024).\n\n[5] Li, Xuechen, et al. 
“Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond.” Advances in Neural Information Processing Systems 32 (2019).\n\n[6] Luhman, Eric, and Troy Luhman. “Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed.” arXiv preprint arXiv:2101.02388 (2021).\n\n[7] Mei, Song, and Yuchen Wu. “Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models.” arXiv preprint arXiv:2309.11420 (2023).\n\n[8] Jolicoeur-Martineau, Alexia, et al. “Gotta Go Fast When Generating Data with Score-Based Models.” arXiv preprint arXiv:2105.14080 (2021).\n\n[9] Chen, Haoxuan, et al. “Accelerating Diffusion Models with Parallel Sampling: Inference at Sub-Linear Time Complexity.” arXiv preprint arXiv:2405.15986 (2024).\n\n[10] Gupta, Shivam, Linda Cai, and Sitan Chen. \"Faster Diffusion-based Sampling with Randomized Midpoints: Sequential and Parallel.\" arXiv preprint arXiv:2406.00924 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Is it possible to get a better convergence rate for Runge Kutta methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is very clear. It provides both theoretical and empirical comparisons with the most related papers. \n\n2. It proves a better convergence rate for a second-order sampling method. \n\n3. It also extends the setting to VE SDEs, showing that the analysis framework can be further generalized."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the convergence properties of a second-order discretization method, SDE-DPM-2, for diffusion models. The main result demonstrates that SDE-DPM-2 achieves an improved $O(1/\\epsilon)$ convergence rate to obtain an $O(\\epsilon^2)$ error in KL divergence, surpassing the performance of existing EI discretization methods. Additionally, using similar proof techniques, the paper shows that another widely used second-order method, Runge-Kutta, does not attain this level of convergence. Further analysis extends these results to the VE SDE, achieving a comparable convergence rate."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) The biggest weakness of this paper is the stringent assumptions. In Assumption 2, this paper assumes the Taylor expansion is accurate, while in most of the previous works for SDE analysis, only the value accuracy is needed. I have seen similar assumptions in [1], which assume the closeness of the Jacobian matrix with respect to $x$ in dealing with ODE analysis. They also showed that such an assumption is not required for SDE. However, Assumption 2 in this paper, though for SDE analysis, is even stronger than that, because it assumes that the time-derivative is also close.\n\nMoreover, Assumptions 3 and 4 are also very strong. When t is close to 0, the score function will get close to the gradient of the probability density function of the data distribution. Such boundness assumptions thus will require the smoothness of the data distribution. As is shown in Appendix B, it can only hold when the data distribution is a Gaussian mixture. This diverges from many useful data distributions, especially when the data is constrained on a low-dimension manifold. As a result, I think more discussions are required to verify the reasonability of these assumptions.\n\n(2) The writing of the paper is a little inconsistent. For example, in equation (6), the first-order derivative is approximated with the value of the score function, while in equation (13) it becomes the partial derivative concerning t and x. Moreover, the notation used here, defined in Line 204 is not standard and very confusing. In Line 201, it says “The difference between the EI and SDE-DPM-2 schemes lies in the approximation of the score function”, while in Line 401, it says “The key difference between EI and SDE-DPM-2 lies in the update scheme at each time interval” It is unclear whether they have the same meaning or not.\n\n(3) The description of the contribution is a little bit inaccurate. It claims that SDE-DPM-2 is more efficient than Runge Kutta. 
However, no guarantee has been given (Corollary 3.3 only shows that the method used in this paper cannot provide a better guarantee for Runge Kutta). It is possible that there exists an analysis of Runge Kutta that can achieve better results. As is shown in the experiment, the performance of Runge Kutta and SDE-DPM-2 is similar, both better than first-order methods. Thus, the claim seems a little strange to me. Moreover, this paper says that for VE SDE, the convergence is aligned with VP SDE. However, the remark under Corollary 5.1 shows that it only works when overlooking the initial error, which is the key difficulty of VE SDE. This point should also be emphasized in the introduction.\n\n(4) The paper is not self-contained. For example, the proof of Proposition 4.2, directly refers to Chen et al 2023a without any explanation. In my opinion, the argument here is far from trivial and should not be omitted.\n\n----\n[1] Li et al. 2024 Towards Non-Asymptotic Convergence for Diffusion-based Generative Models ICLR2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024the,\ntitle={The Convergence of Second-Order Sampling Methods for Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yhmVrA8W0v},\nnote={under review}\n}"
},
"abstract": {
"value": "Diffusion models have achieved great success in generating samples from complex distributions, notably in the domains of images and videos. Beyond the experimental success, theoretical insights into their performance have been illuminated, particularly concerning the convergence of diffusion models when applied with discretization methods such as Euler-Maruyama (EM) and Exponential Integrator (EI). This paper embarks on analyzing the convergence of the higher-order discretization method (SDE-DPM-2) under $L^2$-accurate score estimate. Our findings reveal that to attain $\\tilde{O}(\\epsilon_0^2)$ Kullback-Leibler (KL) divergence between the target and the sampled distributions, the sampling complexity - or the required number of discretization steps - for SDE-DPM-2 is $\\tilde{O}(1/\\epsilon_0)$, which is better than the currently known sample complexity of EI given by $\\tilde{O}(1/\\epsilon_0^2)$. We further extend our analysis to the Runge-Kutta-2 (RK-2) method, which demands a sampling complexity of $\\tilde{O}(1/\\epsilon_0^2)$, indicating that SDE-DPM-2 is more efficient than RK-2. Our study also demonstrates that the convergence of SDE-DPM-2 under Variance Exploding (VE) SDEs aligns with that of Variance Preserving (VP) SDEs, highlighting the adaptability of SDE-DPM-2 across various diffusion models frameworks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion models",
"reserve SDE"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/05438fefd4ed15a3db0c998a83a272d5afc8da3f.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "The Convergence of Second-Order Sampling Methods for Diffusion Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yi3QcCGfP1 | Enhancing Certified Robustness via Block Reflector Orthogonal Layers | main | Active | Certified robustness;Adversarial | alignment, fairness, safety, privacy, and societal considerations | 3;5;6;6;6 | 4;4;3;3;2 | 1;3;2;3;3 | 1;2;2;3;3 | 1;3;3;3;3 | 5.2 | 3.2 | 2.4 | 2.2 | 2.6 | -0.733359 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well written. The motivation is clear and the contribution BRO is well stated. \n- The BRO method leverage FFT for convolution in a similar fashion as the Caley approach\n- The authors have performed an extensive set of experiments to demonstrate their results (comparaison and ablation study)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper propose a new orthogonal parametrization, Block Reflector Orthogonal layer (BRO), that can be used in the context of Lipschitz neural networks and provide certified robustness against adversarial attacks. The authors state that the BRO method to construct orthogonal layers using low-rank parameterization is both time and memory efficient, while also being stable during training as it does need \nan iterative approximation algorithms. The authors also propose a theoretical analysis and develop a novel loss function, Logit Annealing loss, to increase certified accuracy. The authors perform an extensive set of experiments to demonstrate their finds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- BRO is not compared to the Cayley approach in Figure 2. Does BRO offer better memory and runtime against Cayley? can Cayley orthogonal layers appear in Figure 2?\n- For the comparison on certified accuracy, there are a lot of moving parts in the experimental section and this tends to become confusing, I would suggest the authors to simplify the experiments and focus on the comparison with the state of the art, which is LiResNet (Hu et al. 2024). \n- It seems that the authors did not take the results of the latest version of Hu et al. 2024 (which came out in June 2024) as there seems to be a huge gap in the reported results. Can the authors comment on this?\n- Table 1 reports the first results of SLL, but there was an erratum in the latest version and the results have been revised (see Table 7 in the arxiv version of the paper). \n- It seems that BRO does not perform very well at large radius, is this due to the parameterization of the loss? \n- Hu et al. 2024 have shown that their approach scales to ImageNet, can the authors provide certified results for ImagNet?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is supported by good theoretical analysis.\n\n2. The BRO improves both efficiency and stability compared with previous work.\n\n3. The proposed Annealing Loss could serve as a scalable method to enhance the training for the Lipschitz neural network."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents a novel BRO layer for constructing Lipschitz neural networks that leverages low-rank parameterization, avoiding iterative approximations to enhance both memory and computational efficiency. The introduction of the Logit Annealing loss function addresses the complexity limitations of Lipschitz networks, contributing to improved learning of margin properties. This work promises good advancements in robust and efficient Lipschitz neural networks design."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **The scalability to other large models**: The proposed BRO layer is currently designed to function within specific neural network architectures, such as the BRONet Architecture, maybe with constrained parameters and fixed configurations. This limitation raises concerns about its practical applicability to large-scale foundation models. Ensuring the Lipschitz condition for these more complex and expansive models could be challenging. Nonetheless, the impact of this work would be significantly enhanced if the authors could empirically demonstrate that BRO effectively improves the robustness of more complex and diverse neural networks.\n\n2. According to lines 276-279, the BRO layer is not a universal approximation for orthogonal layers and the author empirically demonstrates BRO is competitive with that of LOT and SOC. A more detailed analysis would be beneficial to illustrate that the error introduced by this non-universal approximation is minimal.\n\n3. I am curious about whether the choice of activation function affects the Lipschitz constant."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Have you tested BRO's performance on larger datasets like ImageNet?\n- How sensitive is the method to the choice of rank in BRO and how should practitioners select it?\n- How does the LA loss perform compared to other margin-based losses beyond CE+CR?\n- What are the main failure cases or limitations of your approach?\n- Have you considered comparing against other certified defense methods beyond Lipschitz approaches?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Strong theoretical foundations with clear mathematical proofs for their proposed methods\n- Significant computational efficiency gains compared to existing approaches (SOC, LOT)\n- Comprehensive experiments across multiple datasets and architectures\n- Novel loss function with clear theoretical motivation and empirical benefits\n- State-of-the-art certified robustness with fewer parameters\n- Thorough ablation studies demonstrating impact of each component"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel approach called Block Reflector Orthogonal (BRO) layer for constructing Lipschitz neural networks with certified robustness guarantees against adversarial attacks. The key innovation is a new parameterization scheme that creates orthogonal layers without requiring iterative approximation algorithms, making it both computationally efficient and numerically stable compared to existing methods like SOC and LOT. The authors use BRO to develop BRONet, which achieves state-of-the-art certified robustness on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. Additionally, they provide theoretical analysis showing that Lipschitz networks have inherent limitations in margin maximization due to limited model complexity, leading them to propose a new Logit Annealing (LA) loss function that employs an annealing mechanism to help models learn appropriate margins for most data points rather than overfitting to maximize margins for specific examples. Through extensive experiments, they demonstrate that their combined approach (BRO layer + LA loss) outperforms existing methods while requiring fewer parameters and computational resources."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- BRO layer is not a universal orthogonal parameterization (acknowledged by authors)\n- Limited experiments on larger datasets beyond Tiny-ImageNet\n- No comparison with empirical defenses or other certified robustness approaches beyond Lipschitz methods\n- Lack of investigation into potential failure cases or limitations of the LA loss\n- Some hyper-parameters (rank selection, LA loss parameters) require manual tuning"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses section for questions about the theory, notation, and typos.\n\n- Can authors implement the convolution without the FFT? Just creating $W_{\\text{Conv}}$ and applying the standard Conv2D function from torch. Are the results the same as in Algorithm 1? Your outputs should be the same, and it's a good way to debug if anything is wrong.\n- Did authors try LiResNet + BRO only for Table 2? That is, only changing the backbone without changing the loss.\n- The BRO layer is for sure orthogonal in the case of fully connected layers. Have authors tried comparing the different layers in the fully connected case? I.e., training fully connected classifiers on a small dataset (CIFAR-10 / MNIST). I don’t believe this experiment is super interesting, but it is the only case where orthogonality is clear at the moment."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The strengths of this work are mainly on the experimental side. Assuming the theoretical results hold:\n\n- More efficient and scalable models than the previous art due to exact 1-Lipschitz layers without the need for iterative approximations.\n- Improved certified accuracy. Authors demonstrate that both their BRO convolutional layer and LA loss help improve the certified accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel 1-Lipschitz layer based on Block-reflector matrices (BRO convolution), which are orthogonal. Additionally, authors propose a Logit Annealing (LA) loss to optimize during training in order to favor certified accuracy. Their proposed recipe consistently improves certified accuracy in CIFAR10/100 and TinyImageNet and is shown to be more efficient than previously proposed orthogonal layers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The weaknesses of the paper appear on the theoretical and implementation sides. In general, the paper is very hard to read and has many errors in key parts. This calls into question the validity of their 1-Lipschitz claims for the BRO convolution.\n\n- **Many typos and unclear notation:**\n\t- Everywhere: Please use $c_{\\text{in}}$ and not $c_{in}$ when subscripting or superscripting with words.\n\t- Algorithm 1, line 1: I assume $c_{\\text{in}}$ and $c_{\\text{in}}’$ are the number of input channels for the current and next layers respectively. Then, why do you need $c_{\\text{out}}$? It is very confusing. I would just remove $c_{\\text{in}}’$ and use $c_{\\text{out}}$ instead.\n\t- Algorithm 1, line 4: $\\tilde{V}:= \\text{FFT}(V)$ should be $\\tilde{V}:= \\text{FFT}(V^{\\text{pad}})$.\n\t- Lines 205 and 214: Is $\\circledast$ the convolution operator?\n\t- Proposition 2: If $J \\in \\mathbb{C}^{m \\times m}$, then $J^* \\in \\mathbb{C}^{m \\times m}$ as well. How can you even perform $J\\tilde{V}J^{*}$ if $\\tilde{V} \\in \\mathbb{C}^{m\\times n}$? Does it have to be that $m=n$?\n\t- Proposition 2, equation 14: If assuming $n=m$, the proof is right, but equation 14 should be: $(\\tilde{V}^* \\tilde{V})^{-1} = J(J\\tilde{V}^*\\tilde{V}J^*)^{-1}J^*$\n\n- **Unclear if BRO convolutions are orthogonal:**\n\nFrom Proposition 1, it’s clear that in the dense case, the BRO layer is orthogonal. However, in the convolutional case there are many unclear aspects.\n\nIn lines 205 and 214, authors state that they construct their BRO convolution based on the kernel:\n\t$$\n\t\tW_{\\text{Conv}} = I - 2V \\circledast (V^{\\top} \\circledast V)^{-1} \\circledast V^{\\top}\n\t$$\nAuthors argue that the circular convolution with this kernel is orthogonal, but do not provide any proofs. Then, given $\\tilde{V} = \\text{FFT}(V)$, authors conclude $\\tilde{W} = \\text{FFT}(W_{\\text{Conv}}) = I - 2\\tilde{V}(\\tilde{V}^*\\tilde{V})^{-1}\\tilde{V}^*$.
It’s unclear how authors arrive at this expression, as it would need to assume that: (i) $n=m$, to be able to do element-wise products (which authors do not mark with $\\cdot$, leading to confusion), (ii) $\\text{FFT}(I)=I$, and (iii) $\\text{FFT}((V^{\\top} \\circledast V)^{-1}) = \\text{FFT}((V^{\\top} \\circledast V))^{-1}$, which are false in general.\n\nMoreover, it is not clear what Proposition 2 implies regarding orthogonality, or how the case when $m=n$ is handled (lines 237-241 are very vague). It would be nice to include an explicit proof that the norm of the output of Algorithm 1 is the same as the input. Exactness of the 1-Lipschitz result is important for ensuring the experimental results about the certified accuracy are valid. So, I believe more rigour should be put into proving the results claimed in this paper.\n\n- **Unclear motivation to use LA and differences with CR:**\n\nAuthors start motivating the need for their LA loss with a very convoluted argument about the minimal CR loss being constrained by the model complexity. Then, they simply present their loss without relating it to the theory they developed for the CR loss. I believe the analysis doesn’t add anything if it is not performed for the LA loss. While I see that LA is less strict, I don’t see how LA solves the issue in Theorem 1.\n\nIt would be more beneficial to fully state the CR loss and compare it with LA. Right now, only the reference is provided. I also think the results in Figure 8 and Section C.2 are more interesting than the current write-up of the main paper.\n\nAll in all, I believe authors should be more careful about their theoretical derivations and be very clear about why Algorithm 1 results in orthogonal convolutions. Without this being clear, the results are meaningless and I cannot propose the paper for acceptance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. What factors contribute to the inconsistent performance gains observed in the proposed BRONet as the perturbation budget increases?\n2. Why were standard deviations or confidence intervals not included in the reported results? Given the marginal improvements, understanding the variability of the outcomes would be particularly valuable.\n3. Has the BRO method been evaluated on more complex datasets, such as full ImageNet, or applied to tasks beyond classification, to assess its scalability and generalization?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The BRO method effectively utilizes low-rank parameterization to construct orthogonal layers, resulting in significant improvements in both computational time and memory efficiency, which are critical for scaling neural networks.\n2. The paper provides a thorough comparison with state-of-the-art techniques, presenting results that are well-structured and articulate, allowing readers to easily grasp the contributions and effectiveness of the proposed method.\n3. The paper is well-structured and logically organized. The presentation of the results is clear and systematic."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents the block reflector orthogonal (BRO) layer, which enhances the construction of Lipschitz neural networks. By employing low-rank parameterization and circumventing iterative approximations, the proposed approach achieves notable gains in both memory and time efficiency. Comprehensive evaluations against existing orthogonal layers reveal its superior robustness, underscoring its potential in advancing neural network performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The evaluation results in Table 1 indicate a degradation in the performance of the proposed BRONet with increasing perturbation budgets. A discussion on this inconsistency would enhance the paper's credibility and provide insight into the limitations of the model.\n2. Results in Table 3 show only marginal improvements over existing methods, raising concerns about the significance of these gains. Including standard deviations or confidence intervals would clarify the statistical significance of the results and help determine whether observed improvements are due to random variance in training.\n3. The experiments are conducted solely on CIFAR and Tiny-ImageNet datasets, which, while widely used, may not fully demonstrate the method's robustness and scalability. Including additional datasets, particularly larger or more complex ones (e.g., ImageNet or real-world benchmarks), would provide a more comprehensive evaluation of the method's efficacy."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a new orthogonal convolution and a novel loss function to enhance certified robustness."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Certified Robustness via Block Reflector Orthogonal Layers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yi3QcCGfP1},\nnote={under review}\n}"
},
"abstract": {
"value": "Lipschitz neural networks are well-known for providing certified robustness in deep learning. In this paper, we present a novel efficient Block Reflector Orthogonal layer that enables the construction of simple yet effective Lipschitz neural networks. \nIn addition, by theoretically analyzing the nature of Lipschitz neural networks, we introduce a new loss function that employs an annealing mechanism to improve margin for most data points.\nThis enables Lipschitz models to provide better certified robustness.\nBy employing our BRO layer and loss function, we design BRONet, which provides state-of-the-art certified robustness.\nExtensive experiments and empirical analysis on CIFAR-10, CIFAR-100, and Tiny-ImageNet validate that our method outperforms existing baselines."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Certified robustness",
"Adversarial"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8de0265482b6bf213f9bf4c41983b4af6df5d33e.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/93b1b773838c55b3e5b3427eb3cb88fd6d88b65f.zip"
},
"title": {
"value": "Enhancing Certified Robustness via Block Reflector Orthogonal Layers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yiGSI7Ou3i | Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization | main | Active | diffusion model;parameter generation;personalization | foundation or frontier models, including LLMs | 3;5;5;6 | 3;2;3;4 | 2;3;3;3 | 2;3;3;3 | 3;2;3;3 | 4.75 | 3 | 2.75 | 2.75 | 2.75 | 0.324443 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Nit: Please add a reference for the claim “We choose DiT as the backbone because it can be easily scaled up and is shown to have great generalization and expressiveness.” (Line 210)\n\nIn Table 3, it is not clear how many classes were predicted. This is important to assess the reported accuracies.\n\nCan you please comment on how you would set the number of model parameters for new unseen tasks?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Comprehensive Experimental Analysis: The paper includes a robust set of experiments, covering different prompt types, model architectures, dataset sizes, and scaling laws. These analyses provide a clear understanding of Tina’s capabilities and boundaries, and they validate (to some extent) the model’s effectiveness in generating personalized networks under varying conditions.\n2. Novel Approach to Model Personalization: The paper builds on the concept of train-once-for-all personalization, allowing a single pre-trained model (Tina) to generate personalized models dynamically based on text prompts. This can potentially eliminates the need for separate training per task, making the approach highly efficient and versatile."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a new generative AI framework named Tina, which can generate \"personalized\" neural network models based on text prompts. This approach, called train-once-for-all personalization, enables a single model to generalize and create task-specific models on demand without the need to fine-tune the model on task-related data. Tina leverages a diffusion transformer model conditioned on descriptions encoded with a CLIP model to understand and apply user-specific knowledge, even with a small training dataset. It demonstrates strong performance in generating models for both in-distribution and out-of-distribution tasks, supporting zero-shot and few-shot scenarios with images and adapting to different classification settings. The framework opens possibilities for text-to-model applications, expanding the range of personalization within neural networks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The approach is not scalable. The experiments do not demonstrate that the approach scales to larger numbers of classes (limited to 10) or to more complex models. The paper presents what seems like a good proof of concept, but it would require more work to demonstrate the effectiveness of the approach on larger, more complex problems.\n2. The datasets used are too small and simple to validate the approach properly.\n3. One very important baseline that is missing is direct fine-tuning, which should be an upper bound. The selected baselines are not representative enough to show what loss in performance to expect with Tina.\n4. The generic model in the experiments seems to be quite bad even on the in-distribution tasks. I would have expected it to perform better, with improvements coming from Tina."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see the Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It’s an interesting idea of using text-conditioned diffusion models to generate neural network parameters based on varying requirements. \n\n2. Extensive experiments have been conducted to validate the effectiveness of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on neural network parameter generation and utilizes diffusion models for text-to-model generation. With just one training session, the proposed method achieves outstanding results in both out-of-distribution and in-distribution model personalization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The proposed method is currently limited to personalizing models for image classification tasks. As a pilot study for generating neural networks with diffusion models, it does not fully support the title of \"train-once-for-all.\" Conducting more experiments on detection and segmentation would enhance the overall credibility of the study.\n\n2. The method can generate only a relatively small number of parameters—specifically, around 640 parameters in the classifier layers of ResNet-20. It still heavily relies on the feature extraction module of the generic model. Therefore, the significance of \"text-to-model\" is weakened if part of the model parameters is already provided.\n\n3. The ablation of text prompts indicates that the proposed method is sensitive to the input prompt. Could training with mixed prompts improve the stability?\n\n4. In traditional diffusion models, the inclusion of random noise could improve the diversity of the output. But it seems useless in the proposed method because the aim of Tina is to find the best classifier without considering diversity."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The writing is clear and easy to follow.\n- The discussed topic and motivation are both innovative and significant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Tina, a novel framework that leverages text-conditioned neural network diffusion for generating personalized models from textual prompts. It addresses the scenario of train-once-for-all personalization, aiming to create customized models for diverse end-users and tasks using text prompts. Tina is designed to generalize across in-distribution and out-of-distribution tasks, even with limited training data. The paper claims that Tina demonstrates an understanding of world knowledge by analyzing its capabilities under various conditions, including zero-shot/few-shot image prompts, different numbers of personalized classes, and predicting unseen entities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Though I'm not well-versed in the subject of this article, I'm still amazed by the \"text-to-model\" concept. I'm skeptical about the \"train-once-for-all\" approach since the authors didn't provide any code or demos to back up the experimental results.\n\n- What kind of experimental settings did Tina use in the text-to-model task—simple or challenging? What are the limits of Tina's capabilities?\n\n- I'm very curious whether the proposed Tina has theoretical support."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The main difficulty for this prediction is its limitation to relatively small neural networks. Assuming that we want to predict a 1B-parameter transformer network, how would you address it?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper has solid technical contribution.\n2. The proposed method is novel and clean.\n3. The experimental results are also strong."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper investigates the capability of GenAI for text-to-model generation, to see whether GenAI can comprehend hyper-level knowledge embedded within AI parameters themselves. The basic idea is to use diffusion transformers to generate parameters token by token, where each token is a set of parameters in a specific layer. The model is trained with a supervised learning approach: a user-provided text description is fed in, and a diffusion model synthesizes the personalized network parameters. The results seem quite interesting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing of this paper needs improvement. The introduction is quite obscure and high-level. It only shows some broad ideas without elaborating much on the actual implementation. I would suggest the authors hint a bit at how they tokenize the parameters and use DDPM to predict the actual parameters, etc. This could help readers gain clearer insights.\n2. The evaluated datasets are still a bit toy-like and simple. The whole paradigm still requires more thorough, large-scale experiments for validation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nli2024texttomodel,\ntitle={Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization},\nauthor={Zexi Li and Lingzhi Gao and Chao Wu},\nyear={2024},\nurl={https://openreview.net/forum?id=yiGSI7Ou3i}\n}"
},
"abstract": {
"value": "Generative artificial intelligence (GenAI) has made significant progress in understanding world knowledge and generating content from human languages across various modalities, like text-to-text large language models, text-to-image stable diffusion, and text-to-video Sora. While in this paper, we investigate the capability of GenAI for text-to-model generation, to see whether GenAI can comprehend hyper-level knowledge embedded within AI itself parameters. Specifically, we study a practical scenario termed train-once-for-all personalization, aiming to generate personalized models for diverse end-users and tasks using text prompts. Inspired by the recent emergence of neural network diffusion, we present Tina, a text-conditioned neural network diffusion for train-once-for-all personalization. Tina leverages a diffusion transformer model conditioned on task descriptions embedded using a CLIP model. Despite the astronomical number of potential personalized tasks (e.g., $1.73\\times10^{13}$), by our design, Tina demonstrates remarkable in-distribution and out-of-distribution generalization even trained on small datasets ($\\sim 1000$). We further verify whether and how Tina understands world knowledge by analyzing its capabilities under zero-shot/few-shot image prompts, different numbers of personalized classes, prompts of natural language descriptions, and predicting unseen entities."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Zexi_Li1",
"~Lingzhi_Gao1",
"~Chao_Wu1"
]
},
"authors": {
"value": [
"Zexi Li",
"Lingzhi Gao",
"Chao Wu"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion model",
"parameter generation",
"personalization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "li|texttomodel_textconditioned_neural_network_diffusion_for_trainonceforall_personalization"
},
"pdf": {
"value": "/pdf/990e8544e0172b713c5c9296e849ad2c48a0c784.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Text-to-Model: Text-Conditioned Neural Network Diffusion for Train-Once-for-All Personalization"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yiQCeXdPvs | DIRECT: Deep Active Learning under Imbalance and Label Noise | main | Active | Deep Learning;Active Learning | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;3;6 | 4;4;4;3 | 2;3;2;2 | 2;1;2;2 | 2;2;2;2 | 3.75 | 3.75 | 2.25 | 1.75 | 2 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "What is the starting point for AL, and how do you explain the initial differences?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper addresses an important problem of label imbalance and noises in AL.\n2. The paper improves upon existing methods like GALAXY."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an AL strategy that deals with label imbalance and label noise. The proposed method uses separation thresholds similar to the existing method GALAXY, but enables parallel annotation and is robust to label noise. The proposed method is compared with GALAXY and other AL baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of the paper is poor. While there is additional content in the appendix, the paper does not utilize the main page limit well. The figure caption of Figure 3 overlaps with main text. The algorithm is not clearly presented with too much text and no numbers for equations. \n2. The results are presented poorly and unreliable. It is unclear where the AL starts and the different starting points of curves are confusing. It is also difficult to see the difference between variants of the proposed method and GALAXY in some cases."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1)\tAbout the construction of the imbalanced datasets, I am unclear about the rationale behind grouping certain classes into a larger class to create an imbalanced dataset, as this results in an inconsistent number of classes compared to the original dataset. A simpler approach, such as using a pre-existing imbalanced dataset like CIFAR-10-LT or selectively sampling data to form a minority class, might have been more straightforward. Could the authors explain the motivation behind this grouping strategy? \n\n2)\tThe methods used for comparison appear inconsistent across different datasets. Could the authors clarify the criteria for choosing comparison methods, and explain any reasons for these discrepancies?\n\n3)\tWhen plotting the learning curves, is the budget spent on B_{parallel} included in the overall labeling budget? Clarification on this point would help in interpreting the learning curves accurately?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The study tackles a complex and meaningful setting where both class imbalance and label noise are present, which is a valuable and practical area of focus for active learning. The idea of identifying and querying near the decision boundary, particularly for minority classes, is technically sound and shows promise for improving classification performance in imbalanced data contexts. The experiment includes recent active learning methods in its comparative analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an active learning algorithm designed to handle both class-imbalance and label noise in classification tasks. The proposed method reformulates the multi-class classification problem into multiple one-vs-rest tasks and employs the VReduce algorithm to estimate classification thresholds for each class. Data points near these thresholds are then selected for querying. Experiments are conducted to demonstrate the effectiveness of the approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1)\tMy primary concern is the limited technical contribution. The proposed method largely leverages previously established algorithms, and the theoretical contributions appear modest.\n\n2)\tAlthough the paper claims to address label noise, the algorithm itself does not explicitly manage or mitigate label noise. I think the noise may relate to the agnostic learning, but in section 5.2 in the experiment, the authors conduct experiments with different levels of label noise, which confuses me.\n\n3)\tEstimating the prediction threshold for minority classes could be challenging due to the limited sample sizes within these classes. This issue is not thoroughly addressed in the paper.\n\n4)\tThe paper would benefit from further proofreading to address minor errors and improve readability. For instance, lines L269 and L278 contain typographical or presentation mistakes that should be corrected."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the above Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The approach of combining class separation thresholds with one-dimensional active learning is innovative and provides a fresh perspective on tackling these issues.\n\n2. The extensive experiments conducted on imbalanced datasets provide a strong foundation for the claims made. The results indicating a 60% to 80% reduction in annotation budgets are compelling and demonstrate the algorithm's effectiveness compared to existing methods.\n\n3. The paper is well-organized, with clear explanations of the methodology and results, making it accessible to readers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel active learning strategy called DIRECT, which performs well under conditions of class imbalance and label noise. Specifically, the algorithm effectively identifies the optimal class separation threshold and adaptively selects samples for annotation. Experimental results demonstrate that DIRECT significantly improves labeling efficiency, providing an effective solution for scenarios affected by class imbalance and label noise."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There is a formatting error at the bottom of page 5 that requires correction to improve the document’s presentation.\n\n2. The authors are encouraged to include additional visualizations to more effectively demonstrate the experimental results and clarify the method’s performances.\n\n3. The authors identified three limitations in the GALAXY method and addressed these by optimizing the separation threshold. It would be beneficial to provide a more detailed explanation of this approach to help readers better understand the proposed improvements.\n\n4. The authors demonstrated the robustness of their method under class imbalance and label noise conditions. Please clarify the sources of robustness in each scenario to highlight the model’s adaptability.\n\n5. In the experimental section, the authors only compare label noise levels at 0%, 10%, 15%, and 20%, leaving out 5%. It would strengthen the study’s comprehensiveness to either include results at 5% noise or explore higher noise levels to make the noise experiments more comparable and thorough."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The proposed approach relies on an \"optimal threshold\" derived from the training set, but this threshold is based on true labels, which are inaccessible during active learning. Given that this threshold may not generalize to the unlabeled dataset, how can we justify that the threshold remains optimal or even effective for active learning purposes, especially without analyzing data distribution shifts between labeled and unlabeled sets?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.It provides a thorough summary of some classical methods in active learning within the related work section, which is informative for readers less familiar with the field.\n2.The paper aims to address the significant problem of active learning under conditions of label noise and class imbalance which are increasingly relevant in large-scale data applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a threshold-based active learning approach aimed at handling label noise and class imbalance. The main idea for handling class imbalance is to use a deep learning method to simplify and reformulate it into a one-dimensional active learning task with a threshold learner."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.I believe the method proposed by the authors contains fundamental and theoretical flaws. The validity of the “sample around the optimized threshold” approach itself is highly questionable. This sampling strategy is merely an intuition; even if the authors could accurately identify this threshold, they lack any proof that sampling around it is optimal. While the authors claim this method would encourage the model to select a balanced number of samples from each class, this strategy is not necessarily the best. From a learning theory perspective, as with all active learning methods, this sampling process distorts the training distribution, potentially undermining the learning guarantees. Furthermore, from a statistical learning perspective, if the majority class has a more complex distribution, naturally more data would be required to approximate the distribution.\n2.The so-called \"optimal threshold\" is based on true labels, which are inherently inaccessible in active learning, making accurate computation of this threshold infeasible in practice. Moreover, their hypothesis class is defined solely as a threshold classifier over the output of a deep learner. Without analyzing the behavior of this deep learning output and the underlying data distribution, any claims regarding the behavior of a hypothesis learned by empirical risk minimization (ERM) within this hypothesis class are, in my view, meaningless. Therefore, their proof is fundamentally flawed and fails to establish any meaningful guarantee for the proposed threshold and sampling strategy in active learning contexts. In fact, I think the authors’ proof in the appendix only establishes the equivalence between the ERM solution on the training data based on the learner’s output and an \"optimal threshold\" on this same training set. This is unrelated to what they actually need to prove—that this solution can serve as an optimal threshold for active learning on the unlabeled data.\n3. 
The authors claim to address label noise in addition to class imbalance; however, the paper lacks even a formal definition of label noise, and it remains unclear how label noise is actually addressed. Despite incorporating noisy scenarios in their experiments, the authors do not propose any specific strategies to mitigate or manage noisy labels, leaving it ambiguous how their method effectively handles this issue."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024direct,\ntitle={{DIRECT}: Deep Active Learning under Imbalance and Label Noise},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yiQCeXdPvs},\nnote={under review}\n}"
},
"abstract": {
"value": "Class imbalance is a prevalent issue in real world machine learning applications, often leading to poor performance in rare and minority classes. With an abundance of wild unlabeled data, active learning is perhaps the most effective technique in solving the problem at its root -- collecting a more balanced and informative set of labeled examples during annotation. Label noise is another common issue in data annotation jobs, which is especially challenging for active learning methods. In this work, we conduct the first study of active learning under both class imbalance and label noise. We propose a novel algorithm that robustly identifies the class separation threshold and annotates the most uncertain examples that are closest from it. Through a novel reduction to one-dimensional active learning, our algorithm DIRECT is able to leverage classic active learning theory and methods to address issues such as batch labeling and tolerance towards label noise. We present extensive experiments on imbalanced datasets with and without label noise. Our results demonstrate that DIRECT can save more than 60% of the annotation budget compared to state-of-art active learning algorithms and more than 80% of annotation budget compared to random sampling."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep Learning",
"Active Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/79f9aff73438cd6e08d054510745d2757fc7dcdc.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DIRECT: Deep Active Learning under Imbalance and Label Noise"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yitH9xAHQs | Forewarned is Forearmed: Harnessing LLMs for Data Synthesis via Failure-induced Exploration | main | Active | data synthesis;preference learning;LLM alignment | applications to computer vision, audio, language, and other modalities | 3;5;5;6 | 3;4;4;4 | 2;2;2;3 | 3;3;2;3 | 2;2;3;3 | 4.75 | 3.75 | 2.25 | 2.75 | 2.5 | 0.927173 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- What is $R(\\cdot)$ in line 204?\n\n- Performance for ReverseGen in Tables 1 and 2 has not saturated with increasing iterations and it seems like you are under reporting results. What would happen if you ran this for 4 or 5 iterations? At what point would performance saturate? What if you generated 10000k instruction candidates and performed t=1 iteration versus 2000k instruction candidates with t=5 iterations?\n\n- When using harder and harder samples to train a model for example in active learning, or data selection, where data points are prioritized with a larger loss then this can cause a negative feedback loop with a catastrophic drop in performance from a target model. Did you observe similar artifacts in your experiments?\n\n- In Tables 1, 2, 3 you have a rows ‘without failure induction’ but you do not describe what this ablation is?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The idea of using the language model prediction errors to create a set of easy and hard examples to train a proposer model with DPO is novel and a nice idea.\n\n- Interesting results that show that harder to predict data points aka using a curriculum harder and harder questions is beneficial for some red teaming and honesty benchmarks. This is interesting since, in comparison other papers such as [1], show that hard samples actually hurts target model performance albeit in a different dataset domain.\n\n- Wide range of experiments: red teaming, honesty and mathematical reasoning to demonstrate that the method can generalize to multiple domains.\n\n[1] Evans, Talfan, et al. \"Bad students make great teachers: Active learning accelerates large-scale visual understanding.\" arXiv preprint arXiv:2312.05328 (2023)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose an iterative finetuning method for finetuning a target langague model by using synthetic data generation from a proposer language model which proposes harder and harder questions to a target language model. This is in effect a curriculum learning approach which trains a target model on harder and harder samples. The proposer model is also trained to propose harder and harder questions by using errors in the target model’s answers.\n\nA proposer language model generates few-shot candidate questions. Then the target model predicts answers to these questions. The answers are then compared to gpt-4o-mini's answers, which is used as a gold-standard. If the answers agree then this question is placed into the negative set {x^-}. If the answers from the target model does not agree with the gpt-4o-mini then the question is placed in the positive set {x^+}. These sets are then used by DPO is used to finetune the proposer model to produce harder and harder samples by leveraging the positive and negative sets. Finally, the proposer model generates synthetic questions which are deemed hard for the target model, labels are generated by the proposer model or gpt-4o-mini. The target model is then trained with SFT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Weak results on mathematical reasoning which do not demonstrate considerable improvement in performance. Nor are many other similar iterative methods which have some sort of synthetic question-answer prioritization compared to [2, 3].\n\n- No ablation experiments, what is the performance with 1 iteration versus 5? What if you generate 1k or 10k samples to populate the positive and negative sets for DPO training?\n\n[2] Lee, Nicholas, et al. \"Llm2llm: Boosting llms with novel iterative data enhancement.\" arXiv preprint arXiv:2403.15042 (2024).\n[3] Jiang, Yuxin, et al. \"Lion: Adversarial distillation of proprietary large language models.\" arXiv preprint arXiv:2305.12870 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is overall well written and easy to understand.\n* The proposed approach is novel. It performs RLHF to train the data generator, with the target model works as a preference provider."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a data synthesis approach by training a data generator and leverage the performance of the target model as a training signal. Specifically, the predictions of the target model are used to construct a preference dataset (target model's failure cases are preferred) for the training of the data generator which performs DPO on top of those preference data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The experiments miss 2 significant baselines. \n* To verify the effect of the proposed RLHF approach, there should a baseline finetuning the data generator (proposer LLM) with a collection of failed samples, and generate a dataset.\n* A strong LLM (i.e., gpt-4o) plays an important role in the proposed method when obtaining the oracle label, so there should another baseline directly prompting the gpt-4o multiple times to generate a synthetic dataset."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Since the technical section and implementation details are somehow mixed, it is really hard to understand some of the details of each of the experiments. I have the following questions about the evaluation:\n\n- In what experiments, GPT-40-mini used? How was this decided? Is there an ablation study on its use?\n- For the honesty calibration and mathematical reasoning experiment, what number of \"proposer fine-tuning\" iterations are used in ReverseGen? \n- Why is Llama3 used in 4.4 but not in the previous experiments?\n\nCan you add a table that maps each of the variables in the technical section to the specific choice made in each experiment? This would make it easier for the reader to understand each experiment. For instance, the columns could be $M_{prop}, M_{tgt}$, number of examples, seed data, number of iterations, use of GPT-4o-mini, and other implementation details for each experiment."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper introduces a new technique for synthesizing failure-inducing data for a target LLM \n\n- The technique is effective in 3 distinct domains of safety, honesty, and mathematical reasoning and shows improvement on SOTA on each. I appreciate the inclusion of Table 5 and Table 7 with examples in the evaluation section"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a technique called ReverseGen for generating failure-inducing examples for an LLM. The technique uses a proposer model that is fine-tuned using pairs of positive and negative examples. The evaluation shows that the generated data can be used to fine-tune and improve models in safety, honesty, and mathematical reasoning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The writing of the paper, especially the separation of the technical section and implementation details needs to be improved. Many of the evaluation choices seem a bit arbitrary and need to be organized better to understand how each of the applications of the proposed technique fits into a single framework (if they do). \n\n- Line 126: “ these studies tend to focus on particular tasks or inputs, and overlook the task generalization and the data salience on model enhancement. Our work differs from these prior works in that we focus on generating failure-guided data for target models”\nCan you be more specific in comparison with each of these prior works? It is unclear to me how ReverseGen differs from many of these works mentioned in the related works\n\n\n- Line 176: The term “solvable” and “unsolvable” is defined here and never mentioned here. The term “solvable” for positive examples is quite unclear. Use a more appropriate name for this.\n\n\n- Line 202: Section 3.2 is the technical section and it starts talking about “gpt-40-mini”. I would recommend authors to separate the implementation details from the technical section. “Gpt-40-mini” is a specific model and used to label the responses, define the model used for labeling as a parameter of the technique that’s instantiated as a specific model. \n\n## Minor:\n- Line 189: “We begin by warming up the proposer model $M_{prop}$ with the initial task-specific instruction set” - A bit unclear wording. Can be more technically precise, especially in the technical section of the paper\n\n- Line 190: “3-shot prompts” - this seems like implementation detail as well which would be more appropriate if it was in the evaluation section\n\n- Line 204: What’s R(.)? 
I don’t see it defined anywhere before\n\n- Line 220: “$M_{ref}$ is the reference model that remains unchanged”: this doesn’t really define what is $M_{ref}$\n\n- Line 227: typo - two dots “..”\n\n- Line 275: why were these hyperparameters chosen?\n\n- Line 284: “Responses from the target model are generated using greedy decoding for deterministic quality measurement, with the temperature parameter set to 0” - Doesn’t greedy already mean temperature doesn’t matter?\n\n- Line 287: “Instructions for the SFT data are produced by proposer models, while responses are derived from the target models using tailored prompts for safety tasks, generated by gpt-4o-mini for knowledge-intensive tasks.” -> the sentence is too long and hard to understand. What are “knowledge-intensive tasks” in this context?\n\n- Line 319: “well-safe”?\n\n- Line 429: “ReverseGen solely relies on synthetic data” - Doesn’t it use MMLU as the initial instruction seed?\n\n- Line 523: “This may result from inadequate assessment of REVERSEGEN’s proposed questions by the current benchmark or the need for more effective fine-tuning algorithms for difficult questions.” - a bit unclear"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* Could you provide theoretical or empirical justification for why failure-inducing data would be more valuable than standard training data for model improvement?\n* How does REVERSEGEN's performance evolve with increasing iterations? What determines convergence?\n* Could you analyze why failure induction appears less effective for mathematical reasoning (Table 6) compared to other tasks?\n* What are the computational requirements (tokens, API calls, training time) compared to baseline methods?\n* Could you provide more details about the reward mechanism design and validation process?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Novel approach to data synthesis through failure exploration\n* Comprehensive evaluation across three important tasks (safety, honesty, math reasoning)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces REVERSEGEN, a method for generating training data by identifying model weaknesses through failure-inducing exploration."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The paper lacks clear justification and motivation for why generating failure-guided data would improve model performance\n* No theoretical framework explaining why failure cases would be more valuable than standard training data\n* Table 6 shows similar result w and wo failure induction in math reasoning task, does this mean failure induction does not benefit math reasoning tasks?\n* No analysis of computational costs or token/API budget comparisons with baseline methods\n* Reward mechanism not clearly explained\n* Insufficient baseline comparisons, especially for mathematical reasoning task"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel approach for automatically generating effective training samples from the target model's failure cases by optimizing another model to create samples via iterative preference learning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024forewarned,\ntitle={Forewarned is Forearmed: Harnessing {LLM}s for Data Synthesis via Failure-induced Exploration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yitH9xAHQs},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data, leading to impressive performance across a range of downstream applications. Current methods often rely on human-annotated data or predefined task templates to direct powerful LLMs in synthesizing task-relevant data for effective model training. However, this dependence on manually designed components may constrain the scope of generated data, potentially overlooking critical edge cases or novel scenarios that could challenge the model. In this paper, we present a novel approach, \\name, designed to automatically generate effective training samples that expose the weaknesses of LLMs. Specifically, we introduce a dedicated proposer trained to produce queries that lead target models to generate unsatisfactory responses. These failure-inducing queries are then used to construct training data, helping to address the models' shortcomings and improve overall performance. Our approach is flexible and can be applied to models of various scales (3B, 7B, and 8B). We evaluate \\name on three key applications—safety, honesty, and math—demonstrating that our generated data is both highly effective and diverse. Models fine-tuned with \\name-generated data consistently outperform those trained on human-annotated or general model-generated data, offering a new perspective on data synthesis for task-specific LLM enhancement."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"data synthesis",
"preference learning",
"LLM alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/08b445879093527964bd89e7fbecbffdd8389f6b.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Forewarned is Forearmed: Harnessing LLMs for Data Synthesis via Failure-induced Exploration"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yizEOJVFFd | Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment | main | Active | Large Language Model;Fine-tuning;Self-play | alignment, fairness, safety, privacy, and societal considerations | 3;3;5;6 | 4;4;4;4 | 2;2;2;3 | 2;2;2;2 | 2;3;3;3 | 4.25 | 4 | 2.25 | 2 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above questions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Abundant Experiments: The paper conducts many experiments to show the performance of the proposed and compared methods\n2. Clear Presentation: The paper's presentation is clear and easy to follow"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces an improved method for LLM alignment called SAPO. The method targets mitigating the need for paired preference data in the alignment stage by introducing an EMA model and data augmentation methods for creating dispreferred data. Results show that SAPO achieves some improvements compared to selected baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Marginal and Inconsistent Improvement: Experiments show that the improvements from SAPO are in many cases marginal (e.g., DPO-based Llama-3-8B improves by 0.8 and Mistral-7B by 0.1 over the SPIN baseline). Sometimes, it is even worse than baseline methods (e.g., AlpacaEval 2.0).\n2. Potential Low Training Efficiency: Since the method involves sampling and building new dispreferred responses at each iteration, its training efficiency could be problematic compared to a typical self-play method like SPIN, which samples at the end of each phase. Can you provide detailed profiling of the time consumed for each stage in the iteration?\n3. Weak Baselines: I notice the authors use meta-llama/Meta-Llama-3-8B and mistralai/Mistral-7B-v0.1 (which are the base models, rather than the instruct ones) as the baselines. However, these base models have not been aligned using SFT and RLHF, so they do not serve as proper baselines against those alignment methods.\n4. Intuition of generating B': I am concerned with the continuity of regenerating the middle segment B' from B. How do you select the segment, by a mere 256 tokens? In that case, how do you ensure that the new B' is semantically continuous with the original C? Besides, it is doubtful that the regenerated B' is guaranteed to be worse than the original one. These questions leave me unconvinced of the soundness of the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Along with some points above, some additional questions are:\n\n**1. Clarification on segment-level augmentation?**\n\nAs the authors stated in Equations (4) and (5), segment B is selected as a target to refine/augment as a rejected response. Then, segment C is appended to the refined B' for the next iteration. While refining B and C simultaneously as a continuation of A sounds intuitive, the rationale behind appending C to B' is quite unclear, as some contextual mismatch could exist between B' and C. What are the intuitions behind the choice of this specific segmentation scheme?\n\n**2. Segment token length and general performance?**\n\nThe last paragraph of Section 5.3 ablates different segment token lengths and concludes that 256 was ideal. However, the impact of 256 tokens differs by the expected response length of the dataset, especially in a general chat dataset like Capybara. Interpreting the effect of segment token length by its ratio over the expected response length would more clearly demonstrate the effectiveness of SAPO.\n\n**3. Abnormal AlpacaEval generation length for Mistral-SFT in Table 2?**\n\nI noticed that the AlpacaEval 2.0 generation length for wandb/mistral-7b-zephyr-sft is excessively long, as are the DPO and SPIN-DPO trained versions. However, SAPO-DPO-Mistral-7B suddenly recovers to the normal range (compared to the overall generation lengths of the models registered in the official leaderboard). Some clarifications on the excessively long generation length of wandb/mistral-7b-zephyr-sft, and some insights on how SAPO-DPO is the only method in the Table resolving this issue, would be helpful."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. SAPO introduces a versatile LLM alignment framework that can be used in a single trajectory setting, widening the applicability of alignment methods to diverse tasks.\n2. SAPO demonstrates strong performance in general compared to the default settings of DPO and ORPO.\n3. The paper presents ablations over various experimental axes, providing a better understanding of SAPO's mechanism."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents SAPO, an LLM alignment framework applicable to pairwise preference-based methods like DPO and ORPO with a gradually updated reference model and self-augmented preference pairs. Mistral and Llama models trained with the SAPO framework outperformed conventional offline alignment schemes on both instruction-following benchmarks and leaderboard benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**1. Impact of EMA and segment-level augmentation in SAPO**\n\nSAPO comprises two main techniques that differ from the conventional offline methods (DPO, ORPO): EMA and segment-level response augmentation. While the method is widely tested over different models and methods, the impact of the EMA strategy and hyperparameter choices has not been sufficiently studied. For example, the update coefficient $\\alpha$ and the EMA update frequency were fixed to 0.5 and 2 throughout the experiments. Also, the impact of segment-level augmentation is only partially studied, and only high-level interpretations are presented in Section 5, lacking an in-depth understanding of how and why it makes SAPO a strong method (which is connected to the questions below). Thus, the necessity of EMA and segment-level augmentation in SAPO is left unclear.\n\n**2. Clarity in experiments and ablations**\n\nSome explanations are not clear enough to follow. For instance, in the first paragraph of Section 5.3, it is unclear what the experiments on \"on-policy sampling\" mean: (1) samples drawn from the trained policy in the middle of training [1] are treated as rejected samples, (2) multiple responses are sampled from the policy that the trained policy is initialized from and somehow labeled as chosen/rejected [2], or something else. Also, it is unclear whether \"epoch\" in the training details of SAPO and \"iteration\" in Algorithm 1 are equivalent or distinct values. Including these two examples, overall, the paper does not precisely define some of its terminology.\n\n**References**\n\n[1] Direct Language Model Alignment from Online AI Feedback (Guo et al., 2024)\n\n[2] SimPO: Simple Preference Optimization with a Reference-Free Reward (Meng et al., 2024)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How do the authors address the potential discontinuities caused by segment-level supervision, and what impact do they anticipate these discontinuities might have on the overall model performance and generalizability?\n\n2. In the ablation study (Table 3), the segment-level augmentation shows varying effects on different benchmarks. Have the authors explored any adjustments to this augmentation strategy to mitigate these inconsistencies, or are there alternative augmentation methods they would consider?\n\n3. What is the rationale behind starting with epoch 3 rather than epoch 1 in Figure 2? \n\n4. Can the authors include results for SPIN in Figure 2 to strengthen the comparative analysis and provide more context regarding the effectiveness of their method?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The motivation to integrate EMA and replay buffer into the self-play pipeline is well-grounded. Experimental results across various benchmarks demonstrate the proposed method's effectiveness over contrastive and self-play baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce off-policy reinforcement learning (RL) techniques, including an Exponential Moving Average (EMA) model and a replay buffer, into self-augmented preference optimization, where only a dataset of desired responses is needed, while negative samples are generated by the model itself during optimization. The proposed approach aims to reuse timely feedback to train the model, thereby mitigating the delays often encountered in the traditional self-play training paradigm, which has separate sampling and training phases. Additionally, the authors propose a segment-level data augmentation strategy, where a segment of the full response is regenerated by the EMA model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. One crucial aspect of the self-play policy optimization paradigm is to construct negative samples that are generalizable. However, the segment-level supervision employed by the authors could lead to noticeable discontinuities due to the concatenation of the regenerated segment B' with the original segment C. This may introduce unintended bias in the preference learning stage, which could affect the model's ability to generalize.\n\n2. The paper lacks a direct comparison between the effect of curriculum learning and the approach of SPIN that alternates sampling and training phases. For instance, including a comparison with the SPIN method in Figure 2 could provide clearer insights into the effectiveness of this approach.\n\n3. The ablation study in Table 3 presents inconsistent results across different benchmarks. The proposed segment-level augmentation does not consistently outperform direct sampling, at least based on the current experimental results. Additional studies or further refinement of the augmentation technique could help clarify its benefits, e.g., investigating how varying the segment length affects performance across different benchmarks, or exploring alternative segmentation strategies that might lead to more consistent improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Did the authors conduct any experiments to show that the generated $y^{-}$ is indeed worse than $y^{+}$? For example, you might use some off-the-shelf reward model to compute the average rewards of $y^{+}$ and $y^{-}$."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The strengths of this paper are listed as follows:\n\n1. The paper proposes a very novel and interesting way to construct negative responses from a high-quality dataset.\n\n2. The experiments are conducted over a wide range of models and evaluation benchmarks, which makes the evaluation solid and comprehensive.\n\n3. The idea of leveraging an EMA model and a replay buffer is interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new method, SAPO, for aligning language models with human preferences. Previous methods either require labeled pairwise data or an external reward signal provider. SAPO overcomes this by constructing negative responses from the SFT dataset and then adapting DPO / ORPO to train the model. Experiments are conducted to empirically verify the performance of SAPO."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My first concern pertains to the high-level intuition behind the methodology. SAPO needs to generate negative samples to pair with the target responses in the SFT dataset. However, the motivation behind this is unclear. Specifically, SAPO generates negative responses by replacing segments of the SFT response with the model's own generation, which produces off-policy negative responses. The DPO / ORPO loss will then push the model to further penalize these responses. My concern is that, given that these responses are already unlikely to be generated by the model, what is the necessity of penalizing them further? What specific benefit does penalizing such off-policy data offer, especially when their likelihood of occurrence is already minimal?\n\nAdditionally, while SAPO introduces several improvements over SPIN (including a new method for negative sample generation, as well as the use of EMA and a replay buffer), these enhancements can also be integrated into SPIN. However, whether SPIN can benefit from these techniques, and how SPIN equipped with them compares to SAPO, both at a high level and empirically, is not sufficiently discussed.\n\nFinally, since SAPO's objective is ultimately to fit the SFT dataset responses, I believe that SFT itself should also be included as a baseline for comparison."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Our paper introduces Self-Augmented Preference Optimization (SAPO), a dynamic, scalable training paradigm that outperforms traditional methods by autonomously generating negative responses and integrating real-time data updates."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024selfaugmented,\ntitle={Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yizEOJVFFd},\nnote={under review}\n}"
},
"abstract": {
"value": "Traditional language model alignment methods, such as Direct Preference Optimization (DPO), are limited by their dependence on static, pre-collected paired preference data, which restricts their adaptability and practical applicability. To address this limitation, we introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data. Built upon the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to improve data exploration and exploitation. Specifically, we employ an Exponential Moving Average (EMA) model along with a replay buffer to enable dynamic updates of response segments, effectively integrating real-time feedback with historical data insights. Our comprehensive evaluations of the LLaMA3-8B and Mistral-7B models across benchmarks (including the Open LLM Leaderboard, IFEval, AlpacaEval 2.0, and MT-Bench) demonstrate that SAPO matches or surpasses established offline contrastive baselines, such as DPO and Odds Ratio Preference Optimization (ORPO), and outperforms offline self-play methods like SPIN."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Model",
"Fine-tuning",
"Self-play"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5e9b373f0bb3e7ab24dea515874441bacf7dc126.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yj6P8OdWyj | Open-Set Learning for Addressing Label Skews in One-Shot Federated Learning | main | Active | federated learning;open-set learning | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;5;5;5 | 4;3;4;3 | 2;3;3;2 | 2;2;3;2 | 3;3;3;2 | 4.5 | 3.5 | 2.5 | 2.25 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(1) The server updating procedure of Algorithm 1 could be further explained. Will the prediction on every testing example be executed on the server? Is the known confidence of the input $u^i(x)$ required to provide the ensemble prediction?\n\n(2) What are the implications of $\\alpha$ in Definition 3.5 and Definition 3.6? How will it affect the design of a practical algorithm, e.g., FedAdav without $\\alpha$?\n\n(3) In Table 1, FedAdav has a very large standard deviation on MNIST (#C=2). This phenomenon deserves further explanation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "**Originality:** It proved the learnability of one-shot FL ensembles with OSL. Specifically, this theoretical analysis highlighted the importance of OOD detection. By combining multiple OSL methods, the proposed FedAdav algorithm achieved superior performance than baselines.\n\n**Quality:** Theorem 3.7 showed the impact of OOD detection function in proving the learnability of one-shot FL ensembles. Experimental results confirmed the effectiveness of the combination of multiple OSL methods in one-shot FL problems.\n\n**Clarity:** The paper was well-written. The motivation of the proposed theory and algorithm is clearly illustrated.\n\n**Significance:** It provides theoretical supports for analyzing one-shot FL ensembles with OSL."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studied open-set learning (OSL) for one-shot federated learning under label skews. It proved the learnability of one-shot federated learning ensembles with open-set learning. Then it introduced the FedAdav algorithm to combine multiple OSL approaches. Experimental results showed that FedAdav could outperform SOTA baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) The connection between the theoretical analysis and the proposed FedAdav algorithm is weak. Theorem 3.7 shows that OSL is key to improving the performance of one-shot FL. However, it generally holds for all OSL methods, including the baselines FedOV and RotPred. It is unclear what properties an OOD detection approach requires to enhance one-shot FL.\n\n(2) The proposed FedAdav is a simple combination of FedOV and RotPred. As illustrated in Section 4.1, both approaches have some limitations. It is not explained why the simple combination of FedOV and RotPred can address these limitations. Moreover, it is unclear why the combination of FedOV and RotPred enables better learnability of one-shot FL ensembles in Theorem 3.7.\n\n(3) The parameter sensitivity of FedAdav is not analyzed. There are two key hyperparameters: $T_{check}$ and $\tau$. Both affect how the FedOV and RotPred loss functions are applied in the proposed algorithm."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. FedAdav simply combines the loss functions from FedOV and RotPred. What is the purpose of doing so? How does this enhance the method's capability?\n2. The trade-off parameters in FedAdav are derived through experimental results. How can you ensure that the same parameter settings would work effectively if the dataset or model changes?\n3. The experiments only use very simple CNN models, such as LeNet, on simple datasets. Can such a simple setup properly evaluate the impact of OSL in one-shot FL? Why did the authors not use more sophisticated models, such as Transformer-based models, on more complex datasets?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper includes a comprehensive theoretical analysis proving the utility of OSL in the context of one-shot FL.\nExtensive experimental comparisons with multiple baselines are provided."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the effectiveness of open-set learning (OSL) in improving one-shot federated learning (FL), especially when facing label skews. The authors provide a theoretical proof of OSL's benefits and propose a new method, FedAdav, combining features from previous works."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The proposed method lacks innovation, as it is simply a combination of the loss functions from two existing works (FedOV and RotPred).\nThe experiments use simple models and datasets, making it difficult to effectively validate the proposed method's robustness and general applicability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please carefully address W1."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper introduces a novel approach, FedAdav, that combines multiple OSL methods, enhancing the handling of label skew in one-shot FL. This combination of OSL signals is innovative for managing highly imbalanced class distributions across federated clients.\n\n2. The theoretical analysis is rigorous, with proofs supporting the benefits of using OSL for addressing label skew in one-shot FL. \n\n3. The paper is well-organized, presenting both theoretical foundations and empirical results to support the proposed approach. The steps in the algorithm and experiment setups are clearly described."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the challenge of label skew in one-shot federated learning (FL), where clients communicate with the server only once, and class distributions across clients are imbalanced. Existing open-set learning (OSL) methods, like FedOV, help by identifying unknown samples but are limited in flexibility. The paper proposes an adaptive algorithm, FedAdav, combining multiple OSL signals to improve accuracy under label skew. The theoretical contribution proves the learnability of one-shot FL with OSL, and extensive experiments show FedAdav's effectiveness in enhancing performance over other state-of-the-art (SOTA) OSL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My biggest concern is about the setting of this paper, which only considers one-shot FL. In a more general case, we usually have multiple communication rounds and the model is aggregated in each round. However, the paper only focuses on one-shot FL, where clients communicate only once and the models are not aggregated. Instead, only the model predictions are aggregated. This limitation may restrict the method's effectiveness (at least on the theoretical side) in general multi-round communication settings.\n\n2. Limited Real-World Data Testing: While the experiments use standard datasets like MNIST and CIFAR, these do not fully represent the complexities of label skew in real-world FL applications, such as varying regional disease prevalence in healthcare. Adding real-world data (if any) could strengthen the practical relevance of FedAdav."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors provide further insights into the scenarios where FedAdav might underperform compared to other methods? The paper could benefit from a more detailed discussion on the limitations of the proposed method, including scenarios where FedAdav might not perform as expected."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper theoretically proves that closed-set learning cannot effectively address label skews, whereas integrating OSL into FL could ensure the learnability of one-shot federated learning, which is the main contribution and a significant exploration.\n- The theoretical analysis and empirical results are of reasonable quality, providing support for the proposed method.\n- The paper is generally well-structured, but some sections could benefit from more detailed explanations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the issue of label skews in one-shot federated learning (FL) by integrating open-set learning (OSL) techniques. The authors provide a theoretical analysis proving the learnability of one-shot FL ensembles with OSL algorithms and propose FedAdav, an adaptive algorithm that combines multiple OSL signals to improve ensemble accuracy under label skews. Extensive experiments demonstrate that FedAdav outperforms state-of-the-art OSL algorithms in severe label skew conditions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper makes a reasonable contribution to the field of federated learning, yet there are areas that could be improved.\n- Although the theoretical part seems solid, the proposed algorithm appears somewhat incremental, which shares similarities with existing OSL methods. I suggest the authors to explore the algorithm's performance in other federated learning challenges such as data heterogeneity with different Dirichlet distributions to demonstrate its broader application potential.\n- The empirical results are somewhat limited and could be strengthened by additional experiments on more diverse datasets or real-world applications. I recommend the authors conduct experiments in practical application areas such as natural language processing to validate the algorithm's effectiveness in different domains. Additionally, consider testing on more challenging datasets, such as large-scale image datasets and multilingual text datasets, to demonstrate the algorithm's robustness and generalization capabilities.\n- The presentation of the theoretical proofs could be made more accessible to readers who are not experts in the field. I suggest the authors add more intuitive explanations at key steps, such as using charts to illustrate the proposed algorithm."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We study the theory and experiments on open-set learning for label skews in one-shot federated ensemble."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024openset,\ntitle={Open-Set Learning for Addressing Label Skews in One-Shot Federated Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yj6P8OdWyj},\nnote={under review}\n}"
},
"abstract": {
"value": "Federated learning (FL) is crucial for collaborative model training, yet it faces significant challenges from data heterogeneity, particularly label skews across clients, where some classes may be underrepresented or absent entirely. In one-shot FL, where clients only communicate with the server once, this problem becomes even more challenging. Recent solutions propose incorporating open-set learning (OSL) to tackle this issue by detecting unknown samples during inference, but current methods like FedOV lack adaptability to varying client data distributions. In this paper, we provide a theoretical analysis proving that improving OSL algorithms can effectively address label skews in one-shot FL, since one-shot FL is learnable through good OSL algorithms regardless of label skews. We also empirically evaluate state-of-the-art OSL algorithms and identify their limitations. Based on these insights, we propose FedAdav, an adaptive algorithm that combines OSL signals to significantly improve ensemble accuracy in one-shot FL under label skews. Through extensive experiments, we demonstrate that exploring better OSL is key to overcoming label skew challenges in federated learning."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"federated learning",
"open-set learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6ac6434d8b21591937ca84fcd668caf89564d366.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/3a19bad288d903f0e961395da58314ecff7f71d2.zip"
},
"title": {
"value": "Open-Set Learning for Addressing Label Skews in One-Shot Federated Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yj9lLwMjnE | UniWav: Towards Unified Pre-training for Speech Representation Learning and Generation | main | Active | speech foundation model;generative pre-training;self-supervised learning;speech generation;speech tokenization | applications to computer vision, audio, language, and other modalities | 3;5;6;6;8 | 4;3;5;4;3 | 2;3;3;3;3 | 2;3;2;2;3 | 3;3;3;3;4 | 5.6 | 3.8 | 2.8 | 2.4 | 3.2 | -0.230283 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Can the method proposed in this paper be applied to general audio such as sound and music?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper introduces the first unified pre-training framework (UniWav) for speech representation learning and generation.\nUniWav can compete with different foundation models with low bitrate speech tokenization and high-quality resynthesis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "UniWav is an encoder-decoder framework designed to unify pre-training representation learning and generative tasks. With the appropriate design choices for pre-training, UniWav can jointly learn a representation encoder and generative audio decoder that can be applied to both types of tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The results of the method in this paper do not show an advantage over existing methods in speech recognition and speech generation. \n2. In the speech tokenization section, there is a lack of experiments related to the modeling performance of the speech language model (LLM-TTS) as mentioned in SpeechTokenizer[1]. Such experiments could effectively evaluate the potential of the tokenizer proposed in this paper when applied to autoregressive speech synthesis.\n[1]"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the experimental section provide evidence for the claim that simultaneous pre-training on speech representation learning and generation enhances performance compared to pre-training on only one task, and what justifies the assertion of reduced overhead and cost?\n\n2. Could you include comparisons with models like Natural Speech 3, Funcodec, Encodec, DAC, and DinoSR to improve the evaluation of the proposed methods in speech generation and tokenization tasks?\n\n3. Could you incorporate additional subjective and objective metrics, such as Mean Opinion Score (MOS) and Mel/STFT distance, provide a more comprehensive assessment of model performance?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper conducts experiments across multiple tasks like speech recognition, text-to-speech, and tokenization. It includes analyses, such as ablation studies and mutual information metrics.\n2. The paper is well-organized and clearly presents technical details, such as the encoder-decoder structure and the Flow Matching method, making it easy to follow. Visual aids and concise explanations further contribute to the clarity of the complex concepts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes UniWav, a unified pre-training framework for speech representation learning and generation. Traditionally, pre-training models for speech have been specialized either for discriminative tasks, like speech recognition, or for generative tasks, such as text-to-speech. UniWav aims to bridge this gap by integrating both functions into a single encoder-decoder model. The encoder is responsible for learning robust speech representations, while the decoder generates speech through Flow Matching."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Firstly, I have some doubts regarding the motivation and novelty validation in this paper. The introduction states that ideally, speech representation learning and generation can mutually enhance each other and that this approach can reduce the overhead and cost of pre-training. However, I did not find evidence for these conclusions in the experimental section. Specifically, it seems that the paper does not address whether pre-training on just one task (either speech representation learning or generation) would yield better performance on downstream tasks compared to simultaneous pre-training on both tasks. Additionally, the model's performance on downstream tasks does not appear to be very strong; for instance, its performance on the speech recognition task is worse than that of other baselines. So the experiments can not show that the overhead and cost can be reduced. Because there is neither proof that pre-training on both tasks together is better, nor evidence that there are significantly better results on a single downstream task. If these advantages cannot be clearly demonstrated, then why combine the two tasks? Merely putting speech representation learning and speech generation together without thoroughly explaining the rationale and benefits of this approach significantly limits the contributions of the paper. Overall, the results seem to contradict the stated motivation and novelty, or these points have not been well validated. \n\n2. Secondly, the selection of baselines in the paper is quite limited, especially for the speech generation and tokenization tasks. Speech generation could be compared with Natural Speech 3 [1], while the speech tokenization task could benefit from comparisons with the latest models like Funcodec [2], Encodec [3], and DAC [4]. 
Furthermore, the paper should include comparisons with the experimental results of DinoSR for both the speech recognition and tokenization tasks, as the encoder component of the paper is primarily based on the DinoSR model.\n\n3. Lastly, the metrics used for evaluating downstream tasks are insufficient. For example, in the speech generation task, subjective evaluations such as Mean Opinion Score (MOS) should be included, as this is a critical metric. For speech tokenization, additional metrics related to speech reconstruction, such as Mel/STFT distance, could be incorporated.\n\n[1] Ju, Zeqian, et al. \"Naturalspeech 3: Zero-shot speech synthesis with factorized codec and diffusion models.\" arXiv preprint arXiv:2403.03100 (2024).\n\n[2] Du, Zhihao, et al. \"Funcodec: A fundamental, reproducible and integrable open-source toolkit for neural speech codec.\" ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024.\n\n[3] Défossez, Alexandre, et al. \"High fidelity neural audio compression.\" arXiv preprint arXiv:2210.13438 (2022).\n\n[4] Kumar, Rithesh, et al. \"High-fidelity audio compression with improved rvqgan.\" Advances in Neural Information Processing Systems 36 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Based on Figure 2, have the authors considered only conditioning the decoder on the first, e.g., 12 layers of the encoder? \n2. Can the authors clarify how speaker similarity is computed? Is it the average cosine similarity between the WavLM speaker embedding of pairs of ground-truth and synthesized utterances? If so, I think it would make sense to share the standard deviation as well. \n3. Can the authors comment on their batch size and how they are sampled (e.g., like wav2vec 2.0)? This would help with reproducibility, and figuring out how much data is seen throughout pre-training, and where UniWav lies in Figure 2 of the DinoSR paper (trade-off between performance and data efficiency)\n4. Line 233, \"sinusoidal pos. enc. is appended to the input.\" Is this appended to the feature dimension or the time dimension? Isn't it normally the case that positional embeddings are summed with the input features? Can the authors comments on this design decision?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "### Originality\n\nThe method proposes a new speech SSL method, combining existing methods DinoSR (encoder-only) and SpeechFlow (encoder-decoder) . This method has strong performance on generative and discriminative tasks compared to foundation models which are generative-only or discriminative-only. \n\n### Quality\n\nThe method is evaluated on multiple speech technology tasks, and a small ablation study is performed for further insights. \n\n### Clarity\n\nThe paper is well-written, easy to follow, and appropriately places itself into existing literature.\n\n### Significance\n\nThis work will definitely spark future work in the speech community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a self-supervised speech representation learning objective which combines 1) masked prediction with online clustering from an EMA teacher model (DinoSR) to train a transformer-encoder network and 2) reconstructing noise-inserted input data based on the encoder representations with Flow Matching to train a transformer-decoder network. The aim of this method is to unify the creation of foundation models for discriminative tasks (such as ASR) and generative tasks (such as TTS). \n\nThe method is evaluated by pre-training on 60k hours of LibriLight, and fine-tuning for speech recognition, speech synthesis, and speech tokenization and resynthesis, on Librispeech. \n\nFor speech recognition, they show limited degradation of performance compared to SSL methods like HuBERT, WavLM, and data2vec. \nFor speech synthesis, they show performance matching contemporary models like VoiceBox and SpeechFlow. \nFor speech tokenization and reconstruction, performance exceeds SpeechTokenizer and HuBERT+HifiGAN. \n\nThe method is also ablated on the encoder and decoder depth, which shows that a 12-layer encoder does not benefit from adding the decoder objective, while a 24-layer encoder does benefit. Moreover, it is shown through a mutual-information analysis that their encoder has different characteristics on how speaker and speech information is processed compared to HuBERT."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "### domain\nThis works only evaluates on Librispeech, i.e., the audio book domain, which has very homogenetic speaker conditions. The observation that ASR performance closely matches HuBERT, WavLM, etc, does not take into account the robustness these models have to other domains. The authors could discuss this limitation or add experiments with the SUPERB benchmark.\n\n### modelling \nAs the proposed method is an extension of DinoSR, it would be nice to see UniWav w/o decoder, as in Table 3 for 200k steps, in Table 1 with 600k steps of pre-training. Moreover, Speechflow also uses an encoder-decoder model. This encoder could in theory be fine-tuned for ASR. The original work did not do this. Can the authors comment on expected results when the SpeechFlow encoder would be fine-tuned for ASR, and how this would compare to UniWav? Could this be discussed in a future work section?\n\n### speaker recognition\n\nOne of the central claims is that UniWav bridges the inherent orthogonality between speech recognition, which normalizes over speaker and environment information, and speech synthesis, which requires speaker and environment information. This is touched upon slightly by the mutual information analysis, where UniWav is seen to lose speaker information through the encoder layer. It would seem to me that UniWav will not be better for the speaker recognition task than, e.g., WavLM, based on the speculation that most environment and speaker information is stored in the decoder. I think evaluating UniWav on the (SUPERB) speaker recognition and speaker verification task would strengthen the claim significantly. I think for now, this limitation should be made more explicit in the limitation section, or if possible, the authors could perform additional experiments on SUPERB.\n\n### Minor comments\n\n1. line 215: Encodec has not been introduced yet, so cite it here. 
I find this paragraph confusing due to missing context on how the network uses Encodec as input features (line 239) instead of Mel spectrograms as suggested in line 108. \n2. line 236: \\proposed~is\n3. line 242: in~\\ref \n4. line 358: kmeans instead of k-means\n5. line 370: we follow...run k-means, use e.g., by first identifying .. on which to run"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "On line 108, the authors claim that the model uses mel spectrograms as input, but on lines 239–240, they mention using Encodec features instead."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well motivated to jointly pre-train for discriminative and generative tasks. The paper shows SOTA, if not comparable, results on the generative tasks and lag a little on the discriminative task.\nThey show very good results when tokenizing speech using UniWav, compared to other works in the literature. \nThe paper is well written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces UniWav, a new pre-training framework focused on both speech representation learning for discriminative task and for generation task. Using an encoder-decoder architecture, UniWav learns speech representations and generates speech together, letting it perform well in both discriminative tasks and generative tasks. UniWav aims to unify the approach, simplifying the speech processing pipeline and reducing need for many specialized pre-trained models. Experiments in speech recognition, text-to-speech synthesis, and speech tokenization show UniWav's effectiveness, achieving results competitive with other top models for specific tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors propose UniWav, a pre-training method for jointly learning representations for both discriminative and generative tasks in speech processing. This method uses an encoder-decoder architecture to learn effective representations as part of the training process.\n\nFor the discriminative task of speech recognition, the authors fine-tune the encoder on 100-hour and 960-hour splits, showing competitive results compared to other methods. However, they do not provide results for low-resource scenarios (e.g., 10 hours or 1 hour of data) or results using a frozen encoder (SUPERB benchmark). These two setups are essential to support their claim that the learned encoder representations are effective for speech recognition tasks. Because with large finetuning data, effect of pre-training is less. \n\nFigure 2 shows that the encoder’s full capacity is not optimized for learning features specific to speech recognition. UniWav achieves the highest mutual information (MI) on layer 10 out of 24, which then gradually decreases. This pattern suggests that the model divides its capacity between the discriminative and generative tasks, similar to observations in WavLM and prior works. This could explain UniWav’s strong performance in speech generation tasks, as a significant portion of the model’s capacity is allocated to optimize for generation. This trend is consistent across the paper’s reported results.\n\nRegarding speech tokenization, it’s unclear if the comparison is entirely fair. For speech tokenization, the input to the SpeechTokenizer is raw audio, while for UniWav, the input is Encodec encoder features, which are further transformed by UniWav’s 10-layer encoder. It is as if UniWav is heavily overparameterized and most of the parameters are used for generation task."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "why \"For speech recognition with a shallow encoder, we found introducing the decoder degrades WER regardless of the size. Interestingly, an opposite trend is observed when the encoder size is doubled\"? This is something requires more analysis and explanation.\n\nIn section 3.1, \"We extract the prequantized latent feature from a 16kHz EnCodec (D´efossez et al., 2022) encoder, pre-trained on the same dataset, to serve as the input for our model.\" This is contradictory to the claim in early sections that the input to your model is mel spectrum? In addition, this makes the comparison with other approaches unfair since you used another model as the pre-processing module (i.e., total model size is actually increased significantly)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main contribution of this work is the combination of DinoSR and a flow matching model to train a unified representation model. While not considered significant, this special setup is somewhat novel.\n\nThe paper contains sufficient experimental results to support their claims although some setups are questionable (in detail below). \n\nThe paper clearly indicated the limitations of the work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a framework named UniWav. It is an encoder-decoder framework aimed at unifying pre-training representation learning and generative tasks. The authors claimed that this is the first such framework and achieves comparable performance to different existing foundation models, each trained on a specific task, on speech recognition, text-to-speech, and speech tokenization tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The claim their work is \"the first unified pre-training framework for speech representation learning and generation\" is arguable. For example, the paper \"Speechgptgen: Scaling chain-of-information speech generation.\" introduced an approach to train a discrete representation that works for both understanding and generation. The paper \"Moshi: a speech-text foundation model for real-time dialogue\" further improved the approach. \n\nThe presentation needs to be improved. \n1. the DinoSR part in Figure 1 is confusing since the softmax layer is not clearly shown since the encoder, according to the text description, does not include that layer. In contrast, the figure in the original DinoSR paper is very clear. \n2. The majority of Section 2.1 is describing DinoSR. However, notation is not clearly explained sometimes. for example, what is \"k\" in s^k_v right below eq. 3? it's unclear why Euclidean distance (instead of COS distance) is used given that Cos distance is usually more preferred in higher-dim spaces. \n3. notation \"z\" is overloaded in eq 9.\n4. the footnote under Table 1 is not clear. alignment is only used for generation tasks?\n5. eq 11 is wrong? argmin returns an index instead of a representation.\n\nWhen researchers train a representation model they usually keep the encoder fixed when using them in downstream tasks such as ASR. however, in this work the encoder is finetuned. This causes the claims weaker. Similarlly if flow matching is used in the generation part the quality of the generated speech will of course become better. However, this gain comes from the flow matching model instead of the way the representation is learned. Some clarification here is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "First unified pre-training method for speech representation learning and generation"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024uniwav,\ntitle={UniWav: Towards Unified Pre-training for Speech Representation Learning and Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yj9lLwMjnE},\nnote={under review}\n}"
},
"abstract": {
"value": "Pre-training and representation learning have been playing an increasingly important role in modern speech processing. Nevertheless, different applications have been relying on different foundation models, since predominant pre-training techniques are either designed for discriminative tasks or generative tasks. In this work, we make the first attempt at building a unified pre-training framework for both types of tasks in speech. We show that with the appropriate design choices for pre-training, one can jointly learn a representation encoder and generative audio decoder that can be applied to both types of tasks. We propose UniWav, an encoder-decoder framework designed to unify pre-training representation learning and generative tasks. On speech recognition, text-to-speech, and speech tokenization, \\proposed{} achieves comparable performance to different existing foundation models, each trained on a specific task. Our findings suggest that a single general-purpose foundation model for speech can be built to replace different foundation models, reducing the overhead and cost of pre-training."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"speech foundation model",
"generative pre-training",
"self-supervised learning",
"speech generation",
"speech tokenization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/043d484a53d1bb42e7cdc6c81e226e7fc074cecc.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "UniWav: Towards Unified Pre-training for Speech Representation Learning and Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ykD8a9gJvy | Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation | main | Active | generative keyframe interpolation;image-to-video diffusion models | applications to computer vision, audio, language, and other modalities | 6;6;6;6 | 4;4;4;3 | 3;3;4;3 | 3;3;3;2 | 3;3;3;2 | 6 | 3.75 | 3.25 | 2.75 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Given the model's limitations with non-rigid motions, did the authors explore any alternative solutions, such as enforcing additional temporal consistency constraints or incorporating motion priors for articulated objects?\n\nWhile the quantitative results are promising, did the authors consider conducting a user study to assess perceived motion realism, as subjective assessments might capture nuances that FID/FVD cannot?\n\nCould the authors elaborate on how sensitive the model's performance is to the choice of the 180-degree rotation in the self-attention map? Did they experiment with other configurations for reversing the temporal interaction?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Strengths\nThe paper’s fine-tuning approach makes effective use of a pre-trained model (SVD) to generate backward motion without requiring extensive additional data or full retraining. This demonstrates an efficient approach to model adaptation.\nBy developing forward-backward motion consistency through temporal self-attention, the method generates smooth and coherent transitions, especially in scenarios with long differences between keyframes. \nThe paper provides good experimental results, using both qualitative comparisons and metrics like FID and FVD to validate performance improvements over established baselines (FILM, TRF, etc.).\nAblations explore the impact of various components and the paper transparently discusses limitations, providing clarity on the model's boundaries, especially with non-rigid motion types."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Generative Inbetweening, a method for creating intermediate frames between two keyframes by adapting a pre-trained image-to-video diffusion model. This model adapts Stable Video Diffusion with dual-directional diffusion: generating video frames that interpolate both forwards and backwards in time. \nThis approach achieves motion-coherent inbetween frames through a technique that involves reversing the temporal self-attention maps within the U-Net model to generate backward motion from the endpoint keyframe, then combining this with forward-motion frames to produce smooth video sequences.\nEvaluations on the Davis and Pexels datasets show the method’s performance against the existing techniques, including TRF and FILM, in terms of frame coherence and motion fidelity for larger motion gaps."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the paper includes comparisons with baseline models, it lacks an in-depth discussion on the unique metrics or benchmarks used to capture differences between models, particularly in subjective aspects like motion realism. Including a more detailed discussion on why certain metrics (e.g., FID or FVD) were selected over others could clarify the relevance of the performance gains.\n\nThe model relies heavily on SVD’s motion priors, which, as the authors note, can struggle with non-rigid or complex kinematic movements. \nWhile the paper acknowledges this, further discussion on how future models might address such limitations, possibly by incorporating other motion datasets or additional temporal constraints, would add depth to the future directions.\n\nAlthough the fine-tuning approach is a strength, it may be challenging for readers unfamiliar with diffusion models to follow the model adaptation process fully. More visual aids or pseudocode detailing the fine-tuning and dual-directional sampling steps would enhance clarity."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. As noted in the weaknesses, I observed that in the cases provided, the camera and object movements between keyframes are slight. Can this method still perform well when there is a significant difference between the given keyframes? Additionally, when the keyframe difference is large, the backward generation may be unable to reuse the rotated attention matrix from the forward generation, potentially causing large discrepancies in frames generated at the same time step. In such cases, can fusion still work effectively?\n2. This generation pipeline seems to require a substantial number of corresponding points between keyframes. Beyond the issue of low overlap mentioned earlier, I’m also curious whether the method could still generate smooth transitions if, for example, one object in the keyframes—such as a fish in the ocean—undergoes a mirrored flip, meaning every point has a mapped counterpart but with an orientation change.\n3. The paper adopts simple averaging for intermediate frame fusion (line 281), but intuitively, frames generated closer to the initial keyframe might exhibit higher quality. Why not use weighted averaging instead? For example, linearly blending frames based on their proportional distance from each keyframe might yield smoother transitions and higher quality."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "This paper fills a gap in the field of large-scale video generation, specifically keyframe interpolation, at a related lower cost. As summarized earlier, this paper presents a novel pipeline to generate synchronized frames and targeted frame fusion techniques to achieve smooth transitional videos."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on the keyframe interpolation problem, which has been overlooked in existing large-scale video generation models. The article proposes a solution to this task by treating keyframe interpolation as a forward video generation from the first frame and a backward generation from the last frame, followed by a coherent fusion of the generated frames. Based on this, the paper reuses existing large-scale image-to-video models to obtain a video generation model for backward motion by reversing temporal interactions. Additionally, it uses sampling techniques to blend paired frames generated by the forward and backward temporal directions with synchronized paths, producing intermediate frames."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The keyframes shown in the paper have relatively small motion ranges and require extensive pixel mapping; otherwise, obvious artifacts occur (as mentioned in the limitations), making this approach unsuitable for large-scale object or camera movements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please kindly refer to the Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is clear and easy to understand, with well-presented motivation and methodology.\n- The proposed method is novel, straightforward, and effective, demonstrating improvements over the selected baseline interpolation methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The author proposes a novel method for distant keyframe interpolation, leveraging pretrained image-to-video diffusion models. This approach generates intermediate frames by predicting and fusing forward and backward video sequences, conditioned respectively on the given start and end frames. The author introduces a lightweight fine-tuning technique to tackle the key challenge of predicting backward video sequences from the end frame. Additionally, a dual-directional diffusion sampling strategy is employed to effectively fuse noise in both forward and backward directions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Further ablation studies on the proposed method could explore:\n1. **Training Dataset Scale**: In the paper, the model is fine-tuned with only 100 videos. It would be interesting to investigate how the scale of the training dataset affects the model’s performance.\n2. **Fine-tuning Modules**: The paper fine-tunes only the value and output projection matrices in the self-attention layers of the backward framework. Since there might be a gap for the forward motion in the context of the image-to-video task and the interpolation task, it would be worth exploring whether the interpolation performance could be further improved by fine-tuning both the forward and backward framework matrices while preserving the attention map rotation operation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- There are three publicly available weights for Stable Video Diffusion (img2vid, img2vid-xt, img2vid-xt-1-1). Which of these weights did the authors use?\n\n- Stable Video Diffusion applies different classifier-free guidance scales to each frame. Did the authors use the same approach in this paper?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This method is both parameter-efficient and data-efficient, making it highly effective even with limited resources.\n\n- It leverages an open-source model, which enhances its accessibility and contributes to the broader video interpolation research community.\n\n- It demonstrates superior qualitative and quantitative performance compared to FILM, a well-known method for large motion interpolation, as well as TRF, which also uses Stable Video Diffusion.\n\n- In Section 5, the qualitative results are thoroughly explained, clearly highlighting the strengths of this approach in various aspects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the keyframe interpolation problem by leveraging the large-scale image-to-video diffusion model, Stable Video Diffusion, to generate frames between a pair of keyframes. \n\nUnlike traditional image-to-video models that generate frames in a forward-moving manner, this paper proposes finetuning the model for backward-moving videos and utilizing both the original and finetuned models together during inference. \n\nTo leverage the knowledge from the forward-moving model, only the value and output projection matrices of the 3D self-attention layers are trained, and the attention maps from the forward-moving videos are rotated by 180 degrees and inserted into the finetuned backward-moving model. \n\nDuring inference, the attention maps generated by the forward-moving model are rotated and applied to the finetuned backward-moving model, and the predictions from both models are fused. \n\nThis approach demonstrates superior performance over FILM and TRF on the Davis and Pexels datasets, despite being trained on only 100 videos."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Video interpolation performance can vary significantly based on FPS and the magnitude of motion, but the paper does not provide any analysis of these factors. Besides the motion bucket ID mentioned in the paper, Stable Video Diffusion also takes FPS as a condition. The paper would benefit from demonstrating whether the method still outperforms FILM and TRF when varying the motion bucket ID and FPS during finetuning and inference.\n\n- The proposed method requires both the base forward-moving model and the finetuned backward-moving model during both training and inference, making it more computationally intensive compared to a baseline of \n fine-tuning on video interpolation dataset."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024generative,\ntitle={Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ykD8a9gJvy},\nnote={under review}\n}"
},
"abstract": {
"value": "We present a method for generating video sequences with coherent motion between a pair of input keyframes. We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input image) for keyframe interpolation, i.e., to produce a video between two input frames. We accomplish this adaptation through a lightweight fine-tuning technique that produces a version of the model that instead predicts videos moving backwards in time from a single input image. This model (along with the original forward-moving model) is subsequently used in a dual-directional diffusion sampling process that combines the overlapping model estimates starting from each of the two keyframes. Our experiments shows that our method outperforms both existing diffusion-based methods and traditional frame interpolation techniques."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"generative keyframe interpolation",
"image-to-video diffusion models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e1142e3c8ea4f762b7354c1eae008048722f605c.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8c91d7dd13923cc570777edb10c41c551c8600e2.zip"
},
"title": {
"value": "Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yklJpvB7Dq | Label-Free Coreset Selection with Proxy Training Dynamics | main | Active | Coreset Selection;Data pruning;Label free coreset selection | other topics in machine learning (i.e., none of the above) | 5;6;6;8 | 4;3;2;3 | 3;3;3;4 | 2;2;3;3 | 3;3;2;4 | 6.25 | 3 | 3.25 | 2.5 | 3 | -0.324443 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Given the grid search used to determine optimal hyperparameters, could a deeper analysis reveal why certain values work best for measuring sample difficulty? Specifically, how do these parameters influence the balance between easy and hard examples selected for the coreset, and could this inform a more consistent method for tuning them?\n\n2. ELFS currently uses SwAV and DINO as feature extractors for clustering. Would more powerful encoders, such as CLIP, improve the quality of pseudo-labels or provide more stable performance across datasets? Additionally, what effect might these alternative encoders have on the distribution of selected hard and easy examples?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. ELFS effectively addresses the limitations of previous label-free coreset selection approaches, providing a feasible solution that leverages deep clustering for pseudo-labeling.\n\n2. By employing double-end pruning, ELFS improves the selection of informative samples, achieving consistent performance improvements over baselines, even in challenging scenarios.\n\n3. The evaluation across multiple datasets and pruning rates, along with an ablation study, showcases ELFS's flexibility and robustness, which may benefit a range of vision tasks.\n\n4. The authors show that including more challenging samples enhances model performance, with ELFS effectively prioritizing hard examples through double-end pruning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents ELFS (Effective Label-Free Coreset Selection), a method designed to improve label-free coreset selection by estimating data difficulty scores without requiring ground truth labels. The authors tackle challenges in label-free selection by employing pseudo-labels from deep clustering to approximate training dynamics and mitigate distribution shifts with a double-end pruning technique. ELFS shows superior performance over existing label-free methods across various vision benchmarks (e.g., CIFAR10, CIFAR100, and ImageNet-1K) and achieves results close to those of supervised selection methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experiments involve numerous hyperparameters, optimized through grid search. A more in-depth analysis of the underlying reasons behind these optimal values would strengthen the understanding of how different parameters affect the measurement of sample difficulty, offering clearer insights into the importance of hard examples.\n\n2. The approach heavily relies on feature extractors like SwAV and DINO for clustering. It remains unclear if using more advanced encoders, such as CLIP, could further improve performance or stability, suggesting potential limits in ELFS's generalizability with different encoders."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please consider responding to the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. It is an elegant and effective idea to estimate the data difficulty score through deep clustering. This handles the challenge to measure the prediction uncertainty and sample difficulty without any human labels.\n\n2. The proposed method is evaluated on multiple classification benchmark, showing notable performance gain compared with state-of-the-arts.The design of each module is well justified through ablation studies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new label-free coreset selection algorithm called ELFS to relieve the costly human annotation efforts. ELFS utilizes the deep clustering to generate pseudo-labels and estimate data difficulty scores. Afterwards, a double-end pruning method is introduced to mitigate the bias of data difficulty scores. Experiments show that ELFS can surpass previous label-free coreset selection baselines on several benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My major concern lies in the selection of hyper-parameter $\\beta$. I can understand they require some grid search for hyper-parameters. However, according to Fig. 5, the optimal value is different for multiple datasets or sampling ratios, which is quite inefficient. For example, if there is a large dataset with millions of images, it is infeasible to do grid search on it.\n\n2. Based on Tab. 7, it is quite strange that ResNet50 cannot outperform ResNet18 on the selected subset. I assume it reasons from the simplicity of CIFAR10. Maybe the authors can do the transferability experiments on complex datasets like ImageNet since it is a main difference between corset selection and active learning. \n\n3. For Sec. 4.1, I assume the formulation of label-free coreset selection is already covered in previous work. It may be moved to Sec. 3 for clarity."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "ELFS presents a compelling label-free coreset selection method that reduces the need for extensive and costly labeled datasets while achieving accuracy close to supervised methods. By effectively utilizing pseudo-labels, ELFS not only significantly outperforms other label-free baselines but also exhibits strong performance despite the inherent inaccuracies and noise associated with pseudo-labels. Moreover, the method demonstrates robustness and versatility, showing good transferability across different datasets and model architectures, thereby enhancing its applicability in diverse machine learning tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel method called ELFS (Effective Label-Free Coreset Selection) for selecting coresets without relying on labeled data. This approach uses pseudo-labels derived from deep clustering to approximate training dynamics, enabling the estimation of data difficulty scores. These scores help identify coresets that can be labeled for training high-performance models while minimizing human annotation costs. ELFS addresses the significant performance gap typically found in label-free coreset selection by introducing a double-end pruning technique to manage the distribution shift caused by pseudo-label inaccuracies. This method shows notable improvements in various vision benchmarks over existing label-free methods, demonstrating its ability to approximate the effectiveness of supervised coreset selection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The ELFS method is quite effective, but it mainly builds on familiar techniques like pseudo-labeling and coreset selection. This might make it seem less novel or groundbreaking to those familiar with the field. Despite this, it does a great job using these methods to ensure high accuracy and reliability.\n\nMoreover, to really show how well ELFS works and to expand its use, it would be beneficial to test it on a wider variety of datasets. This includes tackling larger and more complex datasets such as ImageNet, as well as datasets with uneven distributions or long tails. Testing ELFS in these contexts would help validate its effectiveness across different challenges and environments.\n\nPotential Application Areas for ELFS: Beyond vision tasks, are there other types of data or tasks where ELFS could be effectively applied? Exploring its adaptability to different domains like text, audio, or even structured data could open up new applications.\n\nExplanation of Hard and Easy Examples in Section 4.4.2: Could a visual representation or graph be used to clarify the difference between hard and easy examples as discussed in the section? Visual aids could help illustrate how ELFS handles these types of data, enhancing understanding of its approach.\n\nAnalysis of Data Distribution in Table 1: Is it possible to analyze further how the data distribution of the coreset selected by Random compares to that selected by ELFS? Understanding the differences in selection criteria and resulting coreset characteristics could provide deeper insights into the strengths and limitations of ELFS compared to simpler random sampling methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Please explain why double-end pruning helps the performance.\n2. Do you fine-tune the model with the coreset? Or do you train the model from scratch?\n3. Do you use only the coreset to train the model? It would be better to show the result of using the coreset as the labelled set and the rest data as the unlabelled set to train a model with a semi-supervised learning algorithm such as SemiReward[1]. If, with the help of semi-supervised learning, a randomly sampled labelled set achieves good performance, and the labelled set selected by your model yields similar performance, then the benefits of using a coreset to train the model need to be clarified.\n```\n[1] SemiReward: A General Reward Model for Semi-supervised Learning, Siyuan Li and Weiyang Jin and Zedong Wang and Fang Wu and Zicheng Liu and Cheng Tan and Stan Z. Li, ICLR 2024\n```"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation of this paper is solid, and the topic of this paper exactly matches ICLR.\n2. The introduction clearly delivers the motivation and idea.\n3. The experimental results look good. It's interesting that many methods cannot even beat random sampling, as suggested in Tab. 1.\n4. The ablation study is extensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new policy to sample a core subset for deep models. It introduces deep clustering with pseudo-labelling to estimate a score for each sample. Meanwhile, the authors try to fix the bias issue of pseudo-labelling. Experiments demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some sentences are redundant, such as these two questions proposed in the paper.\n2. It would be better to move sec 4.1 to sec 3 to give readers an overview of the problem you are solving.\n3. My **main concern** is that more benchmarks in different distributions should be evaluated. As described in the paper, this method relies on a pretrained vision encoder to get the visual features for each sample. Then, a deep clustering algorithm is introduced to get the pseudo labels and scores. However, the evaluated datasets in this paper are too easy for pretrained vision encoders. I believe that much of the data in the evaluation datasets is included during pretraining. If we use a dataset in a different distribution, such as a medical image dataset, without a good visual feature, will this method still work?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel label-free coreset selection methods (ELFS) that outperforms existing baselines on four vision datasets."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024labelfree,\ntitle={Label-Free Coreset Selection with Proxy Training Dynamics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yklJpvB7Dq},\nnote={under review}\n}"
},
"abstract": {
"value": "High-quality human-annotated data is crucial for modern deep learning pipelines, yet the human annotation process is both costly and time-consuming. Given a constrained human labeling budget, selecting an informative and representative data subset for labeling can significantly reduce human annotation effort. Well-performing state-of-the-art (SOTA) coreset selection methods require ground truth labels over the whole dataset, failing to reduce the human labeling burden. Meanwhile, SOTA label-free coreset selection methods deliver inferior performance due to poor geometry-based difficulty scores. In this paper, we introduce ELFS (Effective Label-Free Coreset Selection), a novel label-free coreset selection method. ELFS significantly improves label-free coreset selection by addressing two challenges: 1) ELFS utilizes deep clustering to estimate training dynamics-based data difficulty scores without ground truth labels; 2) Pseudo-labels introduce a distribution shift in the data difficulty scores, and we propose a simple but effective double-end pruning method to mitigate bias on calculated scores. We evaluate ELFS on four vision benchmarks and show that, given the same vision encoder, ELFS consistently outperforms SOTA label-free baselines. For instance, when using SwAV as the encoder, ELFS outperforms D2 by up to 10.2% in accuracy on ImageNet-1K."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Coreset Selection",
"Data pruning",
"Label free coreset selection"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f458be4e5699f8fca6e3d45dbb4a8dfec290677c.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/65b14e08567aabdff96068d0173a7651ac7ff51c.zip"
},
"title": {
"value": "Label-Free Coreset Selection with Proxy Training Dynamics"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ykt6I21YQZ | Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems | main | Active | inverse problem;diffusion model;derivative-free | other topics in machine learning (i.e., none of the above) | 3;3;5;6 | 5;3;4;3 | 1;3;2;3 | 2;2;4;2 | 2;3;3;3 | 4.25 | 3.75 | 2.25 | 2.5 | 2.75 | -0.406181 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- **Q1:** Use of a `pretrained diffusion model` is mentioned. I can see this type of model for images, but for complex data with varying domains of definition, like in many scientific applications, the diffusion model needs to be retrained for each type of data, no? It is not completely clear how reusable these models are outside computational imaging.\n\n- **Q2:** The prediction-correction scheme strongly relates to the usual Plug-and-Play methods classically used in inverse problem resolution. The proposed scheme is related to the Forward-Backward Splitting (FBS) method, which is typically used in DiffPIR [B]. From what I understand, equation (9) in this paper corresponds to equation (13) in [A]. Could you elaborate on the main difference between the proposed method and the DiffPIR framework (except for the derivative-free aspect)? Other missing references are [A], a survey on using diffusion models for inverse problems, and [C], which also considers diffusion priors for inverse problems, with a gradient-based method but applied to the black-hole imaging problem. Adding a comparison with their method would be very useful.\n\n- **Q3:** To make it possible to evaluate how good the results are compared to methods that are able to access the gradient of the forward operator, it is necessary to add a few methods that have access to the forward operator's gradients. Indeed, even though ODE solvers are not always natively differentiable, there are more and more works that consider making them differentiable, for instance using `jax` for Navier-Stokes with a pseudo-spectral solver [here](https://github.com/google/jax-cfd). The interesting question is: should we spend some time making them differentiable, or do we not gain much by doing so? Therefore, quantifying how much is lost on simple cases such as the ones presented here is necessary to make the case for derivative-free methods. 
In particular, adding the results of the DPS method and the DiffPIR method would be very useful, at least for the imaging task. Note that these models are both implemented in the [`deepinv`](https://deepinv.github.io) library (see [here](https://deepinv.github.io/deepinv/deepinv.sampling.html)). Also, adding the DPS baseline and PnP-DM from [C] for black-hole imaging would better illustrate how much we lose by not considering the gradient of this differentiable operator. Adding them for Navier-Stokes would also be very interesting, but probably more challenging.\n\n- **Q4**: The values of $J$ and $Q$ in the experiments are not reported. Could the authors provide them? From equation (16), I understand that we need to compute the forward operator $J$ or $Q$ times at each iteration. From the Navier-Stokes experiment, assuming that the procedure is run for 1k steps, I guess $J = 295$ and $Q=2048$? How were these values chosen? What is their impact on the results? Also, would having access to the gradient mean going roughly 100 times faster than EnKG? (The computational cost of computing the gradient through autodiff is approximately x2/3 times the cost of evaluating the forward operator.) Note that the metrics chosen (# of evaluation of Fwd/DM, Seq) are not clear. A better definition of what they represent would be useful. 
Adding the total runtime of the method would help a lot to assess the computational cost.\n\n- **Q5:** In the black-hole experiment, how many simulations are used to train the diffusion model?\n\n### Minor comments, nitpicks and typos\n\n- Missing ref: \n- l.066: \"One more challenging\" -> \"On\"\n- l.069: \"More computationally efficient\" -> than what?\n- l.078: \"often Gaussian\" -> They are almost never Gaussian, but they are modeled as such and this gives reasonable results.\n- l.141: \"and Nelder-Mead simplex methods\" -> Extra `,`.\n- l.198: The drop of the subscript $x$ for the gradient is confusing\n- Eq (7) -> The notation $\\Delta x_i$ is not defined. As it is not used anywhere else, I would recommend using $\\|x_{i+1} - x_i\\|$ instead.\n- Eq (12) -> $x_i'$ should have a superscript $(j)$. It is also not completely consistent for $x_{i+1}$.\n- l.302: \"instead\"\n- l.316: `our approach outperforms the standard strong DPS baseline` -> The DPS results are not present in the table, so I think they are missing?\n- l.335: What is the `EDM` framework?\n\n\n- l.847: The change from \"l\" to \"j\" should be made explicit, as it is not immediately clear and can appear as a typo. \n\n### References\n\n[A] : Daras, Giannis, et al. [A survey on diffusion models for inverse problems.](https://arxiv.org/pdf/2410.00083) arXiv preprint, 2024. \n[B] : Zhu, Yuanzhi, et al. [Denoising diffusion models for plug-and-play image restoration.](https://arxiv.org/pdf/2305.08995) Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. \n[C] : Wu, Zihui, et al. [Principled Probabilistic Imaging using Diffusion Models as Plug-and-Play Priors.](https://arxiv.org/pdf/2405.18782) arXiv preprint, 2024."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Interesting approach to solve inverse problems.\n- Derivative-free approaches can be useful in many cases and have received much less attention than gradient-based methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new approach to solve inverse problems using a derivative-free optimization method based on the Ensemble Kalman Filter. The core idea is to approximate the data-fidelity term gradient with a statistical linearization from the ensemble Kalman methods. The method is applied to three types of inverse problems: computational imaging problems, the Navier-Stokes equation, and the black-hole imaging problem. The method is compared to other derivative-free baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Positioning relative to the state-of-the-art is not clear, in particular with respect to the proposed framework, which seems to be a variant of the existing ones (see **Q2**).\n- The evaluation is not completely satisfactory (see **Q3, Q4**)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) How does EnKG perform when applied to larger-scale problems or high-dimensional data? Are there specific limitations to its computational efficiency?\n2) Is there any dependence of the performance on the pretrained model? How sensitive is the method to the quality and type of pre-trained diffusion model used? What are the implications if a suitable model is not available?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1)\tEnKG operates without gradients, needing only black-box access to forward models, making it highly applicable to complex inverse problems with unknown or undefined derivatives. The proposed PC framework generalizes existing methods, enabling adaptability across various inverse problems without retraining.\n2)\tThe current work demonstrates strong performance, notably in complex tasks like the Navier-Stokes equation, outperforming gradient-based solutions.\n3)\tProvides deeper understanding and new interpretations of diffusion-based approaches, contributing to the field of inverse problem-solving."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Ensemble Kalman Diffusion Guidance (EnKG), a novel derivative-free method for solving inverse problems using pre-trained diffusion models. Traditional approaches often require detailed knowledge of the forward model, such as derivatives, which limits their applicability. EnKG overcomes this by relying solely on black-box evaluations of the forward model and a pre-trained diffusion model.\nKey contributions are twofold: 1) EnKG operates solely with black-box access to forward model evaluations and a pre-trained diffusion model, making it particularly useful in scenarios where derivative information is inaccessible. 2) The authors introduce a prediction-correction (PC) framework that utilizes the empirical covariance matrix of ensemble particles during the correction step. This innovation allows EnKG to effectively bypass reliance on gradients, enhancing its applicability in non-linear inverse problems.\nThe paper demonstrates the effectiveness of EnKG across various inverse problems, including applications in fluid dynamics. These examples highlight the method's capability to handle complex, non-linear scenarios that are common in scientific research.\nIn summary, this work expands the toolkit for addressing inverse problems in machine learning by introducing a flexible and robust approach that maintains the generative power of diffusion models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The method may face challenges when scaling to very large models or high-dimensional data, as ensemble-based approaches can become computationally expensive. Further insight along this line would be useful. Also, it relies on pre-trained diffusion models, which might limit effectiveness if high-quality models are not available for certain tasks.\n2) The empirical validation focuses on specific problem sets; broader testing across diverse applications would strengthen the generalizability claim. \n3) A further analysis of the algorithm's complexity would be beneficial, as the combination of the prediction and correction steps might introduce additional computational and implementation complexity due to ensemble covariance estimation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Line 316: Is the DPS baseline included in the table?\n- Black-Hole Imaging Problem: In the black-hole imaging problem, how is $G(\\phi(x_i, t_i))$ computed if $G$ is unknown and $\\phi(x_i, t_i)$ differs from the observed data? Could you provide a step-by-step explanation of how the black-box forward model is handled in this case? Additionally, please clarify if any assumptions are made about $G$ or the generated samples in this scenario.\n- Proof of Lemma 1: The proof shows a monotonic decrease in $\\text{tr}(C_{xx}^{(i)})$. Why does this quantity converge to zero? From another perspective, a vanishing covariance implies the trajectories converge to a single point. Does this implication contradict the ill-posed nature of inverse problems, which typically have many possible solutions?\n\nPossible Errata\n- Lines 225 and 284: Remove \\Gamma .\n- Line 731: Remove the extraneous ‘(‘.\n- Line 847: Replace with $\\approx$."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tNovel Approach: The paper introduces statistical linearization within ensemble Kalman methods to diffusion-based inverse problems, a novel concept in this context.\n2.\tInnovative Guidance Term Formulation: The authors present a unique formulation for the guidance term, with a clever trick that replaces the derivative of the forward model with covariance from forward evaluations.\n3.\tComprehensive Validation: The effectiveness of EnKG is demonstrated across three different scenarios: (1) cases where the forward model is known and differentiable, (2) cases where the forward model is known but differentiating it is impractical (e.g., PDE-based models), and (3) cases where the forward model is a black box, with observations as the only available information."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Diffusion models have been used to address inverse problems, with numerous diffusion-based solvers that avoid retraining existing diffusion models. These approaches typically rely on pseudo-inverses or derivatives tied to the forward model. This paper introduces a novel diffusion-based inverse solver designed for cases where the forward model is unknown.\n\nThe authors propose Ensemble Kalman Diffusion Guidance (EnKG), a derivative-free method that utilizes only forward model evaluations along with a pre-trained diffusion model.\nIn the proposed method, the guidance term is computed as follows:\n1. Particles are initialized to compute the covariance in the following steps.\n2. During the diffusion trajectory, the particles are pushed by the ODE solver.\n3. Then, the forward model is applied to the synthesized samples, and their covariance is computed.\n4. The diffusion trajectory is updated by the formula given in EnKG.\nThe proposed method replaces the derivative of the forward model with covariance computation and ODE solving.\n\nThe empirical results demonstrate the effectiveness of EnKG across diverse inverse problems, including cases where the measurement operator is treated as a black box. Specifically, the method is applied to image inversion problems with explicit forward models, Navier-Stokes inverse problems where the forward model is computed by solving PDEs, and black hole imaging, where the forward model is a black box."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Discussion of Related Work: The paper lacks depth in interpreting and explaining related works.\n- The motivation behind the weighting matrix $w_i C_{xx}^{(i)}$ is unclear. Could you provide further explanation on the intuition and reasoning behind the choice of the weighting matrix?\n- Although the Kalman method is a core component, the paper does not provide a thorough explanation of its role and mechanics in this context. Specifically, which parts of the method are directly applied from existing literature and which represent novel contributions of this paper? For example, the introduction of the weighting matrix, the derivation using local linearity of the operator in the proof, and the convergence claim. Related to the above questions, please clarify the intuition and motivation provided by the literature versus that introduced by the authors.\n2. Overstated Contribution: The claimed contributions seem somewhat overstated. The concept of a Predictor-Corrector interpretation in guidance-based methods is not entirely new."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Plot (b) in Figure 3 is difficult to interpret. Could you clarify what the values 0.2, 0.4, ..., 0.8 represent? Additionally, what do \"Seq # DM\" and \"Seq # DM grad\" refer to in this context?\n- Plot (c) reports the runtime of the proposed algorithm, with an average of approximately 140 minutes. This is a considerable computational cost (about 2 hours) for solving one inverse problem.\n * How does this runtime compare to other algorithms (aside from EKI) ?\n * Can you comment on the practical applicability of the method given this runtime?\n * Considering the high computational cost, how does the algorithm perform relative to methods that fine-tune or train smaller network components for the guidance term, as seen in [1, 2, 3, 4]?\n\n\n---\n.. [1] Black, Kevin, et al. \"Training diffusion models with reinforcement learning.\" arXiv preprint arXiv:2305.13301 (2023).\n\n.. [2] Uehara, M., Zhao, Y., Black, K., Hajiramezanali, E., Scalia, G., Diamant, N. L., ... & Levine, S. (2024). Fine-tuning of continuous-time diffusion models as entropy-regularized control. arXiv preprint arXiv:2402.15194.\n\n.. [3] Fan, Ying, et al. \"Reinforcement learning for fine-tuning text-to-image diffusion models.\" Advances in Neural Information Processing Systems 36 (2024).\n\n.. [4] Denker, Alexander, et al. \"DEFT: Efficient Finetuning of Conditional Diffusion Models by Learning the Generalised $ h $-transform.\" arXiv preprint arXiv:2406.01781 (2024)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Introduction of an algorithm that solves inverse problems with a diffusion model prior that only requires point-wise access to the forward model"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors address inverse problems using diffusion models as priors within a restrictive setup where the forward model is accessible only point-wise.\nThey employ a Predictor-Corrector framework, where in the prediction stage, samples are drawn from the prior using the ODE describing the diffusion model.\nIn the correction stage, a MAP problem involving the forward model at the current diffusion step is solved through gradient-based updates.\nUnlike previous approaches, the authors approximate the intractable term in this MAP formulation by evaluating it on samples generated via the ODE.\nSince only point-wise access to the forward model is available, the authors estimate the gradient through statistical linearization and ensemble Kalman methods.\nThis approach maintains a set of particles and uses them, along with their centroids, to approximate the gradient.\nThe authors validate their algorithm on three different inverse problems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Methodological concerns**\n\n- The authors’ motivation for their \"Derivative-free correction step\" in Lines 246–263 is unclear. The matrix $C_{xx}$ appears without adequate justification, and in Equation (12), the authors invert $C_{xx}$ even though it is a singular matrix, as the number of samples used to compute it is less than the dimensionality of the problem. Consequently, $C_{xx}$ cannot be treated as a preconditioning matrix in this context.\n- In the appendix, the authors base their proofs on Equation (20), which is introduced without sufficient explanation. This equation appears to correspond to the ensemble update, which assumes an estimation of the gradient, the very objective of the lemmas and propositions that follow. This circular reasoning raises concerns about the validity of the proofs.\n\n\n**Technical concerns**\n\n- In Lines 125–127 (paragraph following Equation (4)), the authors suggest that the guidance term depends on the noise scheduler $\\dot{\\sigma} \\sigma$. However, this dependence results from their specific formulation. As a verification, the authors can review Equation (5) in [1] or write DPS's algorithm in terms of the score. The guidance term is not scaled by the noise scheduler.\n- The statement in Lines 194–195 is misplaced. Specifically, $\\log \\hat{p}(y | x_{i+1})$ is a composition of the simulated ODE and the forward model, making it highly non-convex. Therefore, the hypothesis of convexity is unrealistic. Besides, this term varies at each diffusion step, as it is composed with the ODE at different time steps, which diverges from the requirements outlined in [2], Chapter 4. Hence, the iterative updates may not converge to a true MAP estimate within this setup.\n\n\n**Errors and clarifications**\n\n- Equations (12)–(14) lack clarity. 
The variable $x_{i+1}'$ is undefined, and while the argmin is specified with respect to $x_{i+1}^{(j)}$, this variable does not appear in the equations.\n- In Lines 897–903, the gradients of $p$ are missing a logarithmic term.\n- The first part of Assumption (3) regarding $C_{xx}$ is redundant. Since $C_{xx}$ is a positive semi-definite matrix, its trace is non-negative, being the sum of non-negative eigenvalues. Therefore, if the trace of $C_{xx}$ is zero, $C_{xx}$ must be the zero matrix.\n- In Lines 316–317, \"DPG\" should replace \"DPS,\" as DPS is not included in the experiments.\n- The term \"guidance\" is repeated twice in Line 52.\n\n\n---\n\n.. [1] Chung, Hyungjin, et al. \"Diffusion posterior sampling for general noisy inverse problems.\" arXiv preprint arXiv:2209.14687 (2022).\n\n.. [2] Parikh, Neal, and Stephen Boyd. \"Proximal algorithms.\" Foundations and trends® in Optimization 1.3 (2014): 127-239."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ensemble,\ntitle={Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ykt6I21YQZ},\nnote={under review}\n}"
},
"abstract": {
"value": "When solving inverse problems, it is increasingly popular to use pre-trained diffusion models as plug-and-play priors. This framework can accommodate different forward models without re-training while preserving the generative capability of diffusion models. Despite their success in many imaging inverse problems, most existing methods rely on privileged information such as derivative, pseudo-inverse, or full knowledge about the forward model. This reliance poses a substantial limitation that restricts their use in a wide range of problems where such information is unavailable, such as many scientific applications. To address this, we propose Ensemble Kalman Diffusion Guidance (EnKG) for diffusion models, a derivative-free approach that can solve inverse problems by only accessing forward model evaluations and a pre-trained diffusion model. We study the empirical effectiveness of our method across various inverse problems, including scientific settings such as inferring fluid flows and astronomical objects, which are highly non-linear inverse problems that often only permit black-box access to the forward model."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"inverse problem",
"diffusion model",
"derivative-free"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3da29777b80ef4b3784cee06c0329a5d8f04d3ed.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e9918658d4fe1d1e0440c2f7e877f1f68437545d.zip"
},
"title": {
"value": "Ensemble Kalman Diffusion Guidance: A Derivative-free Method for Inverse Problems"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ykuc5q381b | BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval | main | Active | Retrieval benchmark;Reasoning | datasets and benchmarks | 3;5;6;8;10 | 4;3;4;4;4 | 3;2;3;3;4 | 3;3;3;3;4 | 3;4;3;3;4 | 6.4 | 3.8 | 3 | 3.2 | 3.4 | 0.289662 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The only two clarification issues I had are:\n- Details regarding human annotator guidelines, annotator recruitment, and compensation. This would help in better understanding the reliability of the human annotations.\n- More specifics regarding licensing (i.e., a table with licensing terms for each dataset being used)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "A strong benchmark paper should satisfy some of the following dimensions (along with some commentary): \n- the task is useful\nDifficult document retrieval is a long- and widely-studied problem. It is both more important in the era of LLMs due to increased reasoning capabilities, but potentially less important as more information is encoded in the parameters (modulo time-sensitive, etc.). Thus, additional motivation could help justify the significance of the work. \n\n- the dataset is large enough and non-trivial to construct\nThis is also mixed; the dataset isn't particularly large and is ostensibly of varying quality between Stack Exchange, Coding, and Math. That being said, it is clearly more complex than many existing benchmarks for at least a subset of the questions. For some questions, the quality is seemingly higher (i.e., more human validation) than existing datasets.\n\n- there are sufficient details regarding the construction of the benchmark\nIncluding the appendices, there are a lot of details -- to the point where I am confident I could replicate most of the results. However, the amount and clarity of the procedure for different data sets (Stack Exchange, Coding, and Math) isn't as detailed for all cases. Also, it isn't clear in general what the human annotation guidelines were, how annotators were recruited, and how they were compensated (unless it is just the authors and volunteers). However, the details are solid overall.\n\n- the tools provided reduce friction for new people to work on this\nCode is provided and was used to run several experiments. I didn't dig through the code and thus do not know how easily it is to conduct experiments. 
However, I am reasonable confident it is sufficient.\n\n- the baseline models tested on the benchmark are non-trivial\nThe authors conduct several experiments over several different retrieval engines including state-of-the-art systems on related datasets.\n\n- the benchmark answers new questions or enables new solutions\nThe authors did conduct experiments beyond just IR performance and were able to address some of these questions using this dataset. The discussion in these sections could be strengthened, but it is solid in this regard overall.\n\nEvaluating the paper with respect to the stated dimensions,\nOriginality: There are multiple 'hard QA/IR' datasets, but the emphasis here is on IR for reasoning-heavy scenarios -- which is timely and a useful contribution. Many have likely considered such datasets, but the execution here is better than a first attempt.\nQuality: Overall, the work is well-motivated, well-executed, and sufficiently rigorous. My primary concern in this regard is variance in quality between different benchmark types (QA, Math, Coding) and that this is a relatively small dataset.\nClarity: Overall, the paper is easy to understand and has sufficient details, especially when considering the appendices. The figures are helpful. My two suggestions in this regard are a Table comparing Bright to the most related datasets and more discussion regarding the empirical results including specific references to cells in the tables (i.e., I didn't always know which cells I was looking at when validating quantification claims).\nSignificance: I am fairly certain that at least part of this benchmark will be used, but not sure if all parts will be used. Additionally, it would have more potential impact if it was a larger dataset (or there was clear evidence that it covers some expected 'production' distribution)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose Bright, an information retrieval benchmark that focuses on questions that require 'reasoning-intensive' retrieval to provide relevant content. Whereas most existing benchmarks focus on factoid questions where there is significant semantic overlap between the query and the retrieved content (or some 'translation', but not 'reasoning'). Specifically, Bright is actually 12 datasets including 7 datasets from StackExchange covering {Biology, Earth Science, Economics, Psychology, Robotics, Stack Overflow, Sustainable Living}, 2 coding data settings from {LeetCode (python), Pony}, and 3 math reasoning datasets {TheoremQA-Question Retrieval, TheoremQA-Theorem Retrieval, Art of Problem Solving}. Details are provided regarding the procedures to collect each dataset. In short, for StackExchange, human annotators browse recent posts and select a post with at least 5 upvotes and contain at least one URL link -- which are human validated to produce questions, answers, and relevant documents. Negative documents are collecting via identifying semantically similar but irrelevant documents (i.e., negatives) via a Google search powered method. These are human validated for unanimity. Pony coding adapts a code generation dataset to retrieving pages from manuals to cover syntax and LeetCode via a fairly straightforward crawling procedure. Math reasoning adapts TheoremQA to retrieve math queries that either use the same theorem as the query's solution (question retrieval), theorems from ProofWiki (theorem retrieval), or Math Olympiad problems (Art of Problem Solving) matching other problems that use the same theorems. Experiments are conducted based on 13 different retrieval engines including sparse retrieval, open-source dense retrieval, and proprietary models. The important findings is that nDCG@10 has variance amongst different systems (i.e., improvements can be made) while being relatively low as compared to other benchmarks (i.e., it is difficult). 
Additional experiments show that querying with LLM reasoning (i.e., chain-of-thought) improves performance (i.e., reasoning is needed for retrieval, irrespective of underlying retrieval method), retrieval improves RAG-based results (i.e., retrieval is an important problem). They also demonstrate that reranking with increasingly powerful LLMs improves retrieval performance, Bright appears robust with respect to data leakage in pre-training (i.e., pre-training doesn't cover reasoning requirements as much as most tasks), and that long-content retrieval (e.g., legal, medical) is more difficult."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "On the other hand, below are some of my concerns regarding this paper (some previously mentioned):\n- I would like to see a statistics-level comparison between Bright and competing datasets in a table (\"Benchmarking retrieval\" (line 104) and \"Benchmarking reasoning\" (line131) in Related Work section)\n- I would like clarification regarding the annotation guidelines, recruiting experts, compensation for lines 248-253.\n- To get the details for coding and math, one pretty much has to read the appendices and from what I can tell, the significance and quality of the Stack Exchange questions is the strongest aspects of the paper.\n- The appendices are more detailed (to the point where they actually seem different from the text). However, still no details regarding annotation guidelines, recruiting experts, and compensation (unless the authors did all of this)\n- The dataset seems relatively small; if I am incorrect, I would recommend a table contrasting this with other datasets (along with other aspects).\n- For reasoning steps, a bit more from the appendix (e.g., StackExchange vs. coding vs. math stratification) would be helpful in the main text with discussion.\n- In general, it isn't clear that ordering matters for RAG settings, so NDCG-based results may not be that useful as in IR settings. I also would recommend rank-biased precision and recall (i.e., evaluations similar to 'needle-in-a-haystack' settings.\n- As implied in other areas, there are a lot of results, so more specific interpretation would be helpful (but I am aware of the page limit).\n- While the authors claim that there are not licensing issues, I wasn't able to verify this. Obviously, if there are licensing issues (for academic research within commercial organizations?)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "### Questions \nPerhaps I missed something, but do you think any of the following analysis will be helpful for future works - \n- A quantitative analysis that shows the rank of the gold document for the best retriever (QWEN).\n- A qualitative analysis that examines why strong LLMs (e.g., GPT-4 in Table 3) fail to correctly rank relevant documents. The gold document can be added as an oracle to when it was not retrieved. If not, do you think that a different analysis regarding model errors can be helpful? \n- The ratio of evaluator failures described in lines 454-458 (although this is minor).\n\nAgain, maybe I am missing something, but did you also experiment with the StackExchange answers as positive passages?\n\n### Suggestions \n- The abstract mentions that “The leading model on the MTEB leaderboard which achieves a score of 59.0 nDCG@10,1 produces a score of nDCG@10 of 18.0 on **BRIGHT**”. Consider adding which model this is, because the MTEB leaderboard is changing constantly.\n\n- The example for “Level 1: Keyword-based Retrieval” only states that the part of highway 401 is *one of the widest*, while the question asks for the widest highway. I understand this is from the NQ dataset, but a positive passage that directly answers the query might be simpler for the reader.\n\n- The appendix is sometimes referenced as Appendix (e.g., Appendix C) and sometimes as § (e.g., §B.3). Consider using one for consistency (this is of course very minor).\n\n- The subsets of **BRIGHT** are relatively small, with the smallest including only 78 examples. Adding error bars or variance for the results, either in the paper or in a leaderboard, could help future researchers focusing on a specific subset."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The data collection pipeline is thorough and includes verification by two domain-expert PhD students. \n- Lots has been done to ensure diversity by focusing on several different StackExchange domains, two coding tasks, and an additional effort for including theorem-based questions.\n- The experiments are extensive and cover 13 different retrievers, in addition to two re-rankers.\n- The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents **BRIGHT**, a benchmark for evaluating retrievers on reasoning intensive tasks. **BRIGHT** contains 1,398 realistic tasks over 12 domains, including Biology, Psychology, Coding, and Math. By experimenting with 13 retrievers, the paper shows that **BRIGHT** is challenging for current systems - the best model reaches an nDCG@10 score of 22.1. Moreover, enhancing queries with reasoning traces improves retrieval performance, retrieval-augmentation with current retrievers improves QA accuracy, re-ranking with LLMs can increase retrieval accuracy, and pre-training on documents in **BRIGHT** does not further increase performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It could have been helpful to add a quantitative analysis in the analysis section. For example, an analysis that examines when models err could be useful for future research (see the Questions section for further discussion and some suggestions).\n\n- There are a few details in the appendix that are not referenced from the main paper (e.g., the Limitations section), and I found the appendix a bit hard to follow. Consider verifying all main sections are referenced, or alternatively adding a small Table of Contents in the beginning of the Appendix."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Have you tried finetuning retrievers on this benchmark?\n- In figure 1, level 2 queries (NQ, MSMARCO) are outdated. It would be better to compare BRIGHT’s “intensive” reasoning to recent retrieval benchmarks, such as RAR-b or subset of BeIR and MTEB that goes beyond semantic matching."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The benchmark is challenging and has the potential to drive future research toward developing retrievers that handle difficult queries more effectively.\n- The dataset is human-collected, ensuring authenticity rather than relying on artificially generated data. \n- I liked the comprehensive appendix, which provided valuable insights into the annotation process and the dataset's structure."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents BRIGHT, a challenging retrieval benchmark designed to require deep, beyond-surface-form matching between queries and documents. Human annotators constructed a dataset of 1,398 instances spanning diverse domains. Current retrieval models perform poorly on this benchmark, highlighting its difficulty. The authors also propose several techniques aimed at improving retrieval performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Detailed analysis of results is lacking and some (RAG, reranking) are not surprising.\n- In line 413, how can LLM-reasoning queries enhance the reasoning capabilities of retrievers? If the primary effect is an increase in lexical similarity (BM25's strong performance), should models be specifically trained to leverage this feature to perform well on BRIGHT? Additionally, the results for Coding and Theorem-based datasets (Table 38) appear inconsistent.\n- In line 428, regarding retrieval-augmented generation, the QA performance seems to be already quite high when paired with current retrievers that are reported to perform poorly."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper focuses on creating a new dataset to benchmark reasoning-intensive queries, which is a very important type of queries for RAG systems and search engines, and there does not exist such a benchmarking dataset so far.\n\n2. The dataset covers a variety of domains, including economics, psychology, mathematics and coding, which is quite comprehensive.\n\n3. The retrieval process in the experiments incorporated explicit reasoning about the query and achieved an up to 12.2 points improvement, which demonstrated that reasoning is indeed a bottleneck for such types of queries, which aligns very well with the motivation of the datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper argues that for retrieval tasks, many real-world queries require in-depth reasoning to identify relevant documents, which goes beyond traditional keyword or semantic-based matching focused by existing benchmarks.\nTo benchmark retrievers on such reasoning-intensive queries, this paper created a new dataset, Bright, which consists of 1398 queries spanning diverse domains. \nThe benchmarking on this new dataset demonstrated that state-of-the-art retrieval models perform poorly on this dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There is no unified definition of either relevance or required reasoning across data from different domains in this dataset. Though the author wrote different \"relevance\" in different subsections, they are actually not definition of relevance but how the corpus is created.\nIt is acceptable as a dataset, however, there is a lack of deep scientific understanding about what exact (reasoning) ability is required for a retriever model to success on this dataset. Instead what we can learn from this dataset is that the retriever model may need to overfit some ad-hoc collection process.\n\n2. Due to a lack of unified relevance definition, the dataset is more like a collection of different benchmarking applications where different relevant (can be either closely or loosely) information to the query can be helpful. Therefore, rather than using it as a retrieval benchmark to evaluate retrievers, it is more suitable to use it to evaluate the application. For these applications, the retrieval results are just intermediate results and can be difficult to judge whether they can actually help the final results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. (Lines 248-253): Why not use all three annotators independently? It seems you biased the expert annotators with annotations performed by a non-expert\n2. Section 5.2 claims that continued pre-training does not help. However, you mention in Appendix A.3 that the average number went to 21.0! Why was 20.4 picked as the final number in Table 5. It is possible that continued pre-training in this fashion (only showing new documents from Stack Exchange) is making the model lose its generalization abilities. There are two options to do this ablation study -- train a model with a replay buffer, which samples data from the original training data along with stack exchange. Train a model where all data is used together (original training data and stack exchange)\n3. How does an LLM (such as GPT-4o) work on the downstream task. That is give it a question, and evaluate its performance on answer quality (without giving any document)"
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. A high quality, relevant and challenging benchmark for information retrieval tasks\n2. Comprehensive evaluation on a wide variety of models\n3. Love the section on how chain of thought reasoning helps improve the models -- especially BM25!\n4. Downstream task evaluation is also very helpful -- confirms the documents are relevant"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "## Summary\nThe paper presents a challenging and novel retrieval benchmark which requires reasoning to answer queries\n\n## Contributions\n1. The first benchmark to focus solely on reasoning intensive retrieval. The benchmark is constructed from diverse domains and carefully curated to remove possible noise\n2. Thorough experiments are done with respect to a variety of retrieval models -- sparse, open source small models, open source large models and even proprietary models\n3. The authors list the data creation and curation process in detail. This would further help in creating similar such benchmarks"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I could not think of any questions I had which were not answered in the paper. I have some observations which are mentioned in the questions"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024bright,\ntitle={{BRIGHT}: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ykuc5q381b},\nnote={under review}\n}"
},
"abstract": {
"value": "Existing retrieval benchmarks primarily consist of information-seeking queries (e.g., aggregated questions from search engines) where keyword or semantic-based retrieval is usually sufficient. However, many complex real-world queries require in-depth reasoning to identify relevant documents that go beyond surface form matching. For example, finding documentation for a coding question requires understanding the logic and syntax of the functions involved. To better benchmark retrieval on such challenging queries, we introduce BRIGHT, the first text retrieval benchmark that requires intensive reasoning to retrieve relevant documents. Our dataset consists of 1,398 real-world queries spanning diverse domains such as economics, psychology, mathematics, coding, and more. These queries are drawn from naturally occurring or carefully curated human data. Extensive evaluation reveals that even state-of-the-art retrieval models perform poorly on BRIGHT. The leading model on the MTEB leaderboard (Muennighoff et al., 2023), which achieves a score of 59.0 nDCG@10,1 produces a score of nDCG@10 of 18.0 on BRIGHT. We show that incorporating explicit reasoning about the query improves retrieval performance by up to 12.2 points. Moreover, incorporating retrieved documents from the top-performing retriever boosts question answering performance by over 6.6 points. We believe that BRIGHT paves the way for future research on retrieval systems in more realistic and challenging settings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Retrieval benchmark",
"Reasoning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f8d579bcbed2e911c91ea37cb767128ab6930a75.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/b21e98fbc277aaa47c3b6a11f9e18b5e1e83c1c4.zip"
},
"title": {
"value": "BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ylgg2RE7ub | IF-MODGS : INITIAL FREE MONOCULAR DYNAMIC GAUSSIAN SPLATTING | main | Withdraw | novel view synthesis;4D rendering;camera pose estimation;3D reconstruction | applications to computer vision, audio, language, and other modalities | Yeomsuwoong;Jimin Roh;Eunho Shin;Kyeongbo Kong;Joonsoo Kim;Songju Na;Suk-Ju Kang | ~Yeomsuwoong1;~Jimin_Roh1;~Eunho_Shin1;~Kyeongbo_Kong1;~Joonsoo_Kim2;~Songju_Na3;~Suk-Ju_Kang1 | 3;3;5;5 | 4;4;5;4 | 3;3;2;3 | 2;2;1;2 | 3;3;2;2 | 4 | 4.25 | 2.75 | 1.75 | 2.5 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "We thank the reviewers for reviewing our paper. After careful consideration, we think our paper is inappropriate for ICLR, and we decided to withdraw our paper."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In lines 260–261, the authors mention using Mask R-CNN to extract a motion mask. Given that Mask R-CNN is primarily an object detection and segmentation method, how is it specifically utilized to obtain the static mask in this context?\n2. In section 3.2, the authors propose to use the scale and shift estimated from the static part for the dynamic part. If a particular frame contains an excessively large moving part (which is common in object-centric datasets), resulting in a small static part and insufficient static depth, how does this affect the accuracy of the estimated scale and shift, and how is this handled? Given that the same scale and shift are applied to the dynamic part, the motion could complicate optimization. Without accurate scale and shift estimates, how can the authors ensure successful optimization in such scenarios?\n3. It is evident that the initial module is crucial, as it provides both point clouds and the initial pose, with the core of this module being optimization based on view consistency. However, the depth information provided for optimization is inconsistent in scale and shift, and the pose estimation begins from scratch, especially under dynamic conditions. How easy is it to optimize under these conditions? Is there a risk of falling into local optima during the optimization process?\n4. In lines 294–296, the authors mention using 5% and 10% thresholds. Why were these specific percentages chosen? They appear to be chosen intuitively. Additionally, what criteria are used to sort the depth values within a frame to derive the 5% and 10% results?\n5. How is the proposed CSM network trained? Where does the ground truth for the Gaussian splats (GS) come from?\n6. How is the canonical space for each object defined? Can this be applied to each object individually? Additionally, is the static part also transformed into the canonical space? The authors mention using CSM to transfer points to the canonical space, but how are these points converted back to global space before rendering?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is easy to understand.\n2. The results show the effectiveness of the proposed pipeline."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a dynamic Gaussian reconstruction method. The proposed pipeline consists of two steps: the initial module and the reconstruction module, which handle static and dynamic components separately. The static initial module initializes point clouds and estimates the pose based on static elements, while the dynamic module focuses solely on initializing dynamic point clouds. In the reconstruction phase, the static reconstruction leverages pose estimation and the point clouds, whereas the dynamic part employs a deformation network for dynamic reconstruction and pose estimation. Finally, both static and dynamic components are combined for the final rasterization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In the Methods section, the authors should provide succinct descriptions of the methods employed, such as COGS and monocular depth, rather than solely citing them. Including specific aspects, such as key features, primary steps, or crucial settings, would enhance clarity and allow readers to understand the context and relevance of these techniques within the proposed framework.\n2. In line 292, the authors claim that the dynamic point cloud obtained from the process is unstable due to adjustments made with the static scale and shift values. If the results of the dynamic part are indeed affected by an overall incorrect scale and shift, then simply removing outliers (top 5% and bottom 10%) may not adequately address this issue. While outlier removal can eliminate extreme values, it does not rectify the underlying problem of an incorrect overall scale and shift. The authors should discuss any additional steps taken to correct scale and shift for stability, or acknowledge this as a limitation if unresolved.\n3. In line 299, how are the parameters (r and s) for the Gaussian representation obtained? The authors only describe the point cloud in the previous sections. To improve completeness, the authors should provide the mathematical formulation or algorithm used to derive these parameters from the point cloud data.\n4. Figure 3 and the lower part of Figure 1 share the same framework, which is redundant. I recommend removing the lower portion of Figure 1 to streamline the presentation.\n5. Considering NeRF is also a good 3D representation, the authors should also compare to state-of-the-art NeRF-based dynamic methods, such as DynNeRF [1], CTNeRF [2], DynPoint [3], and MonoNeRF [4].\n6. I have some doubts regarding the novelty of this work. Previous dynamic reconstruction efforts typically involve first segmenting the scene into dynamic and static components, using depth prior knowledge to assist in the process. This paper appears to be more of an engineering implementation, resembling a combination of existing approaches rather than presenting novel research contributions. To clarify, the authors could explicitly articulate the unique technical contributions of their approach and explain how it advances the field beyond engineering implementation.\n\n[1] Chen Gao, Ayush Saraf, Johannes Kopf, and Jia-Bin Huang. Dynamic view synthesis from dynamic monocular video. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5712–5721, 2021.\n[2] Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Yang Long, and Yefeng Zheng. Ctnerf: Cross-time transformer for dynamic neural radiance field from monocular video. arXiv preprint arXiv:2401.04861, 2024.\n[3] Kaichen Zhou, Jia-Xing Zhong, Sangyun Shin, Kai Lu, Yiyuan Yang, Andrew Markham, and Niki Trigoni. Dynpoint: Dynamic neural point for view synthesis. Advances in Neural Information Processing Systems, 36, 2024.\n[4] Fengrui Tian, Shaoyi Du, and Yueqi Duan. Mononerf: Learning a generalizable dynamic radiance field from monocular videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17903–17913, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the above section for my questions and suggestions."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. A novel approach to handling dynamic scenes without requiring traditional SfM-based initialization, which leverages the separation of static and dynamic components for independent optimization.\n2. Enhances the quality of complex spatiotemporal scenes through a combination of high-dimensional feature loss and annealing frequency loss.\n3. Well-structured presentation with clear pipeline illustrations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents IF-MODGS, a novel approach for reconstructing and rendering dynamic scenes using only monocular camera input without requiring pre-computed camera poses or point clouds. The key contributions include a pipeline that separates static and dynamic regions to estimate camera poses from static backgrounds and generate point clouds for dynamic objects; a Canonical Space Mapper (CSM) that defines a canonical space and applies deformation to link it with different viewpoints and timestamps; enhanced quality in complex spatio-temporal scenes through a combination of high-dimensional feature loss and annealing frequency loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The reviewer appreciates the authors' efforts in addressing dynamic scene reconstruction without relying on SfM points and camera parameters as known priors. However, there is considerable room for improvement in the experimental design and comparison of results:\n\n1. First, it appears that Table 1 and Table 2 are redundant, as both seem to display quantitative results on the NVIDIA dataset. Out of all 7 samples in this dataset, the proposed method outperforms the baselines in only 3. Additionally, the quantitative results for the UCSD dataset are missing from the paper.\n\n2. Similar to RoDynRF, which employs two separate neural radiance fields to represent static and dynamic regions of a scene, the proposed method uses two distinct sets of 3D Gaussians for modeling these regions. The reviewer wonders about the performance of the baseline if the same motion mask were applied, optimizing static and dynamic scenes independently.\n\n3. How does the computational efficiency of the proposed method compare to the baselines in terms of training time and testing speed? This aspect is highlighted as one of the main advantages of using GS.\n\n4. The proposed method depends on the motion mask predicted by Mask R-CNN. How well would the method perform if the motion mask were imperfect? How robust is the motion mask prediction method?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why was the method not tested on the HyperNeRF dataset?\n- In Table 1, what does “w/o initial” mean? Does it omit an initialization module?\n- Could you explain the motivation behind introducing CSM? I am unclear on why two warping fields (CSM and HexPlane) are necessary. A clearer explanation and an ablation analysis in this regard would be helpful.\n- How do you define the static region based on the Mask R-CNN output? For example, in the upper row of Fig. 4, the window could potentially be dynamic as well."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The first dynamic 3DGS approach that operates without pose initialization.\n- Superior performance compared to baseline methods.\n- The paper is clearly written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method for dynamic 3D reconstruction without pose initialization using 3D Gaussian Splatting (3DGS). By leveraging color, depth prediction, and motion mask images as inputs, the method decomposes the scene into static and dynamic parts, achieving competitive results on benchmark datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The method is overly engineered, utilizing multiple priors (depth maps, motion masks), loss functions, and multi-stage processing. This complexity can obscure the core idea and contribution of the work. Specifically, the method includes separate \"initialization\" and \"reconstruction\" modules, despite claiming to be an initialization-free approach. Why can’t these modules be unified?\n- Limited novelty: The approach consists of existing modules. The static initialization component is essentially the same as COLMAP-free 3DGS, while the warp field parameterization resembles 4DGS. The method simply breaks down the problem into smaller parts, supported by monocular predictions.\n- The method heavily depends on Mask R-CNN’s motion mask, making it susceptible to errors in network predictions. An ablation analysis assessing the sensitivity to motion mask accuracy from different mask inputs would be valuable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Separately processing the static and dynamic regions of a scene is indeed a reasonable approach. However, it's important to note that many existing methods in SLAM have adopted similar strategies for handling static and dynamic elements."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors present IF-MoDGS, a novel approach for scene reconstruction and novel view synthesis (NVS) that eliminates the need for precomputed camera poses and point clouds from Structure-from-Motion (SfM). The method divides the scene into static and dynamic regions, using the static background to estimate camera poses and a specialized dynamic module to handle moving objects. To enhance spatio-temporal consistency, the authors introduce a high-dimensional feature loss and an annealing frequency loss, which improve rendering quality across complex dynamic scenes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are some concerns about the paper:\n\n1. I agree that SfM often struggles to extract accurate camera poses and obtain sparse point clouds in certain scenarios. However, the proposed method has only been tested on the NVIDIA and UCSD datasets, which utilize multi-view camera setups with significant camera angle differences. In such settings, SfM methods generally perform well in estimating camera poses. If the authors wish to emphasize their contribution to camera pose estimation, it would be beneficial to test the method on casually captured monocular videos, such as those from the DAVIS dataset, where camera angle changes are minimal and pose estimation is more challenging for SfM.\n\n2. What is the accuracy of camera pose estimation compared to the ground truth? How effective and reliable is the proposed pose estimation method?\n\n3. The provided rendered videos exhibit noticeable visual artifacts, such as a car moving at an unnatural speed. Additionally, the videos are rendered from a fixed camera position and angle. How does the rendering performance vary with different viewpoints and positions? Could the authors provide more analysis on this?\n\n4. More importantly, the videos focus solely on the authors' method. Could the authors provide additional video comparisons with baseline methods?\n\n5. Given that the proposed method uses masks, do the baseline methods also utilize masks? How can fair comparisons be ensured when additional information is involved? Furthermore, how critical is the accuracy of the masks? If the mask is inaccurate at the boundaries between static and dynamic regions, what issues might arise?\n\n6. Why are the results for D3DGS so poor? This seems unusual. Could the authors provide more details on the implementation of each baseline method listed in Table 2?\n\n7. Please add citations for the referenced methods in Table 1."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Monocular Dynamic Scene Rendering and Camera Pose Estimation using 4D Gaussian Splatting"
},
"_bibtex": {
"value": "@misc{\nyeomsuwoong2024ifmodgs,\ntitle={{IF}-{MODGS} : {INITIAL} {FREE} {MONOCULAR} {DYNAMIC} {GAUSSIAN} {SPLATTING}},\nauthor={Yeomsuwoong and Jimin Roh and Eunho Shin and Kyeongbo Kong and Joonsoo Kim and Songju Na and Suk-Ju Kang},\nyear={2024},\nurl={https://openreview.net/forum?id=ylgg2RE7ub}\n}"
},
"abstract": {
"value": "In the field of scene reconstruction with moving objects, recent studies have utilized 3D Gaussian Splatting (3DGS) for spatial representation. This method typically relies on camera poses and point clouds obtained through the Structure-from-Motion (SfM) algorithm. However, in scenes captured with monocular viewpoints and containing moving objects in each frame, the SfM algorithm struggles to obtain accurate camera poses and points clouds. As a result, it often either removes point clouds of dynamic objects or fails to find camera poses for each frame, thereby leading to sub-optimal rendering of dynamic scenes. We propose a novel approach, Initial-Free Monocular Dynamic Gaussian Splatting (IF-MoDGS) which does not require precomputed camera poses and point clouds in dynamic scenes with moving objects. Our approach estimates camera poses using the static background, separated from dynamic objects by a motion mask, and generates point clouds specifically for the dynamic objects. To handle dynamic objects, we define a canonical space and apply deformation to link it with each viewpoint and timestamp. Then, to improve quality in complex spatio-temporal scenes, we utilize a high-dimensional feature loss and an annealing frequency loss. Extensive experimental results demonstrate that our method can effectively render dynamic scenes without relying on precomputed camera poses and point clouds, achieving the state-of-the-art performance in dynamic scene rendering tasks using a monocular camera. Our project will be available at:https://anonymous.4open.science/w/IF-MODGS-67F5/"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Yeomsuwoong1",
"~Jimin_Roh1",
"~Eunho_Shin1",
"~Kyeongbo_Kong1",
"~Joonsoo_Kim2",
"~Songju_Na3",
"~Suk-Ju_Kang1"
]
},
"authors": {
"value": [
"Yeomsuwoong",
"Jimin Roh",
"Eunho Shin",
"Kyeongbo Kong",
"Joonsoo Kim",
"Songju Na",
"Suk-Ju Kang"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"novel view synthesis",
"4D rendering",
"camera pose estimation",
"3D reconstruction"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "yeomsuwoong|ifmodgs_initial_free_monocular_dynamic_gaussian_splatting"
},
"pdf": {
"value": "/pdf/e4b6f188d0ef636dc4f4a82ef2018ad047123f8c.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "IF-MODGS : INITIAL FREE MONOCULAR DYNAMIC GAUSSIAN SPLATTING"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
ylhKbwJrjC | Mechanism design with multi-armed bandit | main | Active | mechanism design;incentive compatibility;efficiency;individual rationality;budget balance;multi-armed bandit;probably approximately correct | other topics in machine learning (i.e., none of the above) | 3;5;6 | 3;2;2 | 2;3;3 | 1;2;3 | 3;2;3 | 4.666667 | 2.333333 | 2.666667 | 2 | 2.666667 | -0.944911 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "> Although the paper provides an approach with computational efficiency, the LP studied in this paper differs from and looks more accessible than the prior work (Osogami et al., 2023).\n\nThe LP studied in Osogami et al. (2023) is a special case of the LP studied in this paper. Specifically, letting $\\theta\\equiv 0$ and $\\rho=0$ in our LP gives the LP in Osogami et al. (2023). Notice that even the LP in Osogami et al. (2023) is unlikely to admit analytical solutions in general, and this is the key technical challenge. Our Lemma 1 gives a sufficient condition that allows us to analytically solve the LP under consideration. Lemma 2 shows that this sufficient condition is necessary whenever types are independent, so our analytical solution is optimal for a wide range of interesting cases in mechanism design.\n\n> The theoretical results' organization is not easy to follow.\n\nWe appreciate your suggestion. We agree that the presentation can be improved. However, some of the intermediate results are of independent interest and are worth being formally stated. For example, Lemma 1 contains essential information that is not contained in Corollary 1."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you very much for your review.\n\n> I would love to hear the author's opinion on the novelty of the BME design in this work.\n\nBME has not been studied as a problem of MAB (i.e. from the perspective of sample complexity), although the related problem of estimating the best mean from a given sample (i.e. bias correction) has been widely studied in machine learning (as we discuss in Line 161). As is acknowledged in your review, a key novelty is in the connection from mechanism design to multi-armed bandits, where we reveal that the sample complexity of BME is a relevant problem. Although we only discuss its relevance to mechanism design, BME is a fundamental problem that may find a wide range of applications, such as those that require estimating the worst-case expected cost (Worst Mean Estimation). For this new problem of BME, we show that a simple approach can achieve the best possible sample complexity. Although the approach is simple, it is nontrivial that this simple approach matches the lower bound (as we discuss in the paragraphs starting at lines 154, 365, and 404). As you suggest in your review, there may be more efficient approaches that improve constant factors, have better instance-dependent complexity, etc., and we expect that our results will provide a basis for such extensions."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you very much for your review.\n\n> [W1] In this regard, in Section 5, the authors are essentially stating the fact \"LP has a solution only when the feasible region of the constraints is non-empty\", which is really trivial.\n\nAlthough it is trivial that every LP has a solution only when the feasible region of the constraints is non-empty, this does not mean that one can analytically derive optimal solutions to all LPs. Our Lemma 1 gives a sufficient condition that allows us to analytically solve the LP under consideration. Lemma 2 shows that this sufficient condition is necessary whenever types are independent, so our analytical solution is optimal for a wide range of interesting cases in mechanism design. Note also that the (special case of) LP under consideration has been solved numerically in the prior work of mechanism design (e.g. Osogami [2023]), which also indicates the nontriviality of our results in Section 5.\n\n> [W2] Similar to the first point, the method described by the authors in Section 6 is essentially just the basic mean estimation of each arm's reward in stochastic MAB.\n\nThe suggested approach of estimating the mean reward of each arm with $O((1/\\varepsilon^2) \\log(1/\\delta))$ samples can only guarantee that the best mean is estimated within error $\\varepsilon$ with probability at least $(1-\\delta)^K$. To provide an $(\\varepsilon,\\delta)$-PAC guarantee, one would need $\\Omega((1/\\varepsilon^2) \\log(K/\\delta))$ samples from each arm, resulting in the suboptimal sample complexity of $\\Omega((K/\\varepsilon^2) \\log(K/\\delta))$. The novelty of our MAB results lies in proving that $O((K/\\varepsilon^2) \\log(1/\\delta))$ samples are sufficient for best mean estimation, and that this is the best possible sample complexity (Theorem 1).\n\n> [W3] However, in Section 3, the authors do not introduce any information regarding MAB.\n\nThe background on MAB is not needed until Section 6, and we are concerned that some of the readers would not remember what has been stated in Section 3 when they read Section 6. However, we would appreciate the reviewer's guidance on what specific information about MAB would be most helpful to include in Section 3.\n\n> [W4] These two statements seem to conflict.\n\nThe two statements do not conflict. As is stated in Section 3, the types ($t_1, t_2, ..., t_N$) are generated from a fixed distribution $P$. For this fixed distribution $P$, we consider a conditional distribution $P(\\cdot\\mid t_n)$ and take i.i.d. samples from $P(\\cdot\\mid t_n)$ in Section 6.\n\n> [Q2] Prior to line 346, the paper does not mention MAB at all. Are the authors assuming that $t_n\\in[K]$ for all $n\\in[N]$ here?\n\nYes. Note that we assume a finite number of players $N$ and a finite size of each type space $K$ (line 185). In Section 6, we state our results on general MAB problems (using the language of MAB), which are then connected to mechanism design at the end of Section 6 (Theorem 2 and Proposition 3). Specifically, for each player $n\\in[N]$, the type space $\\mathcal{T}_n$ corresponds to the set of arms $[K]$."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. See weakness.\n\n2. Prior to line 346, the paper does not mention MAB at all. Are the authors assuming that $ t_n \\in [K] $ for all $n\\in [N]$ here?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well written and the theoretical results appear to be correct.\n2. The paper improves the previous results in Osogami [2023].\n3. The paper presents numerical experiments to show the advantages of its design.\n\n\nOsogami [2023]: Takayuki Osogami, Segev Wasserkrug, and Elisheva S. Shamash. Learning efficient truthful mechanisms for trading networks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the mechanism design problem under a multi-armed bandit framework. The authors analytically derive a class of optimal solutions to the underlying linear program (LP), giving mechanisms that achieve the standard properties of efficiency, incentive compatibility, strong budget balance (SBB), and individual rationality (IR), where SBB and IR are satisfied in expectation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I hold reservations about the contributions in the paper. In Section 3, the authors introduce four properties that the mechanism needs to satisfy: Dominant Strategy Incentive Compatibility (DSIC), Decision Efficiency (DE), $\\theta$-IR, and $\\beta$-WBB/SBB. Such properties should be the key challenges in mechanism design. However, as the authors stated, directly using the VCG mechanism can satisfy the first two properties. Furthermore, regarding the other two properties, they can be represented as two linear constraints of the optimization problem. In this regard, in Section 5, the authors are essentially stating the fact \"LP has a solution only when the feasible region of the constraints is non-empty\", which is really trivial. In summary, I am not convinced that the method proposed in this paper is innovative or makes sense.\n\n2. Similar to the first point, the method described by the authors in Section 6 is essentially just the basic mean estimation of each arm's reward in stochastic MAB. The novelty of the proposed method should be further clarified.\n\n3. The title of the paper is \"Mechanism design with multi-armed bandit\". However, in Section 3, the authors do not introduce any information regarding MAB.\n\n4. In Section 3, the authors assume that the types are generated from a fixed distribution. However, in Section 6, the authors state that the algorithm can access an arbitrarily large sample that is independent and identically distributed (i.i.d.) according to $P(\\cdot|t_n)$ for any $t_n$. These two statements seem to conflict."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- I would love to hear the author's opinion on the novelty of the BME design in this work. I understand that it serves as a tool for the overall mechanism design; thus it is acceptable if the novelty of this part is limited (in that case, I might need to rely on other reviewers to get an assessment for the novelty in mechanism design)."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The automated mechanism design is an interesting problem. While I do not have exact background in this direction, I believe the efforts provided in this work are of relevance and importance to the community.\n\n- The connection from mechanism design to multi-armed bandits is inspiring. With my background in MAB, I largely appreciate such intersection that leverages MAB techniques to faciliate other domains.\n\n- The overall presentation and writing is clear. It has been a smooth reviewing experience for me."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies automated mechanism design. First, a class of optimal solutions is derived that requires an exponentially smaller number of essential variables than the previous version of linear programming. To resolve the computational issue, a connection is drawn towards best mean reward identification in MAB. Then, provably efficient design to perform best mean reward identification is provided, which is further plugged back in the original mechanism design problem."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- As I do not have a strong background in mechanism design, I would leave the further judgement of the significance and novelty of this part to other reviewers.\n\n- For the MAB part, while the connection is interesting, I found the adopted technique is a bit straightforward. In particular, while best mean identification (BMI) and best arm identification (BAI) have their differences (e.g., the example in line 380), the upper bound is obtained in Theorem 1 is from an algorithm that perform BAI first while following up with additional samples to do BMI. I, in general, have doubts that this can be done in a more efficient way."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "### Minor Comments\n\n- Line 170, notation $\\mathcal N=[1,N]$ is confusing; how about $\\{1,2,\\dots,N\\}$?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The numerical simulation section is designed to verify several theoretical results, which are good paper complements."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies how to solve an LP for mechanism design. It first formulates this LP, which can satisfy four conditions, and then illustrates that the solution of this LP enjoys an exponentially smaller variable size. Then, to approximate the solution, the paper proposes to use the MAB algorithm and shows that this approximation is asymptotic optimal. Numerical simulations are also reported."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Unclear contribution. Although the paper provides an approach with computational efficiency, the LP studied in this paper differs from and looks more accessible than the prior work (Osogami et al., 2023). So, it is hard to evaluate this paper's contribution from the aspects of significance and methodology. It would be helpful if the author could discuss the technical challenges they encountered in this paper. \n2. The theoretical results' organization is not easy to follow. This is a theoretical paper, providing a lot of lemmas and corollaries in Sections 5 and 6, where the essential parts are. However, the authors should put more effort into revising the presentations in these two sections. For example, in Section 5, the Lemmas 1 and 2 composes the Corollary 1. Why not directly give Corollary 1 and move Lemmas 1 and 2 to the appendix? This could help the reader quickly understand the meat of this paper.\nAnother example is that Corollaries 3, 4, and 5 are all on different conditions; why not just have one corollary with three bullets? For Section 6, Lemmas 4 and 5 are components to support Theorem 1. Why not use a proof sketch to posit Lemmas 4 and 5 so that readers familiar with these materials can directly skip them?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We analytically derive the optimal solution to a mechanism design problem and evaluate the solution via a bandit algorithm with theoretical guarantee."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mechanism,\ntitle={Mechanism design with multi-armed bandit},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ylhKbwJrjC},\nnote={under review}\n}"
},
"abstract": {
"value": "A popular approach of automated mechanism design is to formulate a linear program (LP) whose solution gives a mechanism with desired properties. We analytically derive a class of optimal solutions for such an LP that gives mechanisms achieving standard properties of efficiency, incentive compatibility, strong budget balance (SBB), and individual rationality (IR), where SBB and IR are satisfied in expectation. Notably, our solutions are represented by an exponentially smaller number of essential variables than the original variables of LP. Our solutions, however, involve a term whose exact evaluation requires solving a certain optimization problem exponentially many times as the number of players grows. We thus evaluate this term by modeling it as the problem of estimating the mean reward of the best arm in multi-armed bandit (MAB), propose a Probably and Approximately Correct estimator, and prove its asymptotic optimality by establishing a lower bound on its sample complexity. This MAB approach reduces the number of times the optimization problem is solved from exponential to linear. Numerical experiments show that the proposed approach finds mechanisms that are guaranteed to achieve desired properties with high probability for environments with up to 128 players, which substantially improves upon the prior work."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"mechanism design",
"incentive compatibility",
"efficiency",
"individual rationality",
"budget balance",
"multi-armed bandit",
"probably approximately correct"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/839b4b38995f2d874fb0800392878cbad62efc75.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9ec9bed47b10aa51e10aadb63b1806ec08e42564.zip"
},
"title": {
"value": "Mechanism design with multi-armed bandit"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ym1dS37mZE | Efficient Multi-modal Large Language Models via Visual Token Grouping | main | Active | Large Language Model;Multi-modal Learning | applications to computer vision, audio, language, and other modalities | 3;5;6 | 4;5;4 | 2;3;3 | 2;2;3 | 2;2;3 | 4.666667 | 4.333333 | 2.666667 | 2.333333 | 2.333333 | 0.188982 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* LLaMA-VID [1] can encode images and videos using only two tokens, whereas the proposed method requires a minimum of 64 tokens and does not support video input. What advantages does the proposed method offer over LLaMA-VID?\n\n[1] Li, Y., Wang, C., & Jia, J. (2025). Llama-vid: An image is worth 2 tokens in large language models. In European Conference on Computer Vision (pp. 323-340). Springer, Cham."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Despite reducing computational demands, the method maintains 98.1% of the original performance, indicating that it is highly efficient without compromising on accuracy.\n* By reducing the number of visual tokens processed by the model, the method is more scalable and flexible than original MLLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To reduce the computational costs associated with MLLMs, the authors propose a method that leverages pre-trained vision encoders to group similar image segments into semantically related concepts without the need for segmentation masks. Besides, the method employs isolated attention to preserve the integrity of original image representations while allowing semantic tokens to aggregate image features into meaningful regions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* My main concern is the novelty of the proposed method. The use of clustering algorithms or q-formers to reduce the number of vision tokens fed into LLMs has been examined in several previous works, including Chat-UniVi [1]. Additionally, the concept of isolated attention is not novel. I recommend that the authors provide a more in-depth analysis of the proposed method to strengthen the paper.\n* To demonstrate the generalizability of the proposed method, I suggest that the authors validate it across a broader range of MLLM architectures or base LLMs.\n\n[1] Jin, P., Takanobu, R., Zhang, W., Cao, X., & Yuan, L. (2024). Chat-univi: Unified visual representation empowers large language models with image and video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 13700-13710)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I don't think there is any necessity for ethics reviews for this paper."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The biggest concern should be the motivation of the two designs, and the current experiments cannot support to prove the effectiveness of them. I think the authors should well figure out these points before claiming the necessity of introducing this method."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The overall design is technically sound, which can be easily implemented. \n2. This paper offers a clear background on why we need vision token compression in vision-language models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces VisToG, a vision token compression method for efficient inference in large vision-language models. The basic idea is to group the similar vision tokens with the token similarity computation, and leverage the vision encoder to initialize the group tokens. The training configurations adopt the LLaVA-v1.5 style. The experiments cover the comparison with previous methods on vision token compression and some ablation studies, where VisToG achieves promising lower inference cost."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The overall pipeline is highly similar with Q-Former based vision token compression methods. After going though the paper, I feel the only differences are two particular designs: \n- The first one is encouraging the vision encoder to initialize the query tokens (i.e., the group tokens in this paper) by learning some tokens to abstract the vision information in patch tokens of the vision encoder; \n- The second one is using Gumbel-SoftMax based operation to calculate the Q-K similarity, resulting in the one-hot selection on vision tokens rather than the weighted sum of vision tokens. Also, it only requires a single grouping layer. \n\nHowever, the motivation of the above two designs remains unclear. I cannot find any ablation results in the paper to validate the effectiveness of these designs. Especially, I am wondering the model performance if using common cross attention rather than Gumbel-SoftMax based vision token selection. To be honest, it is really hard to comprehend the motivation of the second design. \n\n2. As shown in Table 1, the proposed method does not appear fully comparable to the vanilla LLaVA-v1.5 baseline, especially with a notable drop of approximately 5 points on TextVQA. In my experiences, a 5% decline in multi-modal benchmark scores often signals a disproportionately larger impact on a model's multi-modal capabilities. Therefore, it is unconvincing to claim that the model ‘retains xx% performance’ just based on the percentage drop in benchmark scores. \n\n3. And, It is still hard to judge the superiority of VisToG compared with Q-Former. Maybe the authors could conduct a fair comparison on the LLaVA-v1.5 backbone, with the same data and training settings. That will be much better. \n\n4. Please show the full results of table 2 rather than the average results. \n\n5. Not good presentation with some typos like \"8×NVIDIA100-40G\" in Line 314."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Have you tried using a visual token compression method similar to QwenVL on LLava1.5?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The method has a certain level of innovation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method for compressing visual tokens in the multimodal large model by adding additional learnable tokens in the vision transformer. It designs a attention mask of the vision transformer to keep original output of vision transformer and compresses visual tokens into learnable tokens using a method similar to cross attention. This approach compresses the size of the visual tokens while maintaining a certain level of model performance reduction."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "More comparative experiments on visual feature compression under the same experimental settings are needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Multi-modal Large Language Models via Visual Token Grouping},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ym1dS37mZE},\nnote={under review}\n}"
},
"abstract": {
"value": "The development of Multi-modal Large Language Models (MLLMs) has significantly advanced various downstream applications, including visual question answering and image captioning. However, the substantial computational costs associated with processing high-resolution images and videos pose a barrier to their broader adoption. To address this challenge, compressing vision tokens in MLLMs has emerged as a promising approach to reduce inference costs. In this paper, we introduce \\methodname, a novel grouping mechanism that leverages the capabilities of pretrained vision encoders to group similar image segments without the need for segmentation masks. With the isolated attention we adopt, \\methodname can identify and eliminate redundant visual tokens, which effectively reduces computational demands. Extensive experiments demonstrate that the effectiveness of\\methodname , maintains over 98.1% of the original performance while achieving a reduction of over 27% in TFLOPS."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Model",
"Multi-modal Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a50cb35cb716f9a2b69fcb151cedb690e35e2d41.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Efficient Multi-modal Large Language Models via Visual Token Grouping"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ym7pr83XQr | DenoiseVAE: Learning Molecule-Adaptive Noise Distributions for Denoising-based 3D Molecular Pre-training | main | Active | 3D Molecular pre-training via denoising;Molecular property prediction | applications to physical sciences (physics, chemistry, biology, etc.) | 5;5;6;6 | 5;2;4;3 | 2;2;3;4 | 2;2;3;3 | 4;2;3;4 | 5.5 | 3.5 | 2.75 | 2.5 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The authors are encouraged to provide more details concerning the above-mentioned weaknesses. For example,\n1. What is the significance of the PCQM4Mv2 dataset in the pretraining of the NoiseVAE model?\n - How does the dataset help in learning atom-specific noise distributions?\n1. How are the noise distributions generated by DenoiseVAE used in the downstream molecular property prediction tasks?\n - What is the architecture of the property predictor(s) used in the downstream tasks? What type of model is used?\n - How are the noise distributions used in the property prediction tasks, in addition to the input 3D molecular geometry?\n1. What are the correct metrics/criteria for the results in Tables 1 and 2?\n\nAdditionally, the authors may consider addressing the following questions:\n1. The paper provides proof that the DenoiseVAE provides a higher theoretical evidence lower bound (ELBO) guarantee for the real conformation distribution of isoenergetic molecules. Is there a more direct way to show the correctness of the noise distributions generated by DenoiseVAE with the ground truth by first-principles calculations?\n - For example, in Figure 4, the authors showed the generated noise distributions for two molecules. How close are these distributions when benchmarked against first-principles simulations?\n1. How does the DenoiseVAE model help with conformational sampling in quantum chemical simulations?\n - The DenoiseVAE model generates noise distributions that are specific to a 3D molecular geometry (to the best of my understanding). By sampling from these noise distributions, are the sampled conformations more likely to be energetically favorable/stable than randomly sampling from a preset Gaussian distribution?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "### Originality\n*My expertise in related literature is limited.*\n1. The NoiseVAE model leverages a VAE for a more robust atom-specific and physics-related generation of noise distribution.\n\n### Quality\n1. The experiments/results are very comprehensive.\n1. The shown results on the downstream tasks suggest the robustness of the generated noise in providing quantum chemical information for accurately predicting molecular properties.\n\n### Clarity\n1. The pretraining objective and method for NoiseVAE are well introduced with details.\n\n### Significance\n1. Exploring the conformational space efficiently is of significance in quantum chemical simulations. NoiseVAE provides a way to efficiently sample conformations around a given geometry."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces NoiseVAE, a novel molecular pre-training method, for learning atom-specific noise distributions in different molecules. The variational autoencoder (VAE) based model consists of a Noise Generator and a Denoising Module that are jointly trained. The pretraining objective is to learn atom-specific noise distributions that align better with quantum mechanical theories than empirically derived strategies. The results show that the pretraining DenoiseVAE can generate noises specific to atomistic environments and improve the prediction accuracy on molecular properties such as the HOMO-LUMO gap and potential energy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I appreciate the amount of information and results presented in the paper, I find the experiments and results a bit difficult to follow.\n\n- The authors showed proof that the proposed DenoiseVAE provides a higher theoretical evidence lower bound (ELBO) guarantee for the real conformation distribution of isoenergetic molecules.\n\n- Subsequently, to show the effectiveness of the DenoiseVAE, the authors conducted downstream tasks with the pre-trained DenoiseVAE for predicting molecular properties.\n\n- However, the authors did not provide a clear explanation of how DenoiseVAE was used in the downstream molecular property prediction tasks. To the best of my understanding, the DenoiseVAE was used to generate noise distributions for the input molecules. However, how were these noise distributions leveraged was not clear to me.\n\n- The tables with results are also a bit misleading. For example, tables 1 and 2 mentioned \"force prediction\", but the 12 properties in Table 1 were not force-related and the values in Table 2 seemed more energy-related than force-related.\n\n- In addition, the paper lacks details on the training process of the NoiseVAE model. Specifically, the pretraining is performed with the PCQM4Mv2 dataset, but the paper only briefly mentions that the dataset has 3.4 million organic molecules."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Major:\nIt’s a bit unclear how the downstream predictive tasks are performed after pretraining. Specifically:\n\n(1) Is the force prediction achieved by a task-specific prediction head? If so, what is its architecture and how is it trained (e.g. end-to-end fine tuning)?\n\n(2) What are the features used for the prediction (e.g. pooled EGNN outputs or latent embedding)?\n\nMinor:\n(1) Following the previous questions, as the model is considered a VAE, how is the latent space obtained?\n\n(2) Is it possible to visualize the learned energy landscape (e.g. for MD17 samples)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. This work addresses an important problem in molecular representation learning. The learnable and molecule-specific per-atom noise distribution is a reasonable setup and aligns better with the physical intuitions.\n2. The authors provide comprehensive theoretical analysis of the rationale of their method.\n3. The proposed method shows consistent improvement over the baselines in the majority of the tasks, while also being more parameter efficient.\n4. The method has good interpretability and the learned noise patterns are physically relevant."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The This work proposes a 3D molecule pretraining method with learnable atom-specific noise generation, as opposed to the common practice of hand-crafted and shared noise. The pre-trained representation shows higher performance than baselines in most of the downstream energy prediction tasks. Furthermore, the authors show the learned noise intensities correspond with the atomic reactivity and rigidity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The results are overall solid. However, some additional experimental and implementation details about the downstream tasks should be provided to help better understand and assess the results. See Questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How can different formulations of noise distributions affect the result? Currently, the noises are Gaussians with diagonal covariance, what if the covariance is non-zero at off-diagonal positions? E.g., the neighboring atoms connected by a bond will have non-zero covariance.\n\n2. How well will the model perform if the training and testing are on molecules of different sizes? Say the model is trained on small molecules but is tested for large molecular structures such as proteins."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The use of adaptive noise generation is novel. Previous works use hand-crafted or uniform noise across molecules, but DenoiseVAE uses an automated, atom-specific noise generation method with stochastic parameterization, which respects the unique structural and chemical properties of each molecule.\n\n2. The paper proposes theoretical derivations based on PES which aligns with quantum chemistry principles. The derivation for Theorem 1 based on ELBO offers theoretical insights into the effectiveness of the proposed method.\n\n3. Experimental results are promising and convincing: the proposed method achieved state-of-the-art performance across all benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents DenoiseVAE for 3D molecular pre-training that adapts to the anisotropic nature of molecular force fields. The authors proposed Noise Generator, which learns molecule-specific noise distributions for each atom, allowing DenoiseVAE to generate conformations that align closely with a molecule's potential energy surface. The network architecture is a variational autoencoder, where noisy molecular conformations are denoised using a denoising module. The authors claim that this method leads to more accurate representations for downstream molecular tasks. Experiments on molecular property prediction tasks including QM9 and MD17 are conducted to demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The use of DenoiseVAE introduces additional computational complexity. The training of the VAE with molecule-specific noise sampling could be computationally expensive or impossible for large datasets or complicated molecules. A more thorough analysis of training time and resource requirements could be more helpful.\n\n2. It is unclear to me how the proposed DenoiseVAE can be adapted across different dataset scales. It would be great if the authors could offer a series of studies on the performance of DenoiseVAE on different sizes of datasets.\n\n3. I can imagine the training of DenoiseVAE can be very unstable and will be sensitive to initialization and optimization settings. It would be great if the authors could show convergence of loss curves across all datasets."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "<Question>\n\n - The prior distribution constrains the atom coordinate distribution to follow a predefined Gaussian form. But what is the range of these coordinate distributions? Are they confined within a narrow range close to the prior distribution, or are they spread across a finite range? If so, what is that range?\n\n - The article addresses 3D coordinate noise by adding Gaussian noise based on a learnable distribution, but what about rotations?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "<Strengths>\n\n - DenoiseVAE employs a learnable noise generation strategy that allows for adaptive, atom-specific noise distributions across different molecules.\n\n - The authors introduce a variational approach to jointly optimize the Denoising Module and Noise Generator of DenoiseVAE. Here, the denoising objective encourages generated noise to adhere more closely to physical constraints.\n\n - A KL divergence-based regularization term is applied to prevent low noise intensity and increase the diversity of sampled conformations.\n\n - Optimizing the pre-training objective is proven to be equivalent to maximizing the evidence lower bound (ELBO) of the log-likelihood.\n\n - DenoiseVAE outperforms existing denoising methods across various datasets for both molecular and complex property predictions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "<Summary>\n\n - The authors propose a denoising-based 3D molecular pre-training method, called DenoiseVAE, which employs a learnable noise generation strategy instead of existing hand-crafted strategies. This allows for adaptive, atom-specific noise distributions across different molecules.\n\n - The authors introduce a variational approach to jointly optimize the Denoising Module and Noise Generator of DenoiseVAE. Here, the denoising objective encourages generated noise to adhere more closely to physical constraints, enabling the recovery of equilibrium conformations. Additionally, a KL divergence-based regularization term is applied to prevent low noise intensity and increase the diversity of sampled conformations.\n\n - Theoretically, the authors demonstrate that optimizing their pre-training objective is equivalent to maximizing the evidence lower bound (ELBO) of the log-likelihood.\n\n - Extensive experiments reveal that DenoiseVAE outperforms existing denoising methods across various datasets for both molecular and complex property predictions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "<Limitations>\n\n - The Boltzmann distribution is commonly used in the classical regime. However, for precise coordinate computation of interacting atoms, a quantum approach is necessary. This involves, for example, treating the energy function as a quantized operator and finding energy eigenvalues for a given basis. However, this article does not carefully consider such approaches.\n\n - This approach essentially adopts a force field method, which may be inadequate for precise atomic-level computations.\n\n - Additionally, the energy is modeled as a simple harmonic potential, lacking any exchange correlation or other accurate potential forms."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel denoising method for 3D molecular pre-training."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024denoisevae,\ntitle={Denoise{VAE}: Learning Molecule-Adaptive Noise Distributions for Denoising-based 3D Molecular Pre-training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ym7pr83XQr},\nnote={under review}\n}"
},
"abstract": {
"value": "Denoising learning of 3D molecules learns molecular representations by imposing noises into the equilibrium conformation and predicting the added noises to recover the equilibrium conformation, which essentially captures the information of molecular force fields. Due to the specificity of Potential Energy Surfaces, the probabilities of physically reasonable noises for each atom in different molecules are different. However, existing methods apply the shared heuristic hand-crafted noise sampling strategy to all molecules, resulting in inaccurate force field learning. In this paper, we propose a novel 3D molecular pre-training method, namely DenoiseVAE, which employs a Noise Generator to acquire atom-specific noise distributions for different molecules. It utilizes the stochastic reparameterization technique to sample noisy conformations from the generated distributions, which are inputted into a Denoising Module for denoising. The Noise Generator and the Denoising Module are jointly learned in a manner conforming with the paradigm of Variational Auto Encoder. Consequently, the sampled noisy conformations can be more diverse, adaptive, and informative, and thus DenoiseVAE can learn representations that better reveal the molecular force fields. Extensive experiments show that DenoiseVAE outperforms the current state-of-the-art methods on various molecular property prediction tasks, demonstrating its effectiveness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"3D Molecular pre-training via denoising",
"Molecular property prediction"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f25583612c43755ed41b08af71bc6064f585b2d5.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "DenoiseVAE: Learning Molecule-Adaptive Noise Distributions for Denoising-based 3D Molecular Pre-training"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ymqLAmqYHW | K&L: Penetrating Backdoor Defense with Key and Locks | main | Active | backdoor attack;backdoor defense;AI security | alignment, fairness, safety, privacy, and societal considerations | 1;5;5;6 | 5;4;5;3 | 1;3;3;2 | 1;3;2;2 | 1;2;3;2 | 4.25 | 4.25 | 2.25 | 2 | 2 | -0.667308 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please check my previous comments."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The topic of backdoor attacks and defenses is important.\n- The writing in the paper is easy to follow.\n- The experimental setups are comprehensive, and the results look good with appropriate replications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of backdoor attacks and defenses. The authors begin by listing three requirements that a successful backdoor attack should satisfy. Next, they point out the 'high binding' effects and propose a new attack algorithm named Key-Locks to generate backdoor data. Experiments are conducted to evaluate the effectiveness of the proposed algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have two major concerns:\n\n1. First, I suspect that there are already previous theoretical results [1, 2] that can explain your proposed \"high binding\" phenomena and why your method works well. Considering that the literature on backdoors in computer vision is well developed, I think all the points listed in your paper have been somehow mentioned (although in different forms) in previous work; however, there seems to be a lack of discussion on this.\n\n2. Second, more defenses are needed. I think your proposed method can potentially be defended against by detection-based algorithms (originally designed for OOD detection). I would like to see how your proposed methods perform under the two more recent strong defenses [3, 4].\n\nRefs:\n[1] Manoj et al., \"Excess Capacity and Backdoor Poisoning\"\n[2] Wang et al., \"Demystifying Poisoning Backdoor Attacks from a Statistical Perspective\"\n[3] Guo et al., \"SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency\"\n[4] Xian et al., \"A Unified Detection Framework for Inference-Stage Backdoor Defenses\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "This paper introduces an interesting backdoor attack mechanism using keys and locks. The authors highlight that the vulnerability of existing backdoors stems from their strong binding to model parameters, making them susceptible to defenses based on normal model updates. To enhance robustness, the paper proposes a new backdoor attack. The core technique is similar to adversarial attacks, where a keyhole is generated and embedded using gradient descent towards the target class. During inference, a trigger (the key) is generated using a similar algorithm. The paper includes extensive evaluations, comprising various analyses and ablation studies, to demonstrate the effectiveness of the proposed attack, which outperforms existing methods.\n\nHowever, I have several concerns regarding the design and evaluation:\n\n(1) The attack appears to focus on reducing the class distance between the target class and other classes, as indicated by the poisoning loss in Equations (3) and (4). Without a fixed backdoor generation function, the model learns perturbations from any class towards the target class in each iteration. Once convergence is achieved, generating an effective trigger using a similar function, as described in Equations (5) and (6), becomes straightforward. While this is intriguing, it raises certain issues.\n\nFirst, the necessity of the poisoning process is questionable. Why not directly generate natural backdoor samples, e.g., [1]? This would simplify the process while maintaining a reasonable ASR.\nSecond, the corrupted target class might be easily detectable using existing trigger inversion methods, such as [2] and [3], since generating adversarial perturbations to flip the label to the target class becomes easier than for other classes.\n\n(2) The paper lacks sufficient support to validate its hypotheses, particularly those in Lines 211-215. Recent work [4] models backdoor learning as a special case of continual learning, identifying the orthogonal nature between backdoor behaviors and normal classification. There appears to be a connection between this claim and the authors' hypothesis, which should be explored and discussed.\n\n(3) The baseline attacks and defenses used in the paper are not actually up-to-date. For instance, state-of-the-art attacks such as [5,6,7,8] demonstrate robustness against various defense methods. It is important to clarify how the proposed attack differs from these approaches. Additionally, FST [9], a recent defense effective against a wide range of attacks, should be included in the comparison.\n\n(4) The triggers illustrated in Figures 19-30 appear misaligned with the original images, even if the L-2 distance metrics are favorable. These measurements may not align with human perception. Moreover, adversarial perturbations might be challenging to implement practically, requiring careful tuning of pixel values. They may also be vulnerable to input purification techniques, such as those used in [10], where diffusion models are leveraged to reconstruct inputs and reduce trigger effectiveness. It is suggested to provide a discussion about this issue.\n\n(5) The paper's structure could be improved. Several important details are relegated to the appendix without proper referencing in the main text. For example, Line 237 mentions Algorithm 1 without a clear link to or description of it. Properly integrating and referencing content from the appendix would enhance the paper's clarity and flow.\n\n--------------------------\nReference:\n\n[1] Tao, Guanhong, et al. \"Backdoor vulnerabilities in normally trained deep learning models.\" arXiv preprint 2022.\n\n[2] Wang, Bolun, et al. \"Neural cleanse: Identifying and mitigating backdoor attacks in neural networks.\" IEEE S&P 2019.\n\n[3] Wang, Zhenting, et al. \"Unicorn: A unified backdoor trigger inversion framework.\" ICLR 2023.\n\n[4] Zhang, Kaiyuan, et al. \"Exploring the Orthogonality and Linearity of Backdoor Attacks.\" IEEE S&P 2024.\n\n[5] Zeng, Yi, et al. \"Narcissus: A practical clean-label backdoor attack with limited information.\" CCS 2023.\n\n[6] Qi, Xiangyu, et al. \"Revisiting the assumption of latent separability for backdoor defenses.\" ICLR 2023.\n\n[7] Cheng, Siyuan, et al. \"Lotus: Evasive and resilient backdoor attacks through sub-partitioning.\" CVPR 2024.\n\n[8] Huynh, Tran, et al. \"COMBAT: Alternated Training for Effective Clean-Label Backdoor Attacks.\" AAAI 2024.\n\n[9] Min, Rui, et al. \"Towards stable backdoor purification through feature shift tuning.\" NeurIPS 2024.\n\n[10] Shi, Yucheng, et al. \"Black-box backdoor defense via zero-shot image purification.\" NeurIPS 2023."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Proposes a novel and effective backdoor attack strategy.\n\n2. Provides in-depth analysis of the backdoor properties contributing to vulnerability against defenses.\n\n3. Includes extensive evaluations across various models, datasets, and ablation studies."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel backdoor attack mechanism based on keys and locks. The primary concept is to reduce the high binding of backdoors to model parameters, thereby enhancing robustness against defenses. Extensive experiments demonstrate the effectiveness of this approach, showing its superiority over existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Insufficient support for design choices and key hypotheses.\n\n2. Potential vulnerability to detection by existing trigger inversion methods.\n\n3. Lack of the latest baseline attacks and defenses in the evaluation.\n\n4. Concerns regarding the practicality of the trigger pattern.\n\n5. Some writing issues."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Why was clipping excluded during training? Has the potential impact of significant perturbations during the embedding locks of K&L been evaluated against in-training defenses such as Anti-Backdoor Learning?\n2. Could you clarify how K&L differs from adversarial training?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ Analysis of the concept of high binding between backdoor triggers and model parameters.\n+ Extensive experiments using various datasets and neural network architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces the Key-Locks (K&L) backdoor attack algorithm, designed to bypass existing defenses. Unlike traditional backdoor attacks that show high binding with model parameters, making them vulnerable to defenses, the K&L algorithm decouples the backdoor process from model parameters. This decoupling allows the method to adjust the unlocking levels of backdoor activation, enhancing its robustness against a diverse set of defenses. The paper also introduces the Accuracy-ASR Curve (AAC) as a new metric for evaluating backdoor attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- First of all, the paper requires improvements in editorial quality. There are instances where terms more characteristic of language models are used, such as \"assaults\" instead of the more appropriate \"attacks.\" A thorough human proofreading is recommended to ensure precise usage of terminologies and enhance overall clarity. In addition, the paper excessively relies on the appendix, making it difficult to follow without constant cross-referencing.\n\n- In Algorithm 1 for embedding locks, the backdoor samples are iteratively modified at each training epoch, where the learning rate $\\eta$ is added to or subtracted from the backdoor inputs based on the gradient's sign in the direction of the backdoor target. Unlike the inference stage, clipping is not applied during training. This lack of clipping over multiple iterations could result in less stealthy and significantly perturbed images. Such perturbations may potentially lead to distinguishable loss patterns compared to benign inputs. Such characteristics could be susceptible to in-training defenses, such as Anti-Backdoor Learning, which, notably, has not been considered in the evaluation.\n\n- The paper attempts to distinguish K&L from adversarial attacks in Section 3.4. The K&L algorithm generates adversarial perturbations similar to the FGSM method using gradient signs and incorporates these examples during training. The claim is that this approach reduces the binding between the backdoor trigger and model parameters. However, as K&L uses adversarial perturbations as backdoor triggers, this approach shares similarities with adversarial training, which enhances the robustness of models against adversarial attacks. A more precise explanation would help to understand why this method reduces robustness to adversarial perturbation instead of improving it.\n\n- The paper does not fully discuss the rationale behind introducing the new metric, the Accuracy-ASR Curve (AAC), alongside existing backdoor evaluation metrics. It would be beneficial to elaborate on AAC's added value to backdoor attack evaluations. Additionally, the meanings of AAC1, AAC3, and AAC5 are not clear without a proper definition of AAC, making it difficult to understand their relevance in the evaluation. Also, the definition of AAV is not sufficiently explained.\n\n- The backdoor sample similarity rates presented in Table 2 could be potentially misleading, given that they are based on only two input examples, as illustrated in Figure 2. To ensure a robust and reliable evaluation, it would be more appropriate to compute the similarity attribution using multiple images across multiple runs to achieve statistical significance.\n\n- All the results presented in the paper must be evaluated across multiple runs to ensure the statistical significance of the reported values."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "NA"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Large set of experiments: The authors re-implement eight defense methods and six backdoor attacks across four datasets. Through these experiments, they demonstrate that the proposed approach can outperform the six existing attacks in bypassing defenses."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an adaptive attack designed to bypass existing backdoor defenses. However, the paper is underprepared, and several concepts are not well explained."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tPresentation needs improvement: Key concepts such as \"high binding,\" \"key,\" and \"lock\" are not explained clearly. For instance, what does \"high binding\" represent? On line 75, the authors state that it refers to the tight coupling between a backdoor and a specific trigger. However, on line 181, it appears to mean something different, namely, the binding between the backdoor and model parameters.\n2.\tUnclear limitations of existing backdoor attacks: It is not clear why current defenses are successful against these attacks. Since \"high binding\" is ambiguous, it is difficult to understand why existing attacks fail. Additionally, according to Table 1, some existing attacks can also bypass these defenses. The authors should clarify why attacks sometimes succeed and other times fail in bypassing defenses.\n3.\tDistinction from adversarial examples: In the KL backdoor attack, the poisoned test sample is generated via gradient descent, similar to adversarial examples. The authors should explain how their approach differs from adversarial examples. Moreover, I suggest that the authors conduct further experiments to distinguish the effects of backdoor attacks from those of adversarial examples. For example, when the poisoning ratio is zero, they could apply the KL attack during testing to check the ASR (Attack Success Rate). If the ASR is not too low, this would suggest that adversarial examples, rather than backdoor attacks (which rely on poisoning training data), are the primary factor leading to successful backdoor injection.\n4.\tSummary of backdoor attack requirements: Summarizing the three requirements for a backdoor attack should not be considered a primary contribution, as many other papers have already discussed this."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024kl,\ntitle={K\\&L: Penetrating Backdoor Defense with Key and Locks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ymqLAmqYHW},\nnote={under review}\n}"
},
"abstract": {
"value": "Backdoor attacks in machine learning create hidden vulnerabilities by manipulating the model behaviour with specific triggers. Such attacks often remain unnoticed as the model operates as expected for normal input. Thus, it is imperative to understand the intricate mechanism of backdoor attacks. To address this challenge, in this work, we introduce three key requirements that a backdoor attack must meet. Moreover, we note that current backdoor attack algorithms, whether employing fixed or input-dependent triggers, exhibit a high binding with model parameters, rendering them easier to defend against. To tackle this issue, we propose the Key-Locks algorithm, which separates the backdoor attack process into embedding locks and employing a key for unlocking. This method enables the adjustment of unlocking levels to counteract diverse defense mechanisms. Extensive experiments are conducted to evaluate the effectiveness of our proposed algorithm. Our code is available at: https://anonymous.4open.science/r/KeyLocks-FD85"
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"backdoor attack",
"backdoor defense",
"AI security"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1fbd959d796ccf58481f79b45530f9b428f82437.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "K&L: Penetrating Backdoor Defense with Key and Locks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ymt4crbbXh | AutoBencher: Towards Declarative Benchmark Construction | main | Active | automatic evaluation;language models | foundation or frontier models, including LLMs | 3;5;6;8 | 2;5;3;3 | 2;3;3;4 | 2;3;3;4 | 3;3;4;4 | 5.5 | 3.25 | 3 | 3 | 3.5 | 0.190885 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see Weakesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The problem of automatic benchmark generation in a guided manner is an important one. While LMs have been used as judges to automatically evaluate other LM's answers, this work proposes using LMs to also generate questions.\n\n2. The problem is formalized and packaged in an elegant and extensible way in the AutoBencher framework, and two important instances of the framework are studied. The two-step division (first generate topics and then generate datasets per topic) is especially novel and effective.\n\n3. The proposed approach has two primary novel aspects: the use of a tool (such as a Wikipedia knowledge database or a Python library for mathematical calculations) to avoid the evaluating LM from generating answers that could be incorrect, and an algorithm to generate the benchmark in a guided rather than brute-force manner.\n\n4. The empirical evaluation shows the promise of the proposed approach in generating datasets that are more novel and difficult than even hand-crafted ones like MMLU."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents AutoBencher, a declarative framework for automatic benchmark construction, and use it to scalably discover novel insights and vulnerabilities of existing language models. Specifically, it is instantiated in two scenarios: 1) capability evaluation and 2) safety evaluation. In each scenario, a set of desirable characteristics is first formally defined. For capability evaluation, these are salience, difficulty, separability, and novelty. For safety evaluation, they are harmfulness and attack success rate. Then, a language model is used to automatically construct descriptions of topics along with datasets in those topics, where a dataset is a set of (question, answer) pairs. Empirical results using GPT-4 as the evaluator show that the created datasets are on average 27% more novel and 22% more difficult than existing benchmarks. AutoBencher also helps identify specific gaps not captured by existing benchmarks: e.g., Gemini-Pro has knowledge gaps on Permian Extinction and Fordism while GPT-4o fails to decline harmful requests about cryptocurrency scams."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. AutoBencher is currently stand-alone. The paper would be stronger if it integrated AutoBencher into existing popular evaluation frameworks like Stanford's HELM or HuggingFace's Open LLM. Adoption of AutoBencher by one of these frameworks would make a more convincing case for its usefulness and viability.\n\n2. There seems to be a mismatch between the capabilities of the evaluating LM and the evaluated LMs: the former has access to tools whereas the latter does not. The paper does not make a convincing case why future LMs should be expected to have such capabilities without using tools themselves.\n\n3. The evaluation is rather limited. I would expect a benchmark/evaluation focussed paper to be more comprehensive and derive more insights than those currently presented. For instance, AutoBencher currently only generates one-turn (question, answer) pairs; it would be interesting to see it extend to multi-turn data, chain-of-thought data, etc. Another direction to extend it could be in the domain of multi-modality.\n\n4. I found the safety evaluation less convincing than the capability evaluation. While the paper does use recent baselines such as XSTest and HarmBench, it would be more convincing if the paper would report on how AutoBencher could be integrated into a mainstream framework for safety evaluation and discuss challenges that were overcome in such an integration (this is related to item 1 above but is more specific to safety evaluation)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Section 3.1 on separability, are you taking the mean over $acc(\\cdot) \\in V_c$? If so, this should be clarified. Additionally, I am not convinced that high deviation alone ensures that rankings between any two models, $LM_i$ and $LM_j$, with $i \\ne j$, are robust to noise.\n- Using vector notation could help make the equations clearer.\n- In Section 3.1 on novelty, it seems that there is a coefficient for each model being evaluated. Could you elaborate on why there are coefficients for each model rather than for each dataset?\n- How are the salience and harmfulness binary variables obtained in practice?\n- I am unclear about the procedure for qualitative assessment in Section 6.3. What does \"high quality\" mean, and how many question-answer pairs were sampled? What do the \"ranks\" in this section refer to?\n- What language model was used to generate Table 1? Adding this information to the caption would improve clarity.\n- Are there any ablation experiments on the following: the language model used, the criteria for evaluating dataset descriptions, and the impact of including privileged information?\n- Are the human benchmarks also grouped into distinctive categories and descriptions similar to those proposed by the language model?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper conducts experiments across a wide range of knowledge domains.\n- The presentation is generally clear, with well-structured sections and a logical flow.\n- Equations and figures are used effectively to enhance clarity.\n- The automatic generation of benchmarks is a topic of strong interest to the community.\n- The proposed method demonstrates significant performance gains over human-generated benchmarks, particularly in terms of novelty, difficulty, and separability metrics as defined in the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a method, Autobencher, to automatically construct benchmarks for evaluating language model (LM) capabilities and safety. Specifically, based on predefined criteria, the authors prompt an LM to propose dataset descriptions and generate questions, which a candidate LM then attempts to answer. The dataset is subsequently scored according to these criteria, enabling the selection of high-quality dataset descriptions. The authors aim to demonstrate that Autobencher generates dataset topics/descriptions that are both more novel and more challenging than existing benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It is not entirely clear to me what methodological novelties the paper introduces in dataset construction, aside from its use of retrieval during generation. Optimizing for various criteria objectives is not a particularly significant contribution.\n- The introduction and abstract could benefit from revisions to be more specific and descriptive.\n- The method and criteria rely on accuracy scores computed across both existing models and preexisting datasets, which can be computationally intensive, especially if generating multiple datasets across a large set of models.\n- Additionally, language models (LMs) can exhibit high variance across different runs. A robust scoring criterion should account for this issue by incorporating variance between different runs, both for generating questions and answers. Introducing a statistical method to handle this variance would strengthen the contribution.\n- Ensuring that the dataset description is clear and unharmful does not guarantee that all generated questions are equally clear and unharmful. For instance, a dataset labeled as \"Nuclear technology\" might contain questions of varying levels of harmfulness.\n- There are concerns regarding the practicality of the method. While the paper claims high automation, many of the criteria still seem to require potentially time-intensive manual crafting. Furthermore, LM generations can be compute-intensive; a discussion on the computational resources required for these generations would be beneficial.\n- Additional ablation studies could help demonstrate the impact of various components of the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "There are ethical issues with the topic under study as it enables attack vectors on LLMs; however, the authors take due diligence to properly discuss their study process and acknowledge the broader impact."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. To the knowledge of this reviewer, it should be possible to have a hyper-/hyponymy graph of Wikipedia topics. Did you consider using such a graph, or a similar topic relationship graph to further guide the adaptive search?\n2. The optimisation problem for case (1) is multi-objective, is there a reason why multi-objective optimisation was not employed and instead the loss is linearised by adding hyperparameters? (evolutionary/population-based methods?)\n3. Related to W1, how is the math/science salience set constructed? Unless I missed it in the paper, the inclusion-exclusion criteria were not clear or indeed the exact methodology. Did you perform, for example, open card sorting or a similar methodology?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "+ On novelty, the approach generates human interpretable topics where model ranks exhibit surprising results.\n+ On safety, the qualitative examples align with the experience of this reviewer when trying to manually jail-break LLMs: pose the question as a hypothetical or philosophical debate; this vector being auto-discovered is encouraging.\n+ Well-executed research methodology with manual validation of the discovered benchmarks"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose to use an LM-guided adaptive search to construct benchmarks that optimise (1) difficulty (benchmark headroom), separability (existing model variance), and novelty (rank correlation with other benchmarks) and (2) attack success rate (when trying to extract harmful information from models). They demonstrate the need for this by showing that existing benchmarks exhibit low values in novelty (especially) even for benchmarks such as MMLU where there is sufficient headroom. Further, they use privileged information to enable a weaker LM to assess a stronger LM, use translation tools to enable multi-lingual data and employ source code/python to evaluate numerical and symbolic expressions. When optimising, the authors employ an adaptive search strategy that uses the history of explored subjects to guide the new candidates restricted to a salience set. To assess the quality of the generated data, the authors employ Mechanical Turk and find sufficiently low error rates with high salience on the questions. The value of adaptive search is demonstrated via an ablation on the use of the history vector. Both regimes demonstrate improvement over HumanEval on the desired metrics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The approach to the math and science categories suggests an open vocabulary problem that is not clearly tackled. For other categories, this is tackled via Wikipedia and a popularity metric.\n- The translation induces an issue for low-resource language: such languages are both less likely to be tackled by LLMs and by translation tools, creating a catch-22. (I feel it would be sufficient to acknowledge the issue as a limitation since tackling the issue requires significant manual effort, future work can consider specialised models for specific language pairs.)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Discrimination / bias / fairness concerns"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1- How does AutoBencher differ from previous work on automatic benchmarking, such as that in [2]?\n\n2- Why is a comparison with the baseline included?\n\n\n\n\n\n[1] Taskbench: Benchmarking large language models for task automation."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1-The paper is well-written and very easy to understand.\n\n2- The paper provides an in-depth empirical evaluation, uncovering vulnerabilities and assessing model performance. It also includes results from human evaluations to further validate the quality and relevance of the generated benchmark.\n\n3- AutoBencher automatically constructs datasets, making it highly scalable and reducing reliance on costly human labor. This is useful for evaluating LLMs across various domains."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces AutoBencher, a declarative framework that automatically generates benchmarks to evaluate language model performance by defining specific desiderata, such as difficulty, novelty, and safety. The proposed approach, AutoBencher, leverages language models to iteratively construct benchmarks that reveal weaknesses and safety vulnerabilities in LLMs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1- The details of the qualitative analysis are missing.\n\n2- AutoBencher uses GPT-4 as the evaluator, which may introduce potential bias in datasets that could favor LLMs from the same family (e.g., OpenAI's LLMs)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present AutoBencher, a declarative approach to constructing new datasets, revealing model weakness and safety vulnerabilities."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024autobencher,\ntitle={AutoBencher: Towards Declarative Benchmark Construction},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ymt4crbbXh},\nnote={under review}\n}"
},
"abstract": {
"value": "We present AutoBencher, a declarative framework for automatic benchmark construction, and use it to scalably discover novel insights and vulnerabilities of existing language models. Concretely, given a few desiderata of benchmarks (e.g., question difficulty, topic salience), we operationalize each desideratum and cast benchmark creation as an optimization problem. Specifically, we experiment with two settings with different optimization objectives: (i) for capability evaluation, we declare the goal of finding a salient, difficult dataset that induces novel performance patterns; (ii) for safety evaluation, we declare the goal of finding a dataset of unsafe prompts that existing LMs fail to decline. To tackle this type of optimization problem, we propose to use a language model to automatically construct datasets and iteratively revise the dataset to optimize for the declared desiderata. We use AutoBencher (powered by GPT-4) to create datasets for math, multilinguality, knowledge, and safety. The scalability of AutoBencher allows it to test fine-grained categories and tail knowledge, creating datasets that are on average 27% more novel and 22% more difficult than existing benchmarks. AutoBencher also helps identify specific gaps not captured by existing benchmarks: e.g., Gemini-Pro has knowledge gaps on Permian Extinction and Fordism while GPT-4o fails to decline harmful requests about cryptocurrency scams."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"automatic evaluation",
"language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2b55ac880bfa78148082c3e5dccd7a61a651bcf8.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "AutoBencher: Towards Declarative Benchmark Construction"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yougZBoUY3 | Attacking Audio Language Models with Best-of-N Jailbreaking | main | Active | adversarial robustness;jailbreaks;audio language model;speech language model;multimodal;adversarial attack;audio jailbreak;safety;trustworthy;robustness | alignment, fairness, safety, privacy, and societal considerations | 3;3;5 | 4;4;4 | 3;1;3 | 2;1;3 | 2;1;4 | 3.666667 | 4 | 2.333333 | 2 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- How might the ASR be affected if the safety filter remained active? Including this would provide a clearer picture of BoN’s practical effectiveness and real-world implications. Would the authors consider conducting additional experiments with the safety filter enabled to better align the study with real-world conditions?\n\n- Audio quality after attack modifications is an important factor for stealthy and practical attacks. Could the authors provide details on the perceptual quality of the modified audio samples, perhaps using a metric like the ViSQOL score?\n\n- The term \"ASR\" is commonly used to denote \"Automatic Speech Recognition\" in the speech research community. Could the authors consider using an alternative term?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Proposed Best-of-N (BoN) Jailbreaking is novel, specifically tailored for attacking ALMs. BoN is unique in its application of audio augmentations to create high-entropy inputs, exploiting the ALM's sensitivity to continuous input variations. The combination of BoN with other jailbreak methods, such as the PrePAIR technique, showcases an innovative blend of audio augmentations and iterative refinement for more effective attacks.\n\n- The discovery and application of power-law scaling to predict ASR with increased samples indicate a high-quality analysis, providing valuable insights into the scalability and potential impact of the proposed BoN method.\n\n- The paper is well-structured and effectively explains complex ideas, making the novel BoN method accessible. Visualizations such as graphs that show the ASR progression with sample size and the power-law behavior, enhance clarity by illustrating key results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces \"Best-of-N (BoN) Jailbreaking,\" a black-box algorithm designed to exploit weaknesses in Audio Language Models (ALMs) by extracting harmful information through audio-based attacks. The BoN method uses repeated audio augmentations, such as changes in pitch, speed, and background noise, to malicious prompts until it elicits harmful responses from the target ALM. The study shows that BoN achieves a high attack success rate (ASR), with results exceeding 60% ASR on several top ALMs, including a preview version of GPT-4o’s voice mode. Additionally, the authors discover power laws that allow them to predict ASR based on the number of samples, helping forecast the efficiency of BoN jailbreaking. The approach becomes even more effective when combined with other jailbreak techniques, reaching up to 98% ASR in some models. This paper highlights the difficulty of securing ALMs due to their sensitivity to continuous input variations, proposing BoN as a scalable and effective method for targeting ALM vulnerabilities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- A significant limitation of this study is the decision to turn off Gemini’s safety filter. In real-world applications, LLMs and ALMs rely on both alignment techniques and safety filters for safeguarding against misuse. By disabling these filters, the study's attack success rate may be artificially inflated, making the findings less practical and relevant in environments where safety filters are essential. Including experiments with the safety filter enabled would provide a more realistic assessment of the BoN method's effectiveness and its relevance to real-world deployments.\n\n- The study does not evaluate the audio quality of the modified samples post-attack, which is an important aspect for assessing the stealthiness of these attacks. A low audio quality in the altered samples could make the attacks easily detectable or unnatural. A quality metric, such as the ViSQOL score, would allow for a quantitative comparison between the original and post-attack audio samples. Without such an analysis, it is unclear if the BoN attacks are feasible in scenarios where high-quality audio is essential for covert operations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See above"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "This paper successfully introduces a new black-box jailbreaking algorithm called Best-of-N (BoN) that effectively extracts harmful information from Audio Language Models (ALMs) by exploiting their sensitivity to variations in audio input. The audio domain is in general underexplored so I appreciate this effort.\n\nThe paper looks into different aspects such as combinations with other attacks. In addition, the paper provides detailed insights into the workings of BoN jailbreaking, including analysis of the transferability and semantic meaningfulness of the augmentations used. The research highlights the challenges of safeguarding ALMs with stochastic outputs and continuous input spaces."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This research paper explores the vulnerabilities of Audio Language Models (ALMs) to audio-based jailbreaking attacks. The authors introduce a novel algorithm called Best-of-N (BoN) Jailbreaking, which leverages random audio augmentations to elicit harmful responses from ALMs. They demonstrate the effectiveness of BoN jailbreaking on several state-of-the-art ALMs, including Gemini and GPT-4o. The authors also uncover power laws that predict the effectiveness of BoN as a function of the number of sampled augmentations. Finally, they investigate the composition of BoN with other jailbreaking techniques, demonstrating that combining these methods can lead to more potent attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concerns regarding this paper are that the methodology of the attack is not adequately described, and the evaluation begins without sufficiently introducing the approach and settings.\n\nFor example, to understand the experiments, the reader needs to know what is defined as \"harmful information.\"\n\nIt is difficult to assess the novelty of this work. Although the authors propose an attack against an alternative domain, the method used is unclear. Additionally, we do not gain many modality-specific insights and could potentially derive the same findings from other, text-only models. It would be beneficial if the paper included some audio-specific insights.\n\nFor instance, what is the signal-to-noise ratio (SNR) of the input? Would a human notice the attacks? Does the attack also work if it is played over the air?\n\nIn the introduction, the paper describes the \"robustness\" of the models (third paragraph). However, the method used is not described, and the findings there are not particularly useful.\n\nIt would also be appreciated if the authors uploaded a few audio samples for listening."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. See weakness 1. Are there any more insights and discussion?\n\n2. See weakness 2. Do the PrePAIR have any relationship with the audio part in ALMs?\n\n3. See weakness 3."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The research question, studying the jailbreak vulnerabilities of ALMs, is relatively interesting.\n\n2. The experimental part is comprehensive and complete. The appendix is detailed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces BoN, a new jailbreaking technique for ALMs. Based on various audio augmentation methods, BoN bypasses the safeguards of SOTA ALMs through repeated sampling with higher ASR. Authors also show that BoN jailbreaking can be composed with other jailbreak techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. As stated in line 157-159, PAIR and TAP are classical jailbreak techniques, and both of them achieve good performance on jailbreaking text-only LLMs. However, when applying single audio augmentation and TTS engine, the ASR on ALMs decrease to below 5%. This result looks counterfactual and lacks of detailed discussion. The authors should provide a more detailed analysis of why this decrease occurs, or discuss the potential reasons when transferred from text to audio.\n\n2. Technical contribution is limited. (1) Lack of detailed description of the proposed algorithm PrePAIR. (2) From the limited description of PrePAIR, while universal, this algorithm is not highly relevant to the previous sections and the “audio” part in language models, which makes the motivation unclear. The authors should clarify the connection between PrePAIR and the audio aspects of language model, and explain how this algorithm relates to the overall motivation of the paper. \n\n3. As a supplementary section of the pre-experiments, Section 3 (especially Section 3.1 & 3.2) does not give clear and valuable insights, which also makes the pre-experiment part lengthy. The authors should highlight the key findings more clearly, which would help this reviewer and other readers understand the value of this section.\n\n4. The presentation of the paper needs to be improved. For example, the duplicated part in upper left figure 1."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce BoN Jailbreaking: a composable, and highly effective black-box algorithm for attacking state-of-the-art ALMs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024attacking,\ntitle={Attacking Audio Language Models with Best-of-N Jailbreaking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yougZBoUY3},\nnote={under review}\n}"
},
"abstract": {
"value": "In this work, we investigate the susceptibility of Audio Language Models (ALMs) to audio-based jailbreaks and introduce Best-of-N (BoN) Jailbreaking, a black-box jailbreaking algorithm to extract harmful information from ALMs. To craft jailbreak inputs, our approach samples audio augmentations and applies them to malicious prompts. We repeat this process until we find a set of augmentations that elicits a harmful response from the target ALM. Empirically, we find that applying BoN with 7000 sampled augmentations achieves an attack success rate (ASR) of over 60% on all models tested, including the preview model for the released GPT-4o. Furthermore, we uncover power laws that accurately predict the ASR of BoN jailbreaking as a function of the number of samples. These power laws allow us to forecast the effectiveness of BoN jailbreaking as a function of the number of sampled augmentations over an order of magnitude. Finally, we show that BoN jailbreaking can be composed with other black-box attack algorithms for even more effective attacks—combining BoN with an optimized prefix attack achieves 98% ASR on Gemini Pro and Flash. Overall, by exploiting stochastic sampling and sensitivity to variations in a high-dimensional input space, we propose a scalable, composable, and highly effective black-box algorithm for attacking state-of-the-art ALMs."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"adversarial robustness",
"jailbreaks",
"audio language model",
"speech language model",
"multimodal",
"adversarial attack",
"audio jailbreak",
"safety",
"trustworthy",
"robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c5a6020ec0ed18c4f098ef984b84144796cf0d4d.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/840e5a8474267456ae2457ecdb23e3a3b3ddd198.zip"
},
"title": {
"value": "Attacking Audio Language Models with Best-of-N Jailbreaking"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yp95goUAT1 | SiReRAG: Indexing Similar and Related Information for Multihop Reasoning | main | Active | Retrieval-augmented generation (RAG);RAG indexing;Multi-hop question answering | applications to computer vision, audio, language, and other modalities | 3;5;6;8 | 4;4;4;4 | 2;3;4;3 | 2;2;3;3 | 2;2;3;3 | 5.5 | 4 | 3 | 2.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1)\tThe motivation example in Figure 1 is a bit misleading. The authors suggest that the question “Who is the father of the artist who painted Head I?” has three hops of reasoning, whereas it only requires two hops, i.e. finding the artist in the 1st hop and then identifying the father in the 2nd hop. This example can be a bit confusing to the reader as to what the overall motivation of the approach is.\n\n2)\tIt is very surprising to see almost similar performance for models of highly different capabilities such as GPT-3.5-turbo and GPT-4o. The authors should provide more insight/analysis on why this is happening. Also, what is the performance when the LLM used is an open-source model such as Llama-3 8B?\n\n3)\tIt would be interesting if the authors could show performance separately based on the number of reasoning hops present in the question. Also, does the approach show any benefits over RAPTOR for single-hop/simple queries?\n\n4)\tThe authors should also evaluate more recent multi-hop QA datasets, such as FanOutQA or FreshQA. All the datasets considered are pre-2022, raising concerns about leakage into LLM pretraining data.\n\n5)\tThe authors should also consider adding a few qualitative analysis examples that demonstrate how and “why” (i.e. which part of the method helps) SiReRAG improves over RAPTOR by incorporating relatedness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1)\tThe approach shows considerable performance improvements over RAPTOR on a variety of multi-hop QA datasets.\n\n2)\tThe experimental ablation settings are thorough and show the benefit of different design choices made by the authors for the SiReRAG approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces SiReRAG, which proposes to consider both similarity and relatedness information when creating retrieval indices that address queries requiring multi-hop reasoning. The paper gives motivating analyses to demonstrate the bottleneck of modelling either relatedness or similarity alone. For similarity, the paper constructs a similarity tree based on recursive summarization, while for relatedness, SiReRAG extracts propositions and entities from text and groups them via shared entities to construct a relatedness tree. Experimental results show SiReRAG considerably improves over other baselines like HippoRAG, GraphRAG and RAPTOR when evaluated on various multi-hop QA datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1)\tThe paper does read like incremental work over RAPTOR, and it is hard to be convinced that the paper has enough novelty for acceptance at ICLR.\n\n2)\tSome important baselines such as the closed book approach, i.e. directly getting the final answer from the LLM without any retrieval, or iterative retrieval, such as Self-Ask [1] or DSPy [2] are missing. Also, the authors should include these baselines when doing the inference latency comparison.\n\n3)\tThe writing needs to be improved in Sections 1 and 3 of the paper since it’s not easy to grasp the main intuitions or motivations of SiReRAG. \n\n[1] Measuring and Narrowing the Compositionality Gap in Language Models\n\n[2] DSPY: COMPILING DECLARATIVE LANGUAGE MODEL CALLS INTO SELF-IMPROVING PIPELINES"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The reasoning behind why HippoRAG does significantly better for 2Wiki dataset is not very clear in my opinion: I think what might be happening here is the following: when the RAPTOR text clustering pipeline is applied to the extracted entity-relatedness propositions, it treats the full propositions as text, completely ignoring the underlying structure of the proposition/fact (which is usually of the form Subject/Predicate/Object). That destroys certain information from the triple. My guess is that if we modify the RAPTOR encoder to encoder these features separately (e.g. with SPO markers tokens or with different encoders altogether), then we could see that we're able to encode the triple structure better, and we might be able to recover that difference.\n\n- It would be very informative to share more information about the average number of candidates considered by the method compared to the baseline methods per query. This would show the computational increase compared to the baseline."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Originality: The work presented in SiReRAG is original in the sense that it combines ideas from two existing methods/philosophies about indexing, similarity and entity-relatedness, into one solution.\n\nQuality: The thought process and reasoning of the paper are mostly intuitive and simple (merge two ideas that have been shown to improve performance); the experimental section supports this reasoning and is somewhat comprehensive, with high-quality and well-studied datasets.\n\nClarity: The paper is mostly well-written and easy to follow, with certain exceptions included in the Weaknesses section. The claims of the authors about coverage of relatedness or similarity only in Table 1 serve as a good benchmark and motivation for the work.\n\nSignificance: I consider this method to be an incremental addition on top of RAPTOR. The main metrics reported show significant improvement over the baseline methods that use only one signal."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents SiReRAG: a new indexing and retrieval technique that takes into account both \"semantic similarity\" and \"entity relatedness\" for answering complex multi-hop queries.\n\nThe suggested technique is built mostly on top of another technique called RAPTOR (which indexes text chunks on similarity only), but augments it by first extracting entity-level facts or propositions (which are single facts/statements about entities) using LLMs like chatGPT or LLama-instruct, then applying the RAPTOR pipeline to the extracted propositions to create relatedness trees akin to the original RAPTOR similarity trees. The final method merges the output from the two trees (one from the original RAPTOR built on text chunks, and another from RAPTOR built on extracted and aggregated propositions).\n\nThe paper includes comprehensive evaluations and shows significant improvement over baseline methods that use only similarity or relatedness features."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper is not very clear regarding how to merge the result from the two trees (similarity and relatedness trees), but I might have missed this.\n\n- The metric \"time per retrieval pool size\" which is used to quantify the time of the SiReRAG vs other baselines method seems very contrived and not at all relevant, since it seems to say more about the number of candidates (the denominator) than about the time spent answering a query. Which means that the number of candidates generated by this method can be an order of magnitude more than the baseline methods in some cases, which can be prohibitive in many use-cases.\n\n- The main body of the paper has only two examples, without much detail: I recommend adding a full simple example for how the relatedness tree would look for a simple paragraph, as it's more difficult to visualize the entity-related tree as opposed to the similarity/summary between text chunks.\n\n- The suggested method ends up performing badly on the more structured datasets (2Wiki): this indicates that the way it's encoding the relatedness is not representative enough to capture the structure of the graph/triples/facts."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It is unclear whether you re-ran the baseline or obtained the results from the original papers. I could not find the exact numbers reported elsewhere in Table 4.\n2. What was the retrieval size of the baselines?\n3. Overall, I found that TPRS was not meaningful. Wasn't it simply a result of whether the documents were short or long?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "1. The paper's main idea is well-motivated and supported by its preliminary experiments (Table 1). I found that its arguments were well-articulated and supported by existing literature, yet the hard evidence provided in Table 1 was a helpful addition.\n2. The paper acknowledged an alternative approach in some steps and some results to reject the alternatives (Sections 4.1 and 4.3).\n3. The method proposed in the paper consistently improved over three multi-hop reasoning datasets (Tables 4 and 6). In addition, ablation analysis also justified the need for the relatedness tree (Table 5).\n4. The paper included an inference time experiment, showing that inference time increased with pool size."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the retrieval component of retrieval-augmented generation. It argues that existing RAG methods tend to focus exclusively on semantic similarity or relational links, leading to suboptimal performance in complex, multi-hop reasoning tasks. The paper motivates this with a coverage study showing that similarity or relatedness alone can only return a few correct entity connections, and the returned results are half overlapped. The paper proposes SIRERAG, which is designed to optimize information retrieval by indexing data based on both similarity and relatedness. The index is based on a similarity and a relatedness tree, using a method similar to RAPTOR's.\n\nExperiments with three datasets (MuSiQue, 2WikiMultiHopQA, and HotpotQA) showed improvements in EM and F1 scores compared to previous indexing methods, which use similarity or relatedness. The gain in performance had a slight negative impact on the inference time compared to a similarity-only method due to a larger retrieval pool size. Further analyses showed that the relatedness tree was indeed helpful."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although not directly related to the paper, the paper should still acknowledge state-of-the-art multi-hop reasoning methods such as [Open-RAG](https://openragmoe.github.io/).\n2. Section 4.3, which described an alternative method rather than the main method, did not explain flattened indexing well. Figure 2 showed a \"+\" sign but was not technical enough to confirm what was being done.\n3. A few prompts were not provided, such as the summary prompt (maybe similar to RAPTOR's?), the topic extraction prompt (Section 3), and the hierarchy prompt (Section 4.1).\n4. Some missing details of the baselines might be crucial for interpreting the results (see Questions)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. The experimental results on other multi-hop reasoning tasks.\n2. The performance using other relatedness-based trees (e.g., entity-based, or entity pair-based)"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "In general, I think it is helpful to leverage multiple types of relevance for better retrieval results, and the experimental results also verify the effectiveness of this approach. It is also helpful to investigate and identify that similarity-only is not enough for complex QA tasks (although this is not a new discovery, as shown by many previous studies in traditional IR)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Indexing is critical for IR or RAG systems. In this paper, the authors propose a RAG indexing method which considers both semantic similarity and relatedness for better retrieval, named SIRERAG. Specifically, for similarity-based indexing this paper constructs a similarity tree via recursive summarization, and for relatedness this paper constructs a relatedness tree via entity and proposition extraction and grouping. Experimental results show some performance improvement on multi-hop QA datasets over previous similarity-based baselines and relatedness-based baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Because the similarity-based indexing component just follows RAPTOR (Sarthi et al., 2024), I think the main contribution of this paper may be the relatedness-based indexing component. Unfortunately, I found the relatedness-based indexing algorithm ad hoc. Firstly, relatedness is modeled using entity-specific propositions, which I think is over-specialized to multi-hop QA tasks (where the hops are just entity-entity associations, and this is why the proposed method can achieve performance improvements). However, I think entity-specific propositions may not be a good choice for many other complex reasoning tasks, and the authors should explain and verify the effectiveness and generality on more tasks. Secondly, I found there are many heuristic decisions, such as how to filter propositions, how to resolve entity references, etc.\n2. The similarity and relatedness trees are constructed and used independently, which I think is too straightforward; it is important to consider the interaction between the different relevance scores.\n3. There are many multi-hop QA baselines, such as iterative RAG-based, agent-based, etc. The authors should compare against them for more convincing experimental results.\n4. The writing of this paper should be improved."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce an innovative RAG indexing approach that considers both similarity and relatedness when organizing data with strong performance."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024sirerag,\ntitle={SiRe{RAG}: Indexing Similar and Related Information for Multihop Reasoning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yp95goUAT1},\nnote={under review}\n}"
},
"abstract": {
"value": "Indexing is an important step towards strong performance in retrieval-augmented generation (RAG) systems. However, existing methods organize data based on either semantic similarity or related information, but not both. As our analysis reveals, modeling only one perspective leads to suboptimal performance on complex tasks requiring multi-hop reasoning. In this paper, we propose SiReRAG, a novel RAG indexing approach that explicitly considers both similar and related information. On the similarity side, we follow existing work and explore some variances to construct a similarity tree based on recursive summarization. On the relatedness side, SiReRAG extracts propositions and entities from texts, groups propositions via shared entities, and generates recursive summaries to construct a relatedness tree. We index and flatten both similarity and relatedness trees into a unified retrieval pool, demonstrating that SiReRAG consistently outperforms state-of-the-art indexing methods on three multi-hop datasets, with an average 4.6% improvement in F1 scores. SiReRAG also enhances existing embedding and reranking methods, with an average improvement of 7.8% and 4% in F1 scores."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Retrieval-augmented generation (RAG)",
"RAG indexing",
"Multi-hop question answering"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5ab04091aef7e78e96d6255bd9eacf96bed592a5.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "SiReRAG: Indexing Similar and Related Information for Multihop Reasoning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ypBYdetYd9 | Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks | main | Active | Recurrent Neural Network;Dynamical System;Neural Computation;Computational Neuroscience | applications to neuroscience & cognitive science | 3;3;5;5;5 | 4;3;2;4;5 | 3;2;3;2;2 | 2;1;2;2;3 | 4;3;3;3;4 | 4.2 | 3.6 | 2.4 | 2 | 3.4 | 0.080064 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Questions\nMajor questions:\n* What is the distribution used to initialize the weights of the RNNs? How does the initial weight distribution affect the results? In particular, does the DSA distance still decrease with task complexity when initializing the RNNs with small weights close to zero?\n* Why use a Frobenius invariant norm to measure weight degeneracy instead of an orthogonal invariant Frobenius norm as measured in DSA?\n\nAdditional questions:\n* In general, how does the degeneracy evolve during training? Does it increase or decrease with training? This might depend on the weight initialization.\n* It would be interesting to look at other more commonly used similarity measures than DSA such as CKA or Procrustes distance."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The study addresses an important question on diversity and degeneracy in RNNs.\n* The authors investigate this question across multiple levels: behavior, dynamics, and connectivity, and across multiple settings: different loss functions, network sizes, and weight regularization methods.\n* The paper is clearly structured and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Summary\nThe authors study the variability in RNNs across four tasks: a flip-flop task, a delay discrimination task, a sine wave generation task, and a path integration task. For each of these tasks, the authors propose a way to vary the task complexity, as measured by the entropy of the inputs and outputs. They found that the average pairwise DSA distance between 50 RNNs trained to reach a certain performance threshold decreases as the task complexity increases. On the other hand, the degeneracy of the RNN weights, measured with a permutation-invariant Frobenius norm, was found to increase with task complexity. RNNs trained on more complex tasks also showed decreased variability in their outputs when evaluated on the original tasks with longer time durations. Additionally, the authors showed that the weights and dynamics degeneracy vary depending on the task loss, the network size, and the weight regularization method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses\nIt seems that the central claim of the paper is: “while harder tasks lead to more consistent neural dynamics across task-trained networks, their underlying recurrent weight matrices are more degenerate/variable.” (Section 4). However, this statement may be overly simplistic and may significantly depend on factors that were not considered in the proposed work:\n(a) Dependence on the initial weight distribution of the RNNs\n(b) Dependence on the specific distance metrics used to quantify dynamics and weights degeneracy\n(c) Dependence on the specific notion of complexity\n\n(a) Prior work has shown that the initial weight distribution can have a large influence on the training dynamics and the solutions found by the RNNs. However, there is little information on how the RNN weights were initialized and there is no analysis of how the main claims may depend on their initial distribution. In particular, the decrease in dynamics degeneracy with task complexity may not hold for RNNs initialized with small weights with values close to zero.\n\n(b) The authors measured weights degeneracy with a stricter Frobenius norm than the dynamical degeneracy. The norm used for DSA is invariant under orthogonal transformation, which is less strict than the permutation-invariant norm used to quantify the weights degeneracy. The authors found that dynamics degeneracy decreases with task complexity but weights degeneracy increases. How much does this result depend on the specific norm and its invariance class used to measure degeneracy? Would the weights degeneracy decrease with task complexity when considering a matrix norm similar to that used for the dynamics degeneracy, i.e. an orthogonal-transformation-invariant Frobenius norm?\n\n(c) The authors quantify task complexity as the entropy of the task inputs and outputs. However, this captures only a specific aspect of the task complexity. For example, as stated in Section 4: “Moreover, as we modified the task characteristics to study its effect on degeneracy, we observed that degeneracy remains invariant under certain transformations of the task, e.g., changing the delay duration in Delayed Discrimination or altering the environment size in Path Integration tasks.” It would be really interesting to see these results because increasing the duration of the delay period or increasing the size of the environment can be seen as ways to make the tasks more complex, in the sense that it would probably take longer for the RNNs to learn the tasks. The amount of training required to learn the task is another metric that can be used to measure task complexity. It would be interesting to study how it relates to the entropy-based task complexity considered by the authors.\n\nMeasuring task complexity as the entropy of the task inputs and outputs implies that it is invariant under shuffling of the timesteps, which completely changes the task structure. Does the relationship between degeneracy and task complexity still hold when considering time-shuffled variants of the tasks?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does your permutation-independent distance contrast with the distance provided in Generalized Shape Metrics on Neural Representations paper Eq. 6?\n\nIt is written that \"within each task, larger marker size and less opacity indicate task variant with higher complexity.\" Did the authors mean larger markers size and _more_ opacity? If not, I don't think your figure is interpretable.\n\nFor out of distribution tasks: What was the reasoning behind showing the CV but not mean error? Can you also show mean error and standard deviation separately? \n\nWhat is the reasoning behind doubling the delay period or entire trial for OOD part?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors tried their method in various tasks and settings. \n\nThe authors goal of measuring and controlling the degeneracy across different scales is very important and I think an important question.\n\nThe figures are very explanatory and the paper is written well, easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper provides ways of how to measure the behavior, dynamics, and weight-space degeneracies in RNNs, and then they provide methods to control these degeneracies. They show their findings using various tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "\"We chose this measure because it directly reflects the amount of information present in the task’s inputs and outputs, which the network must process and represent through its hidden state (Tishby et al., 2000).\" In the experiments done in paper, the dimensions of the RNNs are most of the time higher than the dimensions of your input and output. I do not understand how does this method (of measuring the entropy of input and output _separately_) works, in the case when the goal of an RNN is to reproduce its input? Say one presented a highly complex input signal. And since the goal is to reproduce, then the output signal is also highly complex. But, the RNN only needs to implement an identity function, but the method suggested would classify this as a difficult task. I think this clearly demonstrates the problem of taking into account the input and output signal separately. At the end of the day, the goal of an RNN is to _map_ its inputs to outputs, so to the best of my understanding, there lies a significant problem in this approach. Since authors base all of their claims to this methods, I am highly confused and would like to have a discussion about this. \n\nControlling methods are only tried in one task but the results are written in general form. The claims in the paper, I believe could only be made if the authors have tried these methods on different tasks.\n\nIn the task settings in the paper I think it is true that increasing the number of channels makes the task more difficult, but I don't think it is always the case. Consider the delayed match-to-sample task. One can either have two different input channels to represent your signals or one. I think in this case representing with two channels makes the task easier. 
Does this show that this method is _not_ task agnostic?\n\nFig 4A, third column does not support your claim but you did not discuss it.\n\nAuxiliary loss: I do not believe the defined auxiliary loss is in fact _auxiliary_ in the delayed discrimination task. When the task of a RNN is to output f1-f2 together with sign(f1-f2), I believe the main task becomes the former one."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. As hinted at in the weaknesses, it's unclear if this measure relates to the true underlying generative process. Can they authors provide additional details on the generative processes' of these tasks (i.e where the uncertainty is, for instance in the delay period, or the probability of the bit-flip) and instead provide increased/decreased complexity of tasks based on this, and if the central claims of the paper still hold?\n2. As detailed in 3.1, entropy calculated is discrete, however some tasks are continuous, yet no detail is provided on how binning can affect the final measure of H_{task}. Can the authors provide some results on this, and how this might change the 3 clusters of tasks they observe?\n3. Can the authors provide any justification for why a joint/conditional/marginal entropy on input & output wasn't used? \n4. Fig 2: while the general trend of more PCs are needed for more complex tasks, how can this trend be reconciled across tasks? For instance DD (high input, low output) needs more PCs than the most \"complex\" n-bit task. Similarly, the opposite is observed for path integration (fewer PCs) and n-bit which have similar complexity measures. \n5. Fig 2: For panel E can the authors use either % or decimals consistently for variance?\n6. Fig 3: Can the authors provide some justification or discussion for the trends observed, as the current writing goes it is unclear why such a trend makes intuitive sense. For instance, why does the output task complexity provide this trend v/s the input task complexity?\n7. Fig 3: While the trend stated holds for same tasks with higher complexity, can the authors provide clarification on why between tasks this isn't observed? For instance sin-N and n-bff have similar output complexity measures but vary on DSA values. Can the authors provide some insight/clarification on this?\n8. 
It appears the authors use different hidden dimensionalities in RNNs between tasks (i.e 64 v/s 128) while showing how their measure of task-complexity relates with degeneracy. As discussed in 3.3 this influences the capacity of the network. Can the authors provide a justification for why this was done and how the trends remain the same/change if all networks have the same dimensionality?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method provides a unified approach to understanding the wide range of solutions in task-trained RNNs. The authors present multiple reasons/possibilities for these solutions (i.e due to behavioral, weight space or neural dynamic variability), and quantify its relationship with the amount of information present in each task. The information theoretic approach provides a way to categorize more complex v/s less complex tasks, which is then used to observe trends with other covariates. The authors further increase same task complexity by increasing the amount of information associated with the task (by adding independent channels). Their results suggest a similar trend in complexity with weight space degeneracy, and an inverse trend for neural dynamics and generalization. Lastly, they use these insights to increase/decrease the solution-space associated with tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach to explore & control the solution space of task-trained vanilla RNNs. They provide an information theoretic approach to measure task complexity and provide empirical results on its correlation with behavioral, weight space and neural dynamic degeneracy. Lastly, the also provide ways in which such degeneracy of solutions can be controlled. Their results are performed on four neuroscience inspired tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have some main concerns/weakness which I will outline here with specifics under questions.\n\n1. The authors present an interesting approach to quantify task-complexity using entropy. However, as detailed in the paper, this is very closely tied to the probability encoding the underlying generative process for each task. Specifically, while adding channels increases the information, a more robust measure would be changing the probability of the generative process (for instance the probability of a bit flip in a single channel). While in principle adding channels increases information associated with tasks, it's unclear if this is representative of the \"randomness of the task\".\n2. The authors discuss ways to control solutions associated with task-trained RNNs. Other than increasing task-complexity and noting it's trend associated with weight and dynamical degeneracy (presented in this paper), the other four methods have been previously observed. Further, the authors do not provide any theoretical or discussion regarding how this relates to the method presented in this paper. Can the authors expand on the novelty of this section?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please see the questions in the Weaknesses before."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is structured and clear. \n- Jointly considering several degeneracy metrics (in particular DSA) and tasks is useful for the community. \n- The tasks considered are diverse and rooted in the literature, giving a considerable degree of generality to the results.\n- The related works section is thorough and bridges to the feedforward literature"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper considers the effect of task complexity on “solution degeneracy”. The latter is characterized through examining how three measures behave as a function of task complexity: 1\\) Similarity of the dynamical system between trained networks, 2a) difference in weights from initialization and 2b) pairwise similarity of weights between trained networks, and 3\\) difference in generalization performance between trained networks. \n\n### Recommendation\nOverall, I think the paper is a valuable survey of the influence of task complexity on degeneracy and am looking forward to see it published. The main issue I see is with how expected the results are, depending on the way that it is regularized: An underconstrained system (as per less input-output channels) will be more variable due to unconstrained directions in the dynamics. If regularization in training, as is common practice, would turn out to remove for example DSA variability, I would not find the results sufficiently novel for ICLR. Until this point is clarified, I cannot recommend the paper to the conference."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(more important points first)\n\n- **Regularization.** I could not understand from the paper whether a weight penalty was present when training on the task loss with BPTT. Weight decay for Adam is not specified, I assume it is the default, in case of which there would be some penalty. Was this the case, and if not, how would the measures change if the penalty got increased? For a stronger penalty, I would expect that the DSA metric shows less variation, as presumably noise directions that give rise to dynamical variability would then be penalized to decay. This would make the paper's finding much less interesting to me, because we already know that controlling task-unrelated dimensions is useful. I wonder for example whether Fig. 7 would be induced by strong regularization, biasing the network towards the presumably simpler line attractor solution.\n- **Expectedness.** As the authors state in the discussion, most of the results can be understood with the following argument: As task entropy increases, the network parameters are increasingly constrained. Hence, there are less parameters that are unconstrained and give rise to degeneracy: Effectively, the network has now to solve $N$ tasks in parallel, but has the same representational capacity. \n This makes me wonder how the metrics change when task complexity is not scaled up by replicating the same task over channels, but changing the complexity of the function that needs to be learned? As a very simple and probably too naive example, how would the network behave if task complexity was controlled by the number of Fourier modes in the target signal? It seems that the paper identifies a specific kind of task complexity whose generality is questionable: The proposed measure is not rooted in the literature apart from a loose connection to information theory.\n- **Weight degeneracy.** I find the metric not well motivated. 
Why should I expect a pair of networks that is non-degenerate (i.e., similar in some way) to have small $d\\_{PIF}$? Would I not rather consider an (orthogonal) similarity measure on matrices, for example $d\\_{sim} \\= min\\_O ||W\\_1 \\- O^{-1} W\\_2 O||\\_F$, where $O$ is an orthogonal or general invertible matrix. This is exactly the $d\\_{procrustes}$ the authors discuss for DSA. I expect that using this metric on the weights would essentially make it the DSA metric. It would be especially interesting if that changes Table 1\\. \n- Do the authors have a hypothesis why change weight degeneracy depends on control paradigm?\n- **Relevance.** Why should we care about whether a networks solution is degenerate or not? I understand that it is interesting descriptively to know a connection between task parameters and degeneracy metrics, but I am not sure where I would need this knowledge. The paper would benefit from discussing this. \n- **Controllability.** It is hard to generalize from the findings for controlling degeneracy from the DD task only. For example, it is hard to say what an auxiliary loss term would look like for the other tasks, possibly involving some amount of engineering. \n- **Generality.** Do the authors suspect that there a tasks where the observed trends do not hold?\n- No code is supplied, limiting transparency and reproducibility.\n\nAgain, I like the paper and think it is a valuable contribution, but these points make me question its novelty and generality."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the Weaknesses part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a unified approach to analyze the degeneracy of solutions obtained by RNNs on several tasks from several perspectives.\n\n2. This paper incorporates certain measures from information theory to quantify task complexity. The authors conclude that an increase in task complexity will lead to a reduction in degeneracy in neural dynamics and generalization behavior. However, it will simultaneously increase the degeneracy in weight space.\n\n3. Based on their analysis, the authors propose several strategies for controlling solution degeneracy in practical applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a unified approach to quantifying and controlling the degeneracy of solutions obtained by RNNs on several tasks by analyzing their degeneracy from three levels: behavior, neural dynamics, and weight space. They introduce some measures from information theory to quantify task complexity and conclude that increasing task complexity will reduce degeneracy in neural dynamics and generalization behavior, but increase the degeneracy in weight space. Based on these discoveries, they propose several strategies to control solution degeneracy in practice."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The results in this paper are empirical and lack of theoretical analysis. For instance, it would be better if the authors could add more discussion on the reason why increasing task complexity will increase the degeneracy in weight space. Maybe some theoretical analysis of some toy examples will help. \n\n2. The strategies proposed for controlling solution degeneracy in practical applications are too general and may not be very useful in practice. For example, increasing task complexity reduces dynamical degeneracy and increases weight degeneracy, it is not clear which strategies we should use for specific tasks. More discussions on this aspect are welcome."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024measuring,\ntitle={Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ypBYdetYd9},\nnote={under review}\n}"
},
"abstract": {
"value": "Task-trained recurrent neural networks (RNNs) are versatile models of dynamical processes widely used in machine learning and neuroscience. While RNNs are easily trained to perform a wide range of tasks, the nature and extent of the degeneracy in the resultant solutions (i.e., the variability across trained RNNs) remain poorly understood. Here, we provide a unified framework for analyzing degeneracy across three levels: behavior, neural dynamics, and weight space. We analyzed RNNs trained on diverse tasks across machine learning and neuroscience domains, including N-bit flip-flop, sine wave generation, delayed discrimination, and path integration. \nOur key finding is that the variability across RNN solutions, quantified on the basis of neural dynamics and trained weights, depends primarily on network capacity and task characteristics such as complexity. We introduce information-theoretic measures to quantify task complexity and demonstrate that increasing task complexity consistently reduces degeneracy in neural dynamics and generalization behavior while increasing degeneracy in weight space. These relationships hold across diverse tasks and can be used to control the degeneracy of the solution space of task-trained RNNs. Furthermore, we provide several strategies to control solution degeneracy, enabling task-trained RNNs to learn more consistent or diverse solutions as needed. We envision that these insights will lead to more reliable machine learning models and could inspire strategies to better understand and control degeneracy observed in neuroscience experiments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Recurrent Neural Network",
"Dynamical System",
"Neural Computation",
"Computational Neuroscience"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3d8fd8097a4afd121525a164ce773b1a7597e589.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Measuring and Controlling Solution Degeneracy across Task-Trained Recurrent Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yqJoqtUwSI | Collaborative Hybrid Propagator for Temporal Misalignment in Audio-Visual Segmentation | main | Active | audio-visual video segmentation | applications to computer vision, audio, language, and other modalities | 3;5;5;5;8 | 4;3;4;4;5 | 1;3;3;3;3 | 3;2;3;3;3 | 1;3;3;4;3 | 5.2 | 4 | 2.6 | 2.8 | 2.8 | 0.592927 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Is the task or audio-visual video segmentation supervised or unsupervised? (e.g. do have ground truth video segmentation pixel maps for training or not?)\n - There is mention of a semi-supervised approach in the related works, is this paper following the same approach? Please consider adding such detail.\n- In eq. 1, what prompt is x? Why is a prompt needed at all? How do you mathematically define an audio frame? Is it a spectrogram?\n- Is the training of Keyframe Mask Generation supervised, unsupervised or semi-supervised? I am reading that it is fine-tuned on keyframes extracted from the training dataset by RCPG, fine-tuned to do what exactly?\n - Please consider adding a step-by-step description of the training process.\n- What do the scores in the Table 1 correspond to?\n- Section 4.3: what is the \"cosine similarity\" used for? Please consider more details about its use. \n- What happens when sounding event are overlapping? Please discuss how your method handles overlapping sound events, or if this is a limitation of your approach."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The core novelty of this paper is the proposal to rely on audio segmentation to decompose the audio-visual video segmentation task into sub-problems."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focusses on the task of audiovisual video segmentation: given a sounding video, generate pixel-level maps of sound-producing objects that align with the ongoing audio.\n\nExisting methods suffer from poor temporal alignement.\n\nTo tackle this issues the authors introduce a two-steps framework:\n- LLM-assisted audio event segmentation.\n- Segment-based downstream video segmentation.\n\nThe paper also proposes a new dataset and benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Although the high-level task description is clear, the paper lacks a deeper explanation of the workflow with simple terms (it is unclear whether the task is supervised, unsupervised or semi-supervised). The introduction dives into convoluted acronyms (e.g. Retrieval-augmented Control Points Generation Module (RCPG)) took quickly without telling what they are actually meant to achieve (detecting keyframes + reference masks). This makes the paper hard to understand.\n - I would suggest adding an overview of the task setup in the introduction (there is no description of the Figure 1 anywhere in the paper).\n- RCPG is meant to perform audio segmentation, it should be treated as such and compared with other audio segmentation methods.\n- Results are presented before metrics and data.\n- The training protocol for the Keyframe Processor is unclear."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See the weaknesses above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper intends to solve an important problem in AVS.\n\n2. The visualization and representation are relatively clear."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "An AVS model to solve the temporal misalignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Temporal misalignment: It seems an important problem to me, but the author should have expressed this specific problem more clearly. How many failure cases are caused by temporal misalignment? Are there any ratios? Is there any quantitative analysis beyond the qualitative cases? Additionally, the impact of temporal misalignment in Figure 5 is not that clear. In my perspective, most of the cases in Figure 5 are caused by simply incapable segmentation networks.\n\n2. Missing important references and comparison: The major CV conferences, including CVPR, ICCV, and ECCV, have already published numerous papers on supervised AVS. However, the author does not compare any of these top works in Table 1. Here is a list of papers: [1-8].\n\n3. Propagation process: Ablation on Audio-insert Propagator only tests with 4 layers and 1 layer. The natural process involves testing more settings, selecting the best, and reporting the peak performance. Would it achieve better results if using 8 layers?\n\n4. Low amount of testing set: MOC (Multiple-sound Source Conversion) only contains 17 cases. Can these 17 cases serve as solid proof of the performance? I think it requires more data.\n\n5. Accumulation error of key frame result: The model appears to rely on the segmentation result of the first frame. What if it is incorrect?\n\n6. Temporal misalignment: In previous models like TPAVI and AVSegformer, there is no specific temporal information included in the model. It would be better to compare it with other models with temporal perception.\n\n7. Some important AVS works in the related work: Works [10-12], including unsupervised/weak-supervised AVS and open-vocabulary AVS, need to be discussed in the related work section.\n\n[1] (Cited but not compared) Chen, Y., Liu, Y., Wang, H., Liu, F., Wang, C., Frazer, H., & Carneiro, G. (2024). Unraveling Instance Associations: A Closer Look for Audio-Visual Segmentation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 26497-26507). \n\n[2] Chen, Y., Wang, C., Liu, Y., Wang, H., & Carneiro, G. (2024). CPM: Class-conditional Prompting Machine for Audio-visual Segmentation. arXiv preprint arXiv:2407.05358.\n\n[3] Ma, J., Sun, P., Wang, Y., & Hu, D. (2024). Stepping stones: A progressive training strategy for audio-visual semantic segmentation. arXiv preprint arXiv:2407.11820.\n\n[4] Hao, D., Mao, Y., He, B., Han, X., Dai, Y., & Zhong, Y. (2024, March). Improving audio-visual segmentation with bidirectional generation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 3, pp. 2067-2075).\n\n[5] Yan, S., Zhang, R., Guo, Z., Chen, W., Zhang, W., Li, H., ... & Gao, P. (2024, March). Referred by multi-modality: A unified temporal transformer for video object segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 6, pp. 6449-6457).\n\n[6] (Cited but not compared) Yang, Q., Nie, X., Li, T., Gao, P., Guo, Y., Zhen, C., ... & Xiang, S. (2024). Cooperation Does Matter: Exploring Multi-Order Bilateral Relations for Audio-Visual Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 27134-27143).\n\n[7] Sun, P., Zhang, H., & Hu, D. (2024). Unveiling and Mitigating Bias in Audio Visual Segmentation. arXiv preprint arXiv:2407.16638.\n\n[8] Nguyen, K. B., & Park, C. J. (2024). SAVE: Segment Audio-Visual Easy way using Segment Anything Model. arXiv preprint arXiv:2407.02004.\n\n[9] Li, J., Yu, S., Wang, Y., Wang, L., & Lu, H. (2024, October). SelM: Selective Mechanism based Audio-Visual Segmentation. In Proceedings of the 32nd ACM International Conference on Multimedia (pp. 3926-3935).\n\n[10] Liu, J., Liu, Y., Zhang, F., Ju, C., Zhang, Y., & Wang, Y. (2024). Audio-Visual Segmentation via Unlabeled Frame Exploitation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 26328-26339).\n\n[11] Liu, J., Wang, Y., Ju, C., Ma, C., Zhang, Y., & Xie, W. (2024). Annotation-free audio-visual segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 5604-5614).\n\n[12] Guo, R., Qu, L., Niu, D., Qi, Y., Yue, W., Shi, J., ... & Ying, X. (2024). Open-Vocabulary Audio-Visual Semantic Segmentation. arXiv preprint arXiv:2407.21721.\n\nI will consider raising my score if the authors can address the questions above."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses. I'd be happy to increase my rating if the authors address the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The use of a retrieval-augmented LLM approach to identify audio control points and address temporal misalignment is novel within the AVVS domain.\n2. The proposed Co-Prop framework addresses a pain point in AVVS, which is useful for content creation in complex audio-visual environments like AR or video editing.\n3. Experimental results show improved alignment rates, especially in scenarios with multiple sound sources, demonstrating the method’s efficacy in AVVS.\n4. The proposed method is well presented, with comprehensive diagrams and examples illustrating the misalignment issues in prior models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents the Co-Prop, a new framework that addresses temporal misalignment in audio-visual video segmentation by enhancing alignment between audio cues and visual segmentation outputs. The Co-Prop framework consists of two core modules: (1) Retrieval-Augmented Control Points Generation Module, which anchors key transition points in the audio, and (2) Audio-Insert Propagator, which propagates the segmentation frame-by-frame, integrating audio information to improve synchronization and reduce memory load. Evaluations on multiple datasets demonstrate better performance in alignment rates and segmentation precision than baseline models, particularly on multi-source audio benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I am concerned that the paper does not explore how segmentation results would vary with different audio types. For instance, if a continuous dog bark is replaced with an intermittent one, it is unclear if the dog’s mask would disappear during pauses in barking. This scenario tests the model's adaptability to temporal gaps in sound, which is vital to confirm its robustness in handling real-world examples.\n2. The paper would benefit from further experiments addressing complex audio scenarios, such as overlapping sounds or sounding objects that are off-screen. These situations are common in real-world settings, and I am curious if Co-Prop could maintain object integrity in such cases. If the model relies heavily on visual input alone when multiple audio cues overlap or when sounds lack visual sources, it may risk collapsing or misinterpreting segments, which could compromise segmentation quality.\n3. The multi-step retrieval process in the RCPG module could be explained more clearly. For instance, while the ablation study in Table 2(b) shows performance improvements with 3-step prompts, it is not that clear why these prompts outperform simpler versions. I would encourage the authors to clarify how retrieval samples are chosen and if any cases show weaknesses in control point detection, as this would provide a clearer view of RCPG’s reliability.\n4. While the authors report strong results on the evaluated datasets, I would like to see tests on additional in-the-wild audio-visual data to better gauge Co-Prop’s robustness. Applying Co-Prop to less curated datasets could validate its claims of temporal alignment across different audio contexts.\n5. Although the paper mentions memory efficiency, no concrete results are provided to quantify these improvements. Memory usage is critical in AVVS applications, especially for long videos, and I would suggest including a direct comparison of memory consumption against baselines.\n6. 
I am concerned that Qwen’s performance in detecting precise transition points may vary, as language models like Qwen are not specifically optimized for detecting fine-grained audio transitions. I recommend using acoustic event detectors, such as PANNs [1] or BEATs [2], which could potentially enhance the accuracy of detecting key audio transition points. These models are trained to recognize audio events and may provide more robust control point identification, leading to more consistent segmentation performance, especially in scenarios with overlapping sounds.\n\nReferences\n\n[1] Kong et al. PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition.\n\n[2] Chen et al. BEATs: Audio Pre-Training with Acoustic Tokenizers."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the weakness part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Novelty in Addressing Temporal Misalignment: The two-stage Co-Prop framework innovatively tackles the temporal misalignment issue. By anchoring the temporal boundaries and inserting audio cues frame-by-frame, the model effectively improves synchronization between audio and visual data.\n2. The paper provides extensive experimental evidence demonstrating the framework's effectiveness on multiple datasets (S4, M3, and AVSS) and backbones (ResNet and PVT-v2). \n3. Introducing the MOC test set and a new alignment rate metric to measure synchronization accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses temporal misalignment issues in audio-visual video segmentation (AVVS), where current methods fail to synchronize audio cues and segmentation outputs. The proposed solution, the Collaborative Hybrid Propagator Framework (Co-Prop), includes two main components: Retrieval-Augmented Control Points Generation (RCPG) and the Audio-Insert Propagator. The RCPG module anchors the audio's temporal boundaries by leveraging a large language model (Qwen) to generate control points, splitting the audio into semantically consistent segments. The Audio-Insert Propagator performs frame-by-frame video segmentation, embedding audio guidance information to align audio cues with video frames effectively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Reliance on Qwen LLM: With many multimodal models (MLMs) now capable of video understanding(Gemini, Video-llama, ...) and even reasoning segmentation(LISA), what specific advantages does the proposed approach offer over directly using these large models?\n2. Accuracy Discrepancy for AVSegFormer: The reported accuracy for AVSegFormer in this paper doesn’t align with that in the cited references."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Figure 2(a), only the audio signal is used for control point generation. Is it possible to also introduce the visual frames in this process? (In my opinion, this would be more beneficial for the LLM to better classify the correct sounding objects). Are there suitable metrics or potential measures to evaluate/guarantee the correctness of the generated object categories, as many objects have similar sounds?\n2. The authors rely on the Qwen LLM to generate control points. Have the authors tried other superior LLM models?\n3. The proposed method consists of two stages. Would it be possible to consider an end-to-end strategy to address the temporal misalignment issue?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Existing methods for audio-visual video segmentation tend to design increasingly complex architectures to enhance audio-visual interactions or pixel-level segmentation results. Unlike them, this paper highlights the temporal misalignment issue between audio and visual signals, which is a valuable research direction for the AVVS problem.\n2. The proposed method, especially the preliminary audio boundary anchoring module in the first step, seems to be interesting and well-motivated.\n3. The experimental results are extensive, and the proposed method achieves significant performance improvement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To address the temporal misalignment issue in the audio-visual video segmentation task, this paper proposes a novel two-stage method. In the first stage, the authors design modules for preliminary audio boundary anchoring. Key audio frames (control points) and corresponding visual frames reflecting the audio transition are obtained. Subsequently, in the second stage, the authors propose an audio-insert propagator module to generate pixel-level segmentation maps for normal frames by propagating masks from key frames. Experiments on three sub-benchmarks demonstrate the effectiveness and superiority of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Given the aforementioned strengths, I am basically satisfied with this work. A potential weakness could be that the ideas of key frame anchoring (first stage) and mask propagation (second stage) likely originate from existing methods in video object segmentation, despite these contributions being seamlessly integrated into the audio-visual video segmentation task.\n2. Although the proposed method achieves satisfactory performance on three datasets, the authors only compared several baselines in Table 1. A more comprehensive review of recently published works for AVVS is required. Moreover, it would be better to discuss with some typical methods from other audio-visual learning tasks, such as audio-visual event localization and video parsing, in the related work section."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper address the problem of audio-visual video segmentation with a controllable audio insertion propagation framework equipped with two designed modules."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024collaborative,\ntitle={Collaborative Hybrid Propagator for Temporal Misalignment in Audio-Visual Segmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yqJoqtUwSI},\nnote={under review}\n}"
},
"abstract": {
"value": "Audio-visual video segmentation (AVVS) aims to generate pixel-level maps of sound-producing objects that accurately align with the corresponding audio. However, existing methods often face temporal misalignment, where audio cues and segmentation results are not temporally coordinated. Audio provides two critical pieces of information: i) target object-level details and ii) the timing of when objects start and stop producing sounds. Current methods focus more on object-level information but neglect the boundaries of audio semantic changes, leading to temporal misalignment. To address this issue, we propose a Collaborative Hybrid Propagator Framework~(Co-Prop). This framework includes two main steps: Preliminary Audio Boundary Anchoring and Frame-by-Frame Audio-Insert Propagation. To Anchor the audio boundary, we employ retrieval-assist prompts with Qwen large language models to identify control points of audio semantic changes. These control points split the audio into semantically consistent audio portions. After obtaining the control point lists, we propose the Audio Insertion Propagator to process each audio portion using a frame-by-frame audio insertion propagation and matching approach. We curated a compact dataset comprising diverse source conversion cases and devised a metric to assess alignment rates. Compared to traditional simultaneous processing methods, our approach reduces memory requirements and facilitates frame alignment. Experimental results demonstrate the effectiveness of our approach across three datasets and two backbones. Furthermore, our method can be integrated with existing AVVS approaches, offering plug-and-play functionality to enhance their performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"audio-visual video segmentation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/540568ba3a478288f5ebbf93bd1e628a9ccec29a.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/55c3fd213e1f53fdce9d8091401fc7abe7b3bf21.zip"
},
"title": {
"value": "Collaborative Hybrid Propagator for Temporal Misalignment in Audio-Visual Segmentation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yqST7JwsCt | Entropy-Based Aggregation for Fair and Effective Federated Learning | main | Active | Fairness;Heterogeneous Federated Learning | alignment, fairness, safety, privacy, and societal considerations | 5;5;5;6;8 | 4;3;3;4;4 | 2;2;3;4;4 | 2;3;3;3;4 | 2;2;3;3;4 | 5.8 | 3.6 | 3 | 3 | 2.8 | 0.560112 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could you summarize the (non-trivial) significance of technical contributions, compared with existing literature? \n\n2. Could you elaborate on the significance of the experimental results? The numbers (such as accuracy) appears low and gives the doubt that the paper did not use strong baselines. \n\n3. This paper severely lacks references from ICML, NeurIPS, ICLR, AISTATS, especially from recent years. The authors are encouraged to provide a thorough literature study from the mainstream ML venues especially from 2022-2024, to help convince readers on the novelty and significance of this submission."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The significance of federated learning is well presented and the motivation for ensuring fairness is well-motivated.\n\n2. The theoretical analysis appears correct.\n\n3. The experiments do confirm the improvement, to an extent."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies an important problem, i.e,. to ensure fairness of federated learning while striving to maintain the accuracy, for scenarios where heterogeneity hurts the performance of FL. Their approach was based on an entropy-based formulation and they show theoretical results such as convergence as well as experiments which to an extent verifies the effectiveness of the proposed algorithm. \n\nThe paper appears to be fairly well-written and well motivated. The main concern is the the limited novelty and significance. The experimental results are not impressive either."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the exact formulation has not been published in the literature, the formulation appears to be fairly straightforward and does not seem to be a significant contribution to the field. The analysis follows the standard analysis. \n\n2. The experimental results, while they show some improvement, do not appear impressive. The actual significance of the proposed framework is not convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the questions and comments listed in the Weakness section"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. To the best of my knowledge, the idea of using a constrained entropy model for aggregation in FL to improve fairness is novel and interesting.\n2. The idea of adaptively changing the global aggregation to either prioritize global model accuracy or fairness, while heuristic, is again interesting.\n3. Theoretical results proving that the proposed FedEBA+ can reduce variance compared to FedAvg for generalized regression and strongly convex models.\n4. Experimental results look promising with the proposed FedEBA+ outperforming FedAvg and other baselines both in terms of global model performance and fairness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Existing algorithm to achieve fairness in FL often do so at the expense of global model accuracy. The paper proposes FedEPA+, an algorithm to improve fairness in FL while maintaining global model performance. The key idea here is to use entropy-based aggregation followed by a novel model alignment technique which adaptively optimizes for either global accuracy or fairness depending on the current performance of the model. Theoretical results are provided showing the convergence of the proposed algorithm in general non-convex FL setups along with guaranteed improvement in fairness in the strongly convex scenario. Empirical results show that the proposed algorithm outperforms existing fair FL baselines in both global model accuracy as well as fairness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The clarity of writing and motivation for doing FedEBA+ can be significantly improved. There were several parts which I found confusing or could not fully comprehend as I discuss below.\n\n* It is a bit surprising to me that the expression for the aggregation probabilities in Eq. (4) does not depend on the desired $\\tilde{f}(x)$ in Eq. (3). Is there an intuitive explanation for why this is the case?\n\n* Authors are encouraged to provide an intuitive explanation for why maximizing constrained entropy can improve fairness. Right now, entropy-based aggregation just appears to be a black box to improve fairness\n\n* While I could follow the construction of the proposed bi-level objective in Eq. (6) and the idea to adaptively change the ideal gradient, I was unable to follow the changes made in the local and global aggregation to account for this. In my understanding, we first solve the inner problem. i.e, find $p$ given the current global model. Then given $p$, we update the global model with either the ideal global gradient or ideal fair gradient depending on the requirement. Given this understanding, my questions are as follows:\n\n * In the alignment to improve global accuracy why are the $p_i$'s computed using the loss of the local models? (Line 312). In my understanding, since we are fixing $x$, when solving the inner problem, the $p$ should be computed using the loss of the global model, i.e, the expression in Line 351?\n * In the alignment to improve fairness why is the local optimization at clients changed (Eq. 11 and Line 11 in Algorithm 1)? In my understanding only global aggregation should change?\n\n* I don't follow how in Prac-FedEPA+ clients need to only communicate once. If we follow Algorithm 3 then it appears that communication can happen twice within a round: first in Line 6-8 and then again in Lines 18-22. 
I found this to be very confusing and authors should clarify what is the communication cost in their experiments.\n\n* $A$ is not defined in Theorem 5.1. Similarly $w$ is not defined, although from context it appears to be the prior aggregation weights of client objectives.\n\n* It seems a little strange to see discussion on the lower bound of an upper bound in Remark 5.2. Authors should just state the worst possible convergence rate of FedEPA+ by considering that $\\sum_{i=1}^M w_i^2 \\leq 1$.\n\n* In Remark 5.3. it appears that to show convergence of $\\alpha \\neq 0$ we need to assume $k << m$. Is this correct? If so, authors should clearly mention this and explain why this is the case. Also I don't follow the argument that the proposed alignment results in a faster convergence rate than FedAvg since both seem to be achieving $O(1/\\sqrt{mKT})$ rate.\n\n* Why is FedMGDA+, PropFair and TERM incompatible in Table 2?\n\n2. The parameter $\\theta$ appears to be crucial to the performance of the algorithm but currently there is very little discussion on how to set this parameter and its impact on convergence. Theoretically, we don't see any effect of $\\theta$ in Theorem 1, which is a bit surprising to me. In practice, I would also suggest moving Table 7 to the main text and discuss the trade-off in setting $\\theta$ in more detail."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. **Novelty of Exponential Normalization**: The core aggregation strategy is based on an exponential normalization of client losses. Could the authors clarify how this approach fundamentally differs from existing techniques that also adjust aggregation weights based on client performance? How does the use of entropy provide a significant advantage over these existing methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- **Clear Problem Motivation**: The paper addresses an important problem in federated learning—how to achieve fairness across clients while ensuring that the global model's performance does not degrade. The motivation behind balancing fairness and performance in a heterogeneous FL environment is well articulated.\n\n- **Theoretical Guarantees**: The paper provides theoretical analysis for the convergence of FedEBA+ in non-convex settings, which adds credibility to the proposed approach. The fairness improvements are also supported by performance variance analysis.\n\n- **Bi-level Optimization Framework**: The introduction of a bi-level optimization framework is interesting and provides a structured way to balance fairness and performance. The authors also derive a closed-form solution for the aggregation probability, which improves computational efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper titled \"Entropy-Based Aggregation for Fair and Effective Federated Learning\" proposes a novel algorithm called FedEBA+ to address the fairness issues in Federated Learning (FL) while maintaining the global model's performance. The authors leverage an entropy-based aggregation mechanism, combined with model and gradient alignment, to optimize both fairness and global performance. A bi-level optimization framework is introduced to derive an efficient closed-form solution for aggregation probabilities, which is claimed to enhance fairness by adjusting weights for underperforming clients. The paper provides theoretical guarantees for the algorithm's convergence and fairness, and empirical results on several datasets demonstrate that the proposed approach outperforms existing fairness-oriented FL algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Lack of Novelty in Aggregation Strategy**: The core novelty of the paper—using an entropy-based aggregation mechanism—essentially boils down to applying an exponential normalization based on client losses to determine aggregation weights. This approach is not particularly novel, as similar strategies have been used in various other contexts (e.g., softmax-based weighting). \n\n- **Overemphasis on Entropy**: The paper heavily emphasizes entropy, but in practice, the core idea is simply a re-weighting based on client loss. The connection between entropy and fairness, while valid, feels somewhat forced in its application here. The novelty of applying maximum entropy principles in such a straightforward manner might be overstated.\n\n- **Limited Innovation in Fairness-Performance Trade-off**: Although the paper claims to balance fairness and performance, the approach primarily adjusts aggregation weights based on client loss. Many existing algorithms adjust client weights in some form, and the use of entropy as a fairness mechanism does not seem to introduce substantial improvements beyond what is already available in the literature."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The work addresses a critical issue in federated learning and provides a practical solution that could have some impact on the field.\n- The paper is well-researched, with a robust theoretical foundation and comprehensive empirical validation. \n- The authors provide insightful convergence analysis and fairness guarantees.\n- The objective function and the proposed algorithm are novel contributions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces FedEBA+, a novel federated learning algorithm designed to enhance both fairness and global model performance through a computationally efficient bi-level optimization framework. The authors propose an innovative entropy-based fair aggregation method for the inner loop and develop adaptive alignment strategies to optimize global performance and fairness in the outer loop. The paper provides theoretical analysis confirming the convergence of FedEBA+ under non-convex settings and demonstrates its superiority over state-of-the-art fairness algorithms through empirical results on various datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper would benefit from a more intuitive explanation of the connection between entropy and fairness.\n- The explanation of the entropy-based aggregation method could be more detailed, particularly in terms of how it differs from and improves upon existing methods.\n- The concept of \"ideal loss\" appears somewhat confusing. Since $\\tilde{f}(x)$ represents the ideal loss, which signifies the global model’s performance under an ideal training setting, it is unclear why the gradient of the ideal loss can be estimated by averaging local one-step gradients during the model alignment procedure, yet it should be estimated by Equation.10 during the gradient alignment procedure. Could the authors give more intuition to clarify this point?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "see the weakness."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- Content is solid and comprehensive.\n- The paper is well-structured and advances step by step.\n- The theory is clear and solid.\n- The experimental section is well-developed and extensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Existing federated fairness algorithms strive to enhance fairness but often fall short in maintaining the overall performance of the global model, typically measured by the average accuracy of all clients. To address this issue, this paper proposes a novel algorithm that leverages entropy-based aggregation combined with model and gradient alignment to simultaneously optimize fairness and global model performance. The method presented in this paper employs a two-layer optimization framework and derives an analytical solution for the inner loop aggregation probabilities, making the optimization process highly computationally efficient. Furthermore, the paper introduces innovative alignment updates and adaptive strategies in the outer loop to further balance the performance of the global model and fairness. Additionally, the paper conducts a convergence analysis and numerous experiments to demonstrate the novelty and effectiveness of the proposed method. The content is very detailed, making it an excellent paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Please explain how the convergence rate range mentioned in lines 396 to 397 of the main text is derived."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a fair FL algorithm that addresses the underexplored challenge of improving performance fairness while enhancing global accuracy, with theoretical and empirical demonstrations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024entropybased,\ntitle={Entropy-Based Aggregation for Fair and Effective Federated Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yqST7JwsCt},\nnote={under review}\n}"
},
"abstract": {
"value": "Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy. Nonetheless, the heterogeneity of edge devices often leads to inconsistent performance of the globally trained models, resulting in unfair outcomes among users. Existing federated fairness algorithms strive to enhance fairness but often fall short in maintaining the overall performance of the global model, typically measured by the average accuracy across all clients. To address this issue, we propose a novel algorithm that leverages entropy-based aggregation combined with model and gradient alignments to simultaneously optimize fairness and global model performance. Our method employs a bi-level optimization framework, where we derive an analytic solution to the aggregation probability in the inner loop, making the optimization process computationally efficient. Additionally, we introduce an innovative alignment update and an adaptive strategy in the outer loop to further balance global model's performance and fairness. Theoretical analysis indicates that our approach guarantees convergence even in non-convex FL settings and demonstrates significant fairness improvements in generalized regression and strongly convex models. Empirically, our approach surpasses state-of-the-art federated fairness algorithms, ensuring consistent performance among clients while improving the overall performance of the global model."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Fairness",
"Heterogeneous Federated Learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3c1ca9107e4bfb7c5130f11960c51569af8d6673.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5ac2b70db31c1f2f091b820f2b036217c5cee9b1.zip"
},
"title": {
"value": "Entropy-Based Aggregation for Fair and Effective Federated Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yqaN7MfkFU | Regularized Maximum Mean Discrepancy for Variable Selection | main | Active | Variable selection;Maximum mean discrepancy;Two-sample tests;Binary classification | other topics in machine learning (i.e., none of the above) | 3;3;5;6;6 | 2;4;4;3;3 | 3;2;2;2;3 | 2;2;3;2;3 | 3;2;3;3;3 | 4.6 | 3.2 | 2.4 | 2.4 | 2.8 | 0.078811 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- It seems that the first equation in sec 2.4 is obtained from (3) and the fact that the sum of omega is always a constant. In that case, (3) is equivalent to -MMD + \\sum (omega - any constant)^2, not just one?\n- There are multiple hyperparameters in the variable selection algorithms. How sensitive is the performance to the hyperparameters?\n- Is the accelerated algorithm for computing the estimated weights similar to applying gradient descent to equation (3)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is clearly motivated.\n- The method is non-parametric and model-free, allowing a broad range of applications.\n- Experimental results show sizable improvement over the original MMD framework"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a variable selection scheme based on the maximum mean discrepany (MMD). The authors propose two variable selection algorithms, which are based on a regularized MMD objective, to assign weights to important variables and discard the unimportant ones. An accelerated scheme to compute the estimated weights is also presented. Theoretical results on the consistency of the estimated weights and the convergence of the accelerated scheme are provided. Experiments demonstrate the competitiveness of the proposed method to comparing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The literature review could benefit from a broader references. For instance, other parameter selection schemes, other variants of MMD objective etc.\n- The presentation in sec 2.4 is somewhat unclear. Equation (4) and (5) have an intricate structure. Readers could benefit from a clearer presentation/explanation.\n- In the experiments, only two standard comparing methods are considered. The experimental results could be strengthened by including more advanced baselines.\n- In the variable selection algorithms, randomness is involved. E.g., random subsets are chosen to compute the MMD value. It is generally recommended to report the mean and standard error over multiple random trials."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* The optimisation problem formulated in (4) does not seem to encourage sparse solutions. I think the weight vector obtained from (4) on a limited training set is generally non-sparse. How does this translate to a variable selection? Is there a thresholding procedure on the weight vector. If so, such procedure should be detailed in the article.\n\n* Could the authors provide some approximation guarantees for the accelerated algorithm, or/and some generalization guarantee for the original method or its accelerated version?\n\n* Why is there no error-bar in the experimental results? And why not use all the samples in the gene expression data set GSE2034 to obtain a large test set which can reduce the variance of the empirical performance, therefore allowing for a more reliable comparison?\n\n* In Lines 262-264, it says \"the training set, is used to compute the optimal weight vector $\\hat w_{\\hat\\lambda}$ by optimizing the tuning parameter $\\hat\\lambda$\", meanwhile in Lines 280-282 of Algorithm 1, it seems that $\\hat\\lambda$ is tuned in a cross validation manner by minimizing the loss $\\ell(\\lambda)$ on a test set. Could the authors clarity whether or not the tuning of $\\hat\\lambda$ involves exclusively the training set?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* Proposition of a new method, presented along with theoretical and empirical results.\n\n* The writing is clear and the paper is not hard to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The proposed method finds the importance levels of variables by optimizing a weighted version of maximum mean discrepancy (MMD). The objective function to minimize is negative weighted MMD plus a regularization term that imposes all variable weights to be $1$ when its parameter $\\lambda$ is set to be infinitely large. The regularization parameter $\\lambda$ is chosen through optimizing the performance of the task - two-sample test or classification. An approximate version of this method is presented to reduce the computational cost, obtained through a first-order Taylor expansion of the objective function. Consistency and convergence results are given for the solution of the approximate method. Experiments on two synthetic data sets and one gene expression data set show the advantage of the proposed method over the unweighted MMD for two-sample test, and the variable selection method by Lasso for classification."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The theoretical results only concern the consistency and the convergence of the approximate algorithm. There is no generalization or approximation guarantee.\n\n* This work can benefit from a more extensive empirical study. For two-sample test, only one baseline is compared, and the experiments were exclusively conducted on synthetic data. For classification, two baselines are tested, on synthetic data and one real-word data set. Moreover, there is no error-bar in the reported empirical results.\n\n* Most crucially, while the proposed method is claimed to be a method of variable selection, there is little comparison to other variable selection methods, expect some empirical results o compare the false discovery rate and the classification performance with the Lasso method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I am curious that whether it is possible to provide a theoretical guarantee that the optimal weight is beneficial for two-sample test or binary classification?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The presentation of this paper is in general good, clearly stating the background and their intuition. This paper also provides comprehensive results regarding their proposed methods, including both theory for optimization and numerical simulations. Also the idea of using weighted MMD for model selection is novel."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the problem of model selection when testing whether two distributions are identical. Traditionally, people calculate the Maximum Mean Discrepancy (MMD) between two distributions, and then perform test using MMD, namely reject the hypothesis that two distributions are identical if MMD exceeds a critical value. The authors observe that, unimportant variables may even hurt the performance of this MMD test. Therefore they proposed the weighted MMD, namely assign a weight to each variable. The optimal weights are chosen so that the weighted MMD is maximized. Variable selection can be done using these weights, facilitating the downstream two-sample test or binary classification. The authors also provide an optimization method with theoretical guarantee. Numerical experiments show the efficiency of the proposed algorithm."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is that, although the authors claim that weighted MMD is good for selecting variables, they do not provide theoretical guarantees that the optimal weight is beneficial for two-sample test or binary classification. Therefore the proposed method seems to be a little bit ungrounded."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Why is the ridge penalty used instead of an $L_1$ penalty? Is there any theoretical justification for this choice?\n\n- Can this workflow be extended to more general settings, such as regression with HSIC?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Overall, this paper is clearly written.\n\n- The paper presents an interesting approach to nonlinear variable selection within the MMD framework and develops a faster variant for its implementation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a variable selection method using maximum mean discrepancy (MMD) under a binary-classification setting. The approach assigns and optimizes weights for each variable within a regularized MMD framework, setting some weights to zero for unimportant variables. These optimized weights act as an importance measure for identifying variables contributing to distributional differences. Simulations and real-data analysis demonstrates the empirical merit of this paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The term “Regularized MMD” in the title is somewhat ambiguous and does not fully capture the main idea of this paper.\n- In [1], the simplex constraint yields a sparse estimator under the least squares setting, but it remains unclear if the solution to (2) demonstrates similar properties.\n- The properties of $w_\\lambda^*$ are not sufficiently investigated.\n- Table 1: The statistical properties of both the original and accelerated methods should be included.\n- The selection of benchmarked methods is limited [2-3]; there are numerous variants of (nonlinear) Lasso that could be considered.\n- Type-I error studies should be integrated into the main text.\n- An ablation study would help clarify whether using the ridge penalty is beneficial.\n\n### Reference\n\n- [1] Meinshausen, Nicolai. \"Sign-constrained least squares estimation for high-dimensional regression.\" (2013): 1607-1631.\n\n- [2] Ravikumar, Pradeep, et al. \"Sparse additive models.\" Journal of the Royal Statistical Society Series B: Statistical Methodology 71.5 (2009): 1009-1030.\n\n- [3] Yamada, Makoto, et al. \"High-dimensional feature selection by feature-wise kernelized lasso.\" Neural computation 26.1 (2014): 185-207."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The overall procedure is sound. As far as I have checked, the proof is generally correct with detailed assumptions stated. Numerical study convince the soundness of the algorithm design."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper utilized the weighted norm with maximum mean discrepancy to solve the variable selection problem for two-sample testing, which is a very important problem in statistics and machine learning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the overall procedure seems to be sound, I have serious concerns regarding its theoretical foundation.\n1. The authors proposed to solve Eq. (3), which is the objective combining empirical MMD statistics and $\\ell_2$ regularization. Recall for least squares problem, $\\ell_1$ regularization (lasso) helps with variable selection while $\\ell_2$ does not. I believe the similar result applies here. So why does the author prefer $\\ell_2$ instead of $\\ell_1$ regularization?\n2. For the evaluation of $\\ell(\\lambda)$ in Eq.(4) or Eq. (5), it involves the sample estimate of certain objective. So which type of data is utilized? The training data $\\mathfrak{X}^{Tr}$ or testing data $\\mathfrak{X}^{Te}$, or we need to perform training-validation split on $\\mathfrak{X}^{Tr}$ and then utilize train-train data?\n3. In the Algorithm the authors proposed to solve the optimization problem (3), which is unfortunately a non-convx problem in weight $\\textbf{w}$. Then the authors resort to solve approximation optimization problem (6). What is the optimality gap between these? It is difficult for me to convince the effectiveness of this approximation algorithm.\n4. Similarly, only the convergence analysis for solving the approximation optimization problem (6) is provided, but I cannot see the testing statistical power analysis when resorting to this approximation algorithm\n5. Only the real data instead of the implementation code is provided for this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose variables selection method based on the maximum mean discrepancy (MMD), which can effectively screen important variables that cause differences in distributions between two samples."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024regularized,\ntitle={Regularized Maximum Mean Discrepancy for Variable Selection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yqaN7MfkFU},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we propose a variable selection method based on maximum mean discrepancy (MMD) to effectively identify important variables that contribute to distributional differences between two samples. We begin by assigning weights to each variable and then optimizing these weights within a regularized MMD framework. The optimized weights serve as an importance measure for each variable and can be leveraged for variable selection. Additionally, using the optimized weights, we design two algorithms aimed at enhancing test power and improving classification accuracy for two-sample tests and classification problems. Our method is model-free and makes no assumptions about the underlying structure of the data. Moreover, we propose an acceleration method to improve computational efficiency.\nWe also provide theoretical guarantees, including the consistency of the estimated weights and the convergence of our acceleration algorithms. Through numerical simulations and real-world datasets, we validate the effectiveness of the proposed method."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Variable selection",
"Maximum mean discrepancy",
"Two-sample tests",
"Binary classification"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/597d767452260fe5d3ccd77fe4bc41ea6efdfd86.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/3f0111c32ead9fa1a596c080fde94c56f5b50377.zip"
},
"title": {
"value": "Regularized Maximum Mean Discrepancy for Variable Selection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yr0l1IoyzV | A GPU-accelerated Large-scale Simulator for Transportation System Optimization Benchmarking | main | Active | microscopic traffic simulator;transportation system optimization;GPU acceleration | infrastructure, software libraries, hardware, systems, etc. | 5;5;5;6 | 4;2;3;4 | 2;2;3;3 | 2;2;2;3 | 3;2;3;4 | 5.25 | 3.25 | 2.5 | 2.25 | 3 | 0.522233 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you for your review.\nI will list the responses to your questions as follows:\n\n**To Weakness:**\nOverall, the two key designs presented in Section 3.1 are major contributors to improving efficiency and performance.\nIn other words, both designs are indispensable. They are therefore of equal importance and should be considered as a whole.\nAccording to the idea of the paper, GPUs are the hardware base for enabling computational acceleration, and the task of software system design is to best adapt to the hardware.\nWe identify two major difficulties (read/write conflict & vehicle sensing indexes) in the adaptation process and provide solutions (two-phase parallel process & linked-list based vehicle sensing indexes) that ultimately allow the GPU's performance to be fully utilized for acceleration.\n\n**To Question 1:**\nThe answer is YES.\nOur simulator supports user-configurable inputs of kinematic and IDM model parameters for each vehicle, including maximum acceleration, general acceleration, maximum braking acceleration, general braking acceleration, headway, and so on.\nUsers can build their own vehicle input data that they want to match the desired scenario, and we provide a tool chain (mosstool mentioned in Appendix A) to support such needs.\n\n**To Question 2:**\nIt is possible but not trivial.\nThe main obstacle is the knowledge of CUDA and the complicated build process of C++.\nActually, for an experienced developers, they can easily change the model by modifying the following function - `IDMCarFollowAcc(const Person& p, float target_speed, float ahead_speed, float distance, float headway)` (https://anonymous.4open.science/r/moss-AF45/src/entity/person/vehicle.cu: line 362) to do it.\nAttracting more community collaboration to improve the system was also one of our goals in opening the source code and submitting our work to ICLR."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Response to Reviewer oXrd"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": {
"value": "Thank you very much for taking your valuable time to review our paper.\n\nOur work, a large-scale microscopic traffic simulator, serves the fundamentals of transportation system optimization to support typical transportation system optimization scenarios.\nWe believe that the efficiency of urban transportation systems is closely related to each individual, and that learning-based AI technologies have great potential to optimize the efficiency of transportation systems.\nHowever, as you say, there are fewer researchers focusing on this area right now.\nWe believe this is because there is a lack of large-scale simulators and benchmarking code that can support a larger number of scenarios, leaving researchers unmotivated to investigate the application of learning methods to these problems.\nThat is our aim in accomplishing such a non-incremental, pioneering and difficult work.\n\nMoreover, memory usage analysis will be added to the supplement as part of the performance evaluation."
},
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Response to Reviewer 6yoX"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This work is a wonderful contribution for transportation system optimization, and I strongly believe many researchers can take advantage of this platform. The manuscript is very well structured and written well. The authors covers a comprehensive review of the existing simulations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a fast simulator for transportation system optimization.\nThe main contributions of this paper is a high-performance traffic flow simulation environment and the implementation of 5 transportation optimization problems. The new proposed simulation environment is GPU-accelerated and based on SIMD and programmed in CUDA. The performance of the proposed simulator is compared against some of the existing ones and has demonstrated superior performance on runtime. Different benchmark algorithms are demonstrated on the new simulator."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The scope of this work is too narrow, since there will only be a small subset of researchers that will use this in the learning community. \n\nIt will be nice to include an analysis of memory usage if possible."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please see the limitation and clarify the technical contribution of the paper better. Some additional questions include:\n1.Does the simulator allow the researcher to add uncertainty to the vehicle model or the control of traffic objects in order to validate the robustness of a method, and allow researchers to compare algorithms that reflect the simulation to reality gap?\n2. Will it be easy (or possible) to make the car-following model selectable (instead of sticking to IDM)?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The proposed simulator is more efficient, realistic and capable compared to other candidates (sumo, cityflow, etc.). The proposed simulator is shown to be much faster than\n\n2. The work has practical and application value, it can facilitate research and application in the relevant areas of traffic & transportation system. It is more versatile such that more component (traffic objects) are enabled in the simulator to be controllable. The authors also provide a few predefined evaluation metric APIs in the simulator to facilitate usage."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors present a high-performance simulator for traffic system simulation and optimization, as well as the benchmark resulting for five scenarios with the simulator. The author clearly stresses the weakness of trending traffic system simulators and show through comparison that their proposed simulator can overcome the issues and outperform the SOTA one by running frequency and simulation realism. Overall, the presentation of the work is also clear and comprehensive, and tis work is good in its realm that from the comparison it does show the superiority in many aspects compared with existing simulators in the literature."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In Section 3, the authors described the design of each component of the simulator, however, it is now clear, which component is the main contribution of the design that improves the efficiency and performance of the traffic simulator."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide more justification for the assertion that vehicle simulations are “highly homogeneous” and compatible with the SIMD computational model? Specifically, how are dependencies between vehicles managed in congested scenarios?\n\n2. Why does the simulator's performance appear to decline with scenarios of over 1,000 vehicles, as seen in the comparison with CityFlow? How does this relate to the scalability claims made in the paper?\n\n3. Are there plans to include more detailed explanations of the simulation scenarios and the metrics used to evaluate them? This information would aid in validating the results for scenarios like congestion pricing and dynamic lane assignment.\n\n4. Could the authors comment on the possibility of incorporating advanced driving models such as those used in CityFlowER, which have been shown to improve the realism of driving behavior?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "## Originality\n- The simulator is GPU-accelerated, which is helpful for traffic system simulation (while some GPU simulators are available for autonomous driving).\n\n## Quality\n- The paper provides benchmarks on various transportation optimization tasks, demonstrating practical applicability.\n\n## Clarity\n- The paper is well-organized and presents a clear problem statement, followed by the design choices and technical solutions implemented. \n\n## Significance\n- By supporting large-scale simulations at high speed, this simulator could enable faster and more frequent experimentation with AI-based optimization methods, including reinforcement learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a GPU-accelerated large-scale microscopic traffic simulator aimed at supporting optimization tasks within transportation systems. By leveraging GPU-based parallel computation, the simulator achieves high performance, simulating up to millions of vehicles at a significantly accelerated rate compared to traditional CPU-based simulators like CityFlow and CBLab. The simulator supports various transportation optimization scenarios, including traffic signal control, lane assignment, tidal lane control, congestion pricing, and road planning. The authors provide benchmark results across multiple cities to demonstrate the applicability and robustness of the simulator for different optimization algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper lacks a comprehensive discussion of some relevant prior works. For example, recent works such as GPUDrive by Kazemkhani et al. (2024) and CityFlowER by Da et al. (2024) could provide essential insights and serve as useful points of comparison for evaluating the novelty and realism of the proposed simulator. For example, [1,3,4] are also a GPU simulators, are the techniques used for GPU acceleration in their papers the same as this paper? Is CityFlowER also as realistic as the proposed simulator?\n\n[1] Kazemkhani, Saman, et al. \"GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS.\" arXiv preprint arXiv:2408.01584 (2024). \n\n[2] Da, Longchao, et al. \"CityFlowER: An Efficient and Realistic Traffic Simulator with Embedded Machine Learning Models.\" Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer Nature Switzerland, 2024.\n\n[3] Saprykin, Aleksandr, Ndaona Chokani, and Reza S. Abhari. \"GEMSim: A GPU-accelerated multi-modal mobility simulator for large-scale scenarios.\" Simulation Modelling Practice and Theory 94 (2019): 199-214.\n\n[4] Jiang, Xuan, et al. \"Large scale multi-GPU based parallel traffic simulation for accelerated traffic assignment and propagation.\" Transportation Research Part C: Emerging Technologies 169 (2024): 104873.\n\n- The authors claim that individual vehicle simulation is compatible with the SIMD model, arguing that the vehicles are homogeneous and can be simulated in parallel. However, this assertion could be problematic as vehicle interactions, particularly in congestion, often exhibit dependencies on nearby vehicles. The paper would benefit from a clearer explanation of how dependencies are managed in parallel to ensure realism without sacrificing computational performance. \n\n- In the Simulator Realism experiment, the paper mentions that the simulator achieves more realistic speeds compared to CityFlow. 
Given that CityFlowER now incorporates more realistic driving models, it would be valuable to benchmark the proposed simulator against it to provide a more comprehensive evaluation.\n\n- When demonstrating scalability at large vehicle counts, the reported performance of CBLab appears to contradict with its original paper when simulating scenarios with over 1,000 vehicles, with CityFlow performing comparably or even outperforming in such instances. This discrepancy between claims and results needs further explanation on why CBLab is generally worse than CityFlow - is this because of your data is different from the CBLab? Can you provide more details about their experimental setup, including the specific datasets used and any differences from the original CBLab experiments? If so, can you use the similar dataset used in CBLab? It would also be helpful to request a detailed comparison of their experimental setup with that of the original CBLab paper, including specifics on hardware configurations and simulation parameters.\n\n- The absence of scenario data within the provided code limits the reproducibility of results, particularly for scenarios such as traffic signal optimization and congestion pricing. Including this data would enhance transparency and reproducibility."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "It is desirable to clarify the data sources used in the simulation.\n\n- Privacy, Security, and Safety: The data used in the simulation includes urban traffic data, necessitating considerations for privacy and security.\n\n- Potentially Harmful Insights, Methodologies, and Applications: The simulation results could have adverse effects if misused for real policy decisions, potentially disadvantaging specific regions or demographics. Ethical considerations are needed.\n\n- Responsible Research Practice: For research involving urban data, it is essential to ensure transparency in legal/ethical approval procedures and the data collection and processing process."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety",
"Yes, Potentially harmful insights, methodologies and applications",
"Yes, Responsible research practice (e.g., human subjects, data release)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. Given the numerous experimental results presented, I felt that there was a lack of motivation for their inclusion. Could you provide additional explanations regarding the analysis and significance of these results?\n\nQ2. There seem to be similar commercial services available (e.g., Aimsun). How does this work differentiate itself? There are also simulation games related to urban simulation, such as Cities: Skylines, which are known for their well-designed public transport systems, and the recent sequel has drawn significant attention. What are the commonalities and differences compared to such games? They support user-customized plugins for logging and debugging.\n\nQ3. Could you explain the technical background that led to the design of the simulator's GPU-accelerated architecture, and how this design evolved from previous research efforts in the field?\n\nQ4. Since the target is a simulator that is intended to be used alongside AI/ML-based technologies, which also use GPUs, resource constraints are expected if both the simulator and AI/ML technologies are to be used on servers that are not high-performance. Did you encounter this issue, and do you think it would be a significant problem?\n\n** Additional Comments\n\nThere are inaccuracies in references.\n\n- Unable to verify the original document. Only the citation is found. Please verify or update: HS Mahmassani. Dynamic traffic assignment and simulation for advanced network informatics (dynasmart). In the 2nd International Seminar on Urban Traffic Networks, 1992.\n\n- The cited arXiv document version has been updated. Please verify or update: Alexander I Cowen-Rivers, Wenlong Lyu, Zhi Wang, Rasul Tutunov, Hao Jianye, Jun Wang, and Haitham Bou Ammar. Hebo: Heteroscedastic evolutionary bayesian optimisation. arXiv preprint arXiv:2012.03826, pp. 7, 2020.\n\n- Duplicate reference entry. Please correct:\nQiang Wu, Ming Li, Jun Shen, Linyuan Lü, Bo Du, and Kecheng Zhang. 
Transformerlight: A novel sequence modeling based traffic signaling mechanism via gated transformer. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023a. URL: https://api.semanticscholar.org/CorpusID:260499801.\nQiang Wu, Mingyuan Li, Jun Shen, Linyuan Lü, Bo Du, and Ke Zhang. Transformerlight: A novel sequence modeling based traffic signaling mechanism via gated transformer. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2639–2647, 2023b."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Originality: The parallel design utilizing GPU features and the introduction of a linked-list-based vehicle detection index are promising approaches to overcoming the computational limitations of existing CPU-based traffic simulators.\n\nQuality: The paper demonstrates a significant technical achievement by simulating over 2.4 million vehicles with GPU acceleration, achieving substantial performance improvement compared to CPU implementations. The system design and GPU usage are well-documented.\n\nClarity: The explanation of the system architecture is systematic.\n\nSignificance: To address urban traffic optimization, the authors propose an integrated simulation for five scenarios that were previously dealt with individually."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a microscopic simulator for large-scale urban traffic systems utilizing GPU acceleration. To overcome the computational limitations of existing simulators, the authors introduce GPU-based parallel processing, a two-phase parallel process, and a linked-list-based vehicle detection index. Through experiments simulating over 2.4 million vehicles, the proposed system achieved an approximately 88.92x performance improvement compared to existing systems. Furthermore, the authors present a framework that aims to integratively support five major traffic optimization scenarios: Traffic Signal Control, Dynamic Lane Assignment, Tidal Lane Control, Congestion Pricing, and Road Planning.\n\n- Soundness (Score: 2 - Fair): The design and performance improvement of the GPU-accelerated simulator proposed in this research are technically convincing, and the experimental methodology is systematic. However, the simplifications, such as excluding intersection interactions and various traffic participants (e.g., pedestrians, bicycles, public transport), may significantly affect the realism of the simulation, and these aspects were not clearly validated. It is unclear whether prior research justified these exclusions. Although extensive benchmarking experiments were conducted, the lack of detailed analysis and interpretation of the results leaves the motivations behind the scenarios insufficiently explained.\n\n\n- Presentation (Score: 3 - Good): The overall structure and explanations in the paper are clearly presented, particularly regarding system design and the use of GPU acceleration. However, the validation of each scenario within the same system is not clearly articulated. 
Additionally, it would be beneficial to provide a clear comparison of the proposed system architecture with existing studies, explaining the structural advancements and reasons for adopting this specific architecture.\n\n\n- Contribution (Score: 2 - Fair): The proposed performance improvements through GPU acceleration and the integrated traffic simulator framework are meaningful contributions to the field. However, the lack of individual effectiveness analysis for the five scenarios and insufficient validation of the simulator's realism are notable shortcomings. The experiments with real-world data are limited, which poses a constraint on the practical contribution aspect."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Scenario Validation and Motivation: It would be helpful to explain how the results from the five proposed scenarios provide insights for actual urban traffic planning or policy decisions. If empirical validation showing that similar results can be derived from simulating real-world policy changes were included, it would strengthen the motivation for policymakers to use the simulator. As a policymaker, I would want to validate planned changes beforehand with such a simulator, but the current motivation seems insufficient compared to the well-executed system implementation.\n\nLack of Realism: The simplifications, such as excluding intersection overlap, pedestrians, bicycles, and public transport, lack a clear basis for their effect on simulation outcomes. An analysis of the impact of these exclusions is necessary, including evidence from prior research or justifications for why their impact is minimal.\n\nInsufficient Long-Term Scenario Evaluation: There is a lack of experiments evaluating how the proposed simulator functions under long-term urban traffic pattern changes. For example, reproducing outcomes of real-world urban policy changes using the simulator would help demonstrate its reliability.\n\nCode Reproducibility Issues: Although the code was provided, the step-by-step explanations in the README.md were insufficient for successful reproduction. Specifically, two of the three provided source codes only included installation instructions without guidance on how to build or run scenarios. For the benchmark source code, even after locating it through references due to a missing GitHub link, the target version was not specified, preventing access to the necessary dataset for simulation. More detailed code instructions and thorough guidance on code operation are required, highlighting the lack of user-friendly considerations."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "The first open-source GPU-accelerated large-scale microscopic simulator for transportation system simulation and optimization."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A {GPU}-accelerated Large-scale Simulator for Transportation System Optimization Benchmarking},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yr0l1IoyzV},\nnote={under review}\n}"
},
"abstract": {
"value": "With the development of artificial intelligence techniques, transportation system optimization is evolving from traditional methods relying on expert experience to simulation and learning-based decision and optimization methods.\nLearning-based optimization methods require extensive interactions with highly realistic microscopic traffic simulators.\nHowever, existing microscopic traffic simulators are inefficient in large-scale scenarios and thus fail to support the adoption of these methods in large-scale transportation system optimization scenarios.\nIn addition, the optimization scenarios supported by existing simulators are limited, mainly focusing on the traffic signal control.\nTo address these challenges, we propose the first open-source GPU-accelerated large-scale microscopic simulator for transportation system simulation and optimization.\nThe simulator can iterate at 84.09Hz, which achieves 88.92 times computational acceleration in the large-scale scenario with 2,464,950 vehicles compared to the best baseline CityFlow.\nBesides, it achieves a more realistic average road speeds simulated on real datasets by adopting the IDM model as the car-following model and the randomized MOBIL model as the lane-changing model.\nBased on it, we implement a set of microscopic and macroscopic controllable objects and metrics provided by Python API to support typical transportation system optimization scenarios including traffic signal control, dynamic lane assignment within junctions, tidal lane control, congestion pricing, road planning, e.t.c.\nWe choose five representative transportation system optimization scenarios and benchmark classical rule-based algorithms, reinforcement learning algorithms, and black-box optimization algorithms in four cities.\nThese experiments effectively demonstrate the usability of the simulator for large-scale traffic system optimization.\nThe anonymous code of the simulator is available at 
https://anonymous.4open.science/r/moss-AF45 and the others are shown at Appendix A.\nIn addition, we build an open-registration web platform to support no-code trials."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"microscopic traffic simulator",
"transportation system optimization",
"GPU acceleration"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5bc9baddd837ab181380a722a60ec56aff52a7bf.pdf"
},
"presentation": null,
"primary_area": {
"value": "infrastructure, software libraries, hardware, systems, etc."
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A GPU-accelerated Large-scale Simulator for Transportation System Optimization Benchmarking"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yr7PjzmkQ6 | On the utility of Equivariance and Symmetry Breaking in Deep learning architectures on point clouds | main | Active | deep learning architectures;geometric deep learning;equivariance;group convolutional networks;generative modeling | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;6;6 | 2;4;3;3 | 2;3;2;3 | 2;3;2;3 | 2;3;3;3 | 5.5 | 3 | 2.5 | 2.5 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Providing a more detailed discussion on how the proposed hypotheses have also been investigated in prior works would improve the completeness of this work.\n- Why is it reasonable to expect that the experimental observations generalize across different model architectures?\n- Including error bars and variances of the reported numbers, given the randomness of the model's initialization and training, would allow for easier verification of the significance of the reported observations."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This work provides a detailed description of the most commonly used equivariant architectures for point-cloud processing.\n- Additionally, the clear definition of the hypotheses that the authors aim to investigate allows for the explicit design of experiments that can be used to test them, facilitating the reader's understanding of the conclusions drawn from the experimental observations.\n- The extensive experimental evaluation across diverse point-cloud tasks provides convincing evidence of the effect of difference levels of equivariance on the proposed Rapidash model, showing the influence of the dataset size and task complexity on these effects."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work investigates the utility of imposing equivariant constraints in models that perform point cloud processing. To study the effects of the application of different equivariant constraints, the authors propose a scalable architectural desing that allows for easily incorporating varying degrees of equivariance. They evaluate this architecture on different point cloud processing tasks, incorporating different levels of equivariance or symmetry-breaking factors that break the network's symmetry constraint. Experimental results are used to accept or reject a set of hypotheses regarding the effects of equivariant constraints on model performance and generalization. Specifically, this work analyzes the effects of equivariance when modifying the size of the model, the size of the dataset, the complexity of the task, or when allowing symmetry breaking by providing pose dependent information (that is not available in the typical equivariance case)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- This work lacks sufficient attribution to prior work that has investigated hypotheses similar to the ones tested here. For example, hypothesis 5, on the effect of symmetry breaking in equivariant neural networks, has already been studied in prior works, such as :\n\n\t[1] Marc Finzi, Gregory Benton, Andrew Gordon Wilson \"Residual Pathway Priors for Soft Equivariance Constraints\"\n\n\t[2] Mircea Petrache, Shubhendu Trivedi, \"Approximation-Generalization Trade-offs under (Approximate) Group Equivariance\"\n\n\t[3] Stefanos Pertigkiozoglou, Evangelos Chatzipantazis, Shubhendu Trivedi, Kostas Daniilidis , \"Improving Equivariant Model Training via Constraint Relaxation\"\n- The experimental evaluation is limited to examining the effects of the equivariant constraints on a single architecture. While it is shown that these effects generalize across tasks it is not clear how they generalize across different model architectures. \n- In the experimental results, no error bars are provided. As a result, it is hard to assess the significance of the observations that are used to accept or reject the various hypotheses."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see Weaknesses"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Novel Research Problem: The paper presents several hypotheses regarding the impact of equivariant vs. non-equivariant architectures, particularly on tasks of varying complexity. This is a novel and relevant problem, providing new insights into geometric deep learning for point cloud data.\nComprehensive Experimental Design: The authors conduct extensive experiments across multiple datasets, thoroughly evaluating the performance of different architectures, thus providing strong support for the effectiveness of equivariance.\nModular and Scalable Architecture: The proposed Rapidash architecture is extensible, supporting various forms of equivariance, and provides an efficient platform for testing, which is beneficial for future research.\nClear and Substantiated Conclusions: The experiments confirm the advantages of equivariance in geometrically complex tasks and quantify the impact on different data scales, offering practical guidance for model selection and design."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Summary\nThis paper examines the role of equivariance and symmetry breaking in deep learning architectures for point clouds, focusing on the influence of additional input information and SE(3) equivariance on model performance. Through a series of experiments on various tasks and datasets (e.g., Shapenet 3D, QM9, and CMU Motion Capture), the authors compare the effects of equivariant and non-equivariant layers, exploring the advantages of equivariance as task complexity increases. Results show that equivariance offers significant benefits for small datasets and geometrically complex tasks, though this advantage diminishes in large-scale data regimes.\n\nStrengths\n\nWeaknesses"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Limited Discussion on Symmetry Breaking: Although the paper addresses symmetry breaking by incorporating global coordinates, it lacks a detailed analysis of how this affects performance across different tasks. Additional theoretical insights or quantitative results would strengthen this discussion.\nInsufficient Evaluation of Model Complexity and Computation Costs: While Rapidash shows promise for high-complexity tasks, the paper provides limited discussion on computational costs and scalability in practical applications, which may impact its usability for large-scale deployment.\nLimited Benchmark Comparisons: While some baseline methods are included, the paper does not cover all recent benchmarks. Broader comparisons with state-of-the-art equivariant and non-equivariant methods would enhance the persuasiveness of the findings.\nLack of Practical Application Discussion: Although theoretically relevant, the paper does not discuss the potential impact of equivariance and symmetry breaking on real-world point cloud applications (e.g., 3D reconstruction or object detection), potentially limiting its practical value."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Some pilot studies (no need too much) should be conducted on real-world datasets, or at least some robustness point cloud datasets. For example, ScanObjectNN for point cloud classification, or ModelNet40-C for robustness point cloud recognition.\n2. Some \"solutions\" should also be provided and evaluated (some pilot studies are enough).\n\nIf these two questions can be addressed well, the reviewer may consider raising the rating."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Clear motivation and insightful key idea. \n2. The paper writing quality is very high, with explicit statements and clear logic links. \n3. The code is provided in the supplementary material. \n4. The analysis and theoretical conclusion may bring future insight to 3D point cloud learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the key properties of deep learning networks on 3D point clouds, especially focusing on SE(3) equivariance. It conduct extensive experiments to test the initial hypothesis of the trade-offs between flexibility and weight-sharing introduced by equivariant layers. Based on the experimental results, a scalable network called Rapidash is introduced to facilitate the comprehensive testing of the hypothesis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The major concern lies in the dataset. ShapeNet is a synthetic dataset. However, one important key challenge of modern 3D point cloud networks is their real-world performance on real-world data. Thus, whether the raised hypothesis can also be accepted in real-world 3D data needs to be explored. \n2. Although this paper verified the three important hypotheses, a **solution** derived from hypotheses such as some network design ideas or even *engineering tricks/techniques* (e.g., how to revise the current point convolution operation) should be further discussed and evaluated based on the extensive analysis of the paper, which will benefit the network design of future point cloud learning methods. \n3. The reviewer believes that some extra figure illustrations should be provided to better intuitively understand the proposed theoretical hypotheses.\n\n*Conclusion*: \nThe current version of the paper is theoretically important, but need more pilot studies to provide the **practical** values."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Detailed introduction to previous work.\n2. The relationship between task complexity and equivariance was studied, and it was found that the equivariant method has more obvious advantages in tasks that require strict equivariance. It is demonstrated that providing explicit geometric information can improve performance even in the case of symmetry breaking."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the impact of additional input information and SE(3) equivariance on the performance of models processing point cloud data. They present a series of hypotheses to study different aspects of equivariant neural networks and symmetry breaking. Extensive experiments have been conducted on segmentation, regression, and generation tasks to verify the applicability and superiority of equivariant networks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.There are so many contents from previous works that I can hardly tell the novelty. From my point of view, only Formulas 8 and 9 are new, but it is still quite easy to prove.\n\n2.Although the paper proposes that symmetry breaking may be beneficial, it provides insufficient explanation of the mechanism behind it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the utility of Equivariance and Symmetry Breaking in Deep learning architectures on point clouds},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yr7PjzmkQ6},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper explores key factors influencing the performance of models working with 3D point clouds, focusing on the impact of additional input information and $SE(3)$ equivariance. It is often argued that providing more information as input improves a model's performance. However, if this additional information breaks certain properties, such as $SE(3)$ equivariance, does it remain beneficial? This work explores the trade-offs between flexibility and weight-sharing introduced by equivariant layers, assessing when equivariance boosts or detracts from performance. We identify the key aspects of equivariant and non-equivariant architectures that drive success in different tasks by benchmarking them on segmentation, regression, and generation tasks across multiple datasets with increasing complexity. We observe a positive impact of equivariance, which becomes more pronounced with increasing task complexity, even when strict equivariance is not required."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"deep learning architectures",
"geometric deep learning",
"equivariance",
"group convolutional networks",
"generative modeling"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/1ea67ea0a3e0e44ef2c93150c65d8a73ba8ec60a.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/474f83412866e773da1d411b17c05d1f9650440a.zip"
},
"title": {
"value": "On the utility of Equivariance and Symmetry Breaking in Deep learning architectures on point clouds"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yrf5RmaHfG | JuxtAlign: A Foundational Analysis on Alignment of Certified Reinforcement Learning | main | Active | alignment;juxtaposition;reinforcement learning | alignment, fairness, safety, privacy, and societal considerations | 3;5;5 | 3;3;3 | 2;2;3 | 2;3;2 | 1;3;2 | 4.333333 | 3 | 2.333333 | 2.333333 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Is there some relation that can be drawn between this phenomenon and policy churn [1]? Perhaps this provides a different avenue to motivate this work.\n- Do sections 4.1 and 4.2 make different points? They have different titles (randomized vs misaligned) but the claims seem the same?\n\n[1] Schaul, Tom, et al. \"The phenomenon of policy churn.\" Advances in Neural Information Processing Systems 35 (2022): 2537-2549."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is fairly clear - the existence proof is very detailed and easy to follow. Certain areas like Figure captions need some improvement\n- The authors provide an original contribution - the misalignment of sub-optimal Q-values in adversarial training has not been observed before\n- Experiments are convincing, at least of the existence of mis-aligned Q-values with adversarial training. The construction of the experiments (i.e. using the second best action or worst action) is intuitive and well-explained/motivated"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors provide a theoretical and experimental of analysis of certified robust training of Q-values for reinforcement learning. Certified robost training is a method for adversarially training neural networks such that they are robust to small perturbations in the input values. In the case of reinforcement learning in particular, this is implemented as a regularizer added to the standard temporal difference loss, where the regularizer penalizes Q functions for which a perturbation in the state $s$ can result in a change in the action that produces the _maximum_ Q-value. The authors provide an existence proof that this style of training can produce misalignment amongst the sub-optimal Q-values, which they claim is a departure from natural intelligence which is able to properly order counterfactual actions. They demonstrate this phenomenon experimentally in several games in the Arcade learning environment, by showing that the performance drop incurred when selected the second best action some percentage of the time, instead of the optimal action, is much higher for adversarially trained RL than for vanilla RL. Additionally, they show that selecting the worst action some percentage of the time leads to a larger performance drop for vanilla RL than adversarial RL, again indicating that vanilla RL produces a better ordering over sub-optimal Q-values."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major Issues:\n- Overall, the main claim of the paper could be better motivated with a better argument for why it matters if sub-optimal Q-values are misaligned, particularly for a method that is specifically designed to be robust to changes in the _optimal_ Q-values. The authors mainly motivate the importance of this by the divergence from natural intelligence however (a) it is not entirely clear why a departure from natural intelligence is necessarily a bad thing and (2) the claims that this is a departure from natural intelligence don't seem to be fully supported (see next point). The author's suggest that misaligned sub-optimal Q-values present a vulnerability which could perhaps be a good motivation - could you provide specific examples of how misaligned sub-optimal Q-values could be problematic or exploited in practical scenarios?\n\n- The paper specifically makes the claim that adversarially trained Q-values are not well-aligned with natural intelligence compared to vanilla RL Q-values. However, this claim seems too strong for the results that are actually presented in the paper. The author’s argument for this claim seems to be that previous work in neuroscience i.e. Fig 1 demonstrate that humans _can_ assign correct ordering to counterfactual actions in a particular decision making task. But do humans _always_ assign correct ordering? Are there any limitations to this ability? The authors demonstrate theoretically and empirically that there exists cases where adversarial training produces misaligned Q-values. However, it seems like to make a claim about natural intelligence alignment, the authors would have to actually test natural intelligence on the same tasks, particularly since the proof is an existence proof. Admittedly, I am not too familiar with the neuroscience literature so if the authors could provide more comprehensive evidence from neuroscience literature to support their claim, this would be helpful. 
Alternatively, I do not necessarily think the claim about alignment with natural intelligence is necessary, so the language could be toned down a bit. \n\n- The captions of several figures are not very informative and need to be guessed at by the reader. For example, in Fig 2 there is no description of what the bar chart is displaying, there are images from Atari games with no description of what the reader should be paying attention to and the brain scan similarly has no context. For the bar graph, it would be helpful to provide the environment details, details of how the Q-values were estimated and the source of the natural intelligence data. Similarly, Fig 3 has no mention of what each of the three panels is - the caption says Adversarial and Vanilla, but the superscript on all the x-axis Q-values is \"Adv\". Please explain what each of the three figures represent and make sure the axes are correct. The placement of Fig 3 is also odd, since it isn't mentioned in the text until Section 4.3 - I would either move it to that section or mention it in the text earlier. \n\nMinor issues:\n- The axes and text in the plots are much too small. Fig 6 is particularly bad.\n- It might be more intuitive to plot the performance drop as a negative value, so that the plots have the worse performing curve lower than the best performing curve\n- In the first paragraph of 4.1, $\\mathcal{P}_w$ is mentioned before it is defined\n- Table 2 is never mentioned in the text"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have listed questions within the points above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "CLARITY and QUALITY:\n1. The paper is generally well-written, presents a clear problem, and clear derivations.\n2. It help the reader to follow the main argument logic step by step and presents clear mathematical statements of the conclusions derived.\n\nSIGNIFICANCE and ORIGINALITY:\n1. The comparison with natural intelligence seems particularly interesting and likely novel.\n2. The problem treated is important and has already received significant attention in other communities (e.g., computer vision), but arguably less in RL so far. Hence I believe it is an important direction of investigation. \n3. Regarding the significance of the derived results I have doubts expressed via the points/questions below. Especially points (1) and (3) regarding related works and formal implications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper approaches critically and formally recent advances in certified RL. It shows how these novel RL schemes aiming to learn robust policies, actually induce policies that are misaligned with natural intelligence. Moreover, they perform a rigorous theoretical analysis of such misalignment thus formally proving failure cases of such RL schemes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "CLARITY and QUALITY:\n1. At points writing is not particularly sharp (e.g., the list of contributions could be significantly shorter and clearer). Often, diverse and not fully clear words are used to describe things (e.g., 'resilience' in line 137 - what does it formally mean?) or the terminology lacks cohesion (e.g., the word certified is not introduced in the introduction, where the related research line is introduced with alternative terminology). \n2. Example in pag. 4 might be hard to follow. I’d suggest to add a drawing with states/actions etc.\n3. The paper does not seem cohesive in terms of storyline. In particular, it seems it has two themes: (1) showing the relationship between natural decision-making and RL with and without certified methods, and (2) showing formally the limits of certified methods. Although both are relevant topics, I fail to see clearly the connection between the two. Maybe the authors have a clear picture in mind connecting the two themes, but currently the alternation between them renders the paper not cohesive and feels like reading two papers forced into one via unclear motivations and connections. This is related with point (2) below.\n\n\n\nORIGINALITY / SIGNIFICANCE / GENERAL:\n1. Related works mention adversarial training schemes in general (not specific to RL), then present recent works specific for RL with positive results and finally some with negative non-theoretical results towards claiming that this work is the first showing formally the limitations of these RL schemes. What I am missing is the following: since these algorithmic ideas are far more general than RL as they boil down to regularized optimization schemes (that have been for instance explored vastly in computer vision, NNs etc.), aren’t there works showcasing their fundamental limitations (formally) in general that can therefore trivially hold also for RL, where NNs are used to represent the policy?\n2. 
Even within the abstract the authors state ‘’This intrinsic gap between natural intelligence and the restrictions induced by certified training on the capabilities of artificial intelligence further demonstrates the need to rethink the approach…”. Why? Human/animals and machines clearly have different design spaces (i.e. have different limits and capabilities) so there is no reason to believe that machine intelligence has to be bound to schemes emerged in human/animal intelligence, which is arguably very limited and/or peculiar in terms of resources (e.g., sensors to minimize statistical complexity, and compute machinery). It seems to me that to evaluate (and compare) intelligence one should define measurable objectives rather than expecting certain specific behavior. \n3. I am having a hard time understanding the main message from Theorems 3.6 and 3.7. These seem portrayed as negative results as they show value function reordering for sub-optimal actions, but to the best of my understanding they seem also to claim that the value function properly identifies the optimal action, which seems to me what matters for achieving provably optimal policies in RL schemes (e.g., Q-learning). What am I missing (formally)?\n4. What is the point of plotting the defined Performance Drop rather than a classic RL performance measure? I understand why the Performance Drop would show results aligned with your thesis, but, as mentioned in the previous point, it seems not aligned with classic RL schemes optimality measures."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Minor comments \n1. Line 213 - typo\n2. I am bit confused about the depiction of the brain and video games in Figure 2. Why are they there?\n3. Def 4.2. The formulation is slightly confusing \"For any $\\tau >0$...\" in the begining of the sentence typically implies that the condition must be valid for all $\\tau > 0$ for the definition to hold. I think here the authors mean \"for some $\\tau >0$\", which is better to place closer to the end of the sentence."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper contains a theoretical foundation for the analysis\n2. The paper presents an experimental analysis of robustness and natural intelligence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "JuxtAlign studies alignment of certified RL using ideas of natural intelligence from neuroscience. The authors imply orthogonality of natural intelligence and adversarial training used often in robust RL. The paper contains a theoretical and an experimental analysis of the phenomena"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation can be improved as it was hard for me to fully assess the results. Perhaps I missed something, but I didn't get the main message.\n2. It’s well known that robustness, adversarial training are blunt tools in the sense that they try to avoid all possible outcomes as designed. For example, we can take a worst-case action within a ball of possible actions. If we re-design these training tools with a more constrained scope then the results will be different. Therefore, I don’t quite understand how we can make general conclusions on certified training vs natural intelligence from this study.\n3. In extreme situations, robust training can result in trivial policies - do not do anything. Therefore, I don’t find it surprising that sometimes robust training is different from what we perceive as naturally intelligent. \n4. The theory seems to be based on the Danskin theorem, which can be applied only to C1 Q-functions. I believe this is a quite restrictive setting. In min-max problems it’s even more restrictive. \n3. Overestimation in Q-functions is a general problem in RL, not just in a robust setting. But I agree that in robust settings it will be more acute. However, using overestimation as a reason for misalignment in this context is not entirely correct and can be misleading."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024juxtalign,\ntitle={JuxtAlign: A Foundational Analysis on Alignment of Certified Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yrf5RmaHfG},\nnote={under review}\n}"
},
"abstract": {
"value": "Sequential decision making in highly complex MDPs with high-dimensional observations and state dynamics became possible with the progress achieved in deep reinforcement learning research. At the same time, deep neural policies have been observed to be highly unstable with respect to the minor sensitivities in their state space induced by non-robust directions. To alleviate these volatilities a line of work suggested techniques to cope with this problem via explicitly regularizing the temporal difference loss for the worst-case sensitivity. \nIn this paper we provide theoretical foundations on the failure instances of the approaches proposed to overcome instabilities of the deep neural policy manifolds. Our comprehensive analysis reveals that certified reinforcement learning learns misaligned values. Our empirical analysis in the Arcade Learning Environment further demonstrates that the state-of-the-art certified policies learn inconsistent and overestimated value functions compared to standard training techniques. In connection to this analysis, we highlight the intrinsic gap between how natural intelligence understands and interacts with an environment in contrast to policies learnt via certified training. This intrinsic gap between natural intelligence and the restrictions induced by certified training on the capabilities of artificial intelligence further demonstrates the need to rethink the approach in establishing reliable and aligned deep reinforcement learning policies."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"alignment",
"juxtaposition",
"reinforcement learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/895db473dac915fcfec8f1d70e9e82a31ed8830c.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/d85c76a8393cd93de01cd53609655bdea9aaacac.zip"
},
"title": {
"value": "JuxtAlign: A Foundational Analysis on Alignment of Certified Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yrnrvfXFaV | Low-cost Enhancer for Text Attributed Graph Learning via Graph Alignment | main | Active | Text-attributed Graphs | foundation or frontier models, including LLMs | 3;3;5;6 | 4;4;5;3 | 2;2;2;3 | 1;2;2;3 | 2;2;2;3 | 4.25 | 4 | 2.25 | 2 | 2.25 | -0.272166 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Questions:\n\n1. The authors need to provide a detailed motivation&discussion on the use of vector quantization (VQ) in the framework. Based on the manuscript, I feel like a lot of things are missing regarding this part. For example, how to learn the prototype matrix $Z_a$ during training? \n\n\n2. During inference on downstream tasks, the final node representations are generated as a weighted sum of the prototype matrix learnt from the alignment stage. Could the authors explain the rationale behind this design? I understand that the authors provide a subsection 'Effect Visualization of Annotation Prototype.', yet I'm still quite confused. \n\n\n3. Does the alignment stage need to be performed for any given graph, or can it be pretrained on some datasets and directly apply to unseen graphs?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Pros:\n\n1. The combination of LLMs and TAGs are important topics for graph research which establishes new benchmarks and poses new challenges.\n\n2. I like the idea of prompting LLMs for broader knowledge and the rationale behind the observed graph, e.g., edge formation.\n\n3. The experiments were comprehensive and covered most important baselines for the two tasks evaluated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Summary\n\nThis paper studies how to leverage the textual information for improved performance for node classification and link prediction. It inherits the idea from existing works that utilize LLMs as data enhancers by prompting the LLMs for explanations and broader knowledge, with a special focus on reducing the costs of both training and inference time. The experimental results demonstrated the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The presentation needs to be improved. Several critical designs of the framework lacks motivation. Please see the Questions part for details."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does the performance of GAGA vary with different values of the hyper-parameter 𝛼 in the loss function? Are there any guidelines for tuning it?\n\n2. How does GAGA choose which nodes (edges) to be annotated. Is there any consideration about it?\n\n3. While large language models (LLMs) are crucial to the proposed method, the experiments are solely conducted on GPT-3.5. I am curious about the method's compatibility with other open-source LLMs, such as Llama 3.2 or Qwen 2.5. Could you clarify which specific characteristics of LLMs are most impactful for GAGA? Additionally, how might one select an appropriate or potentially more advanced LLM to further enhance GAGA's performance?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. **Efficiency and Scalability**: Reducing both time and cost associated with LLM-based enhancements for TAG, making it scalable for large datasets. The experiment on the correlation between the amount of labeled data and performance in the ablation study is also very interesting.\n\n2. **Two-Level Alignment Module**: The innovative two-level alignment module, which integrates annotations with TAG structure, allows GAGA to achieve strong generalization with limited annotations.\n\n3. **Experimental Validation**: Comprehensive experiments across six datasets validate GAGA’s efficiency and accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces GAGA, a framework for enhancing Text-Attributed Graphs by incorporating annotations efficiently. GAGA addresses this by annotating only a small set of representative nodes and edges based on information density, which significantly reduces time and cost compared with the traditional method. A two-level alignment module then integrates these annotations into the TAG structure, facilitating high-quality graph representation learning. Experiments demonstrate that GAGA achieves comparable or superior classification accuracies with little data annotated, proving its efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See Questions."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "For the two-level contrastive learning, does it introduce additional training of the language models?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Experiments on both node classification and link prediction tasks show the effectiveness of the proposed methods.\n\nThe research topic is promising in graph representation learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a lightweight framework for representation learning on text-attributed graphs. The proposed model only annotates representative nodes and edges, and introduces a two-level alignment module for structure alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The novelty of this paper is incremental. Specifically, regarding low-cost annotation, the approach of annotating only representative nodes via LLMs is not new. Previous work [1] has already applied this method for TAG representation learning. For model explanations, the idea of considering the annotation given by LLMs as an explanation (line 169) is similarly well-explored in [2].\n\nThe writing and presentation need refinement. For instance, the Introduction is overly lengthy and should be shortened. Figure 2 is difficult to interpret.\n\nThe usage and description of mathematical symbols are chaotic; for example, see line 340.\n\nIt is unclear how Figure 3 demonstrates the effect of annotation prototype projection.\n\n[1] Label-free Node Classification on Graphs with Large Language Models (LLMS), ICLR 24\n\n[2] Harnessing explanations: Llm-to-lm interpreter for enhanced text-attributed graph representation learning, ICLR 24"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The method explores a new aspect by examining the issues related to large language models from both the perspectives of time expenditure and cost."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenges faced by traditional graph neural networks (GNNs) when working with Text-attributed Graphs (TAGs), which contain rich textual information. While recent methods leveraging large language models (LLMs) have improved node representations, they often necessitate extensive annotations and fine-tuning, which can be costly and time-consuming. This paper proposes a lightweight framework for TAG representation learning that innovatively annotates only representative nodes and edges, significantly reducing annotation time and costs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation for the model is not strong enough. The article points out that an important limitation of other methods is the requirement to enhance the attributes of all nodes. This assumption is not necessarily reasonable; enhancing all nodes is not mandatory, and enhancing only a subset of nodes is also a viable option. While the impact of this approach on model performance does require further experimentation, I do not believe it constitutes a limitation of the other methods.\n2. The code is not publicly available.\n3. From the experimental results (Table 2), the improvement of this method is quite limited and not very significant.\n4. The core contribution claimed by this article is the selection of representative nodes for labeling, aimed at reducing time and cost while maintaining effectiveness. Therefore, I focused on this aspect. However, I regret to say that the methods employed in the article do not convince me. The first assumption in the paper equates information density with node density, positing that areas with denser nodes have higher information density, suggesting that representative nodes should originate from such areas. In my view, nodes in low-density areas can also result in significant information loss, and defining high and low density is highly dependent on later parameter tuning. Secondly, the article employs k-means to obtain central nodes as a metric for selecting representative nodes. The algorithmic complexity of k-means is very high, and I do not believe this method can be particularly efficient, especially when dealing with large datasets. Overall, while the motivation behind this idea is appealing, the specific implementation fails to satisfy me."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024lowcost,\ntitle={Low-cost Enhancer for Text Attributed Graph Learning via Graph Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yrnrvfXFaV},\nnote={under review}\n}"
},
"abstract": {
"value": "Many graphs can be represented as Text-attributed Graphs (TAGs). Due to the rich textual information present in each node of TAGs, traditional graph neural networks (GNNs) often struggle to deliver satisfactory performance. Recent advancements leveraging large language models (LLMs) to augment new node text features have notably enhanced node representations, resulting in significant performance improvements. However, these methods typically require extensive annotations or fine-tuning on all nodes, which are both time-consuming and expensive. To address this challenge, we propose GAGA, a novel and lightweight framework for TAG representation learning. GAGA employs a more efficient strategy by annotating only representative nodes and edges, thereby reducing both annotation time and cost. It further capitalizes on these annotations by constructing an annotation graph that captures the topological relationships among them. Additionally, GAGA introduces a two-level alignment module to integrate the annotation graph with the TAG, ensuring effective alignment of their underlying structures. Experiments demonstrate that GAGA achieves classification accuracies comparable to or exceeding state-of-the-art methods while requiring only 1\\% of the data to be annotated, making it highly efficient."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Text-attributed Graphs"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/922a9c1dc30fab3d1c9b45d5ea7aa50c6401bd91.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Low-cost Enhancer for Text Attributed Graph Learning via Graph Alignment"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ys16t9FcLN | Distribution-Dependent Rates for Multi-Distribution Learning | main | Active | multi-distribution learning;distributionally robust optimization;pure exploration multi-armed bandits | learning theory | 3;5;6;6 | 3;4;2;2 | 2;3;3;3 | 3;2;3;3 | 2;4;3;2 | 5 | 2.75 | 2.75 | 2.75 | 2.75 | -0.492366 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Is it possible to extend the results to continuous spaces without a significant increase in computational complexity? For example, could pure exploration in linear bandit settings be considered?\n- Could you provide more detailed explanations on the comparison between UE and NUE? (Please refer to the first point under weaknesses for a more detailed discussion.)\n- How can the parameter $T_j$ be set in practice to ensure the theoretical guarantees?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper offers a new perspective by formulating multi-distribution learning as a pure exploration problem in multi-armed bandits.\n- Based on this view, gap-dependent bounds are derived for both adaptive and non-adaptive cases for the multi-distribution learning problem."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the multi-distribution learning problem, where the learner aims to optimize the model's worst-case performance across a set of distributions. The main contribution is a reformulation of this problem as a pure exploration multi-armed bandit task, which yields simple regret bounds that depend on the sub-optimality gaps of actions. The first part of the paper studies the non-adaptive case, where the learner cannot interact with the environments. Here, the authors provide simple regret bounds for both uniform exploration (UE) and non-uniform exploration (NUE). The second part explores the interactive case, where environment interaction is permitted, and proposes an LCB-based algorithm that achieves a lower simple regret than UE."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **On the Significance of the Results:** One of my main concerns is the significance of the results achieved in this paper, as they rely on strong assumptions, and their implications are not rigorously discussed. \n  - **Assumptions:** The paper appears to address a simplified case where the action space $ \\mathcal{A} $ is discrete and finite, and the data space is restricted to 1-dimension in the analysis. In contrast, continuous decision sets are more commonly studied in the literature, such as Blum et al. (2017), Sagawa et al. (2020), and Soma et al. (2022). Although Section 5 discusses an extension to infinite decision sets, the proposed approach using an $ \\epsilon/k $-cover would result in a method with prohibitive computational costs.\n  - **Results for the Non-Adaptive Case:** It is not entirely clear to me why the results for Non-Uniform Exploration (NUE) would be better than Uniform Exploration (UE), as the NUE outcomes depend on $\\min_Q{n_Q}$, which could potentially be very small. The arguments in Section 3.3 are too intuitive to me, with several approximations made that require further justification. For instance, I am confused by the statement \"considers a case $\\Delta_{DR}(a) \\approx B_n$\" in line 321. It is uncertain whether we can disregard the term $\\Delta_{DR}(a) - B_n$ in the comparison, as this term varies across arms, and the value of $B_n$ differs between UE and NUE cases.\n\n- **On the Proposed Method in Section 4:** The proposed method for adaptive cases requires knowledge of $H_j$, which depends on the suboptimal gap $\\Delta_{a,\\min}$ and is generally unknown in practice. Although the authors provide some discussion in Remark 4, it remains unclear how this issue would be addressed. Additionally, the setting of $\\epsilon_t$ is also confusing. While this quantity appears to only require to be lower bounded, it is still unclear how to set this value to ensure that Condition Eq. (1) is not violated.\n- **About literature review**: The discussion of the convergence rate for related work in lines 139-152 is inaccurate. Although Soma et al. (2022) claim a result of $O(\\frac{\\sqrt{B^2 + k}}{T})$, their analysis overlooks the non-oblivious property of the learning process, rendering the result invalid. This issue was identified by Zhang et al. (2023), and the currently best-known result remains $O(\\frac{\\sqrt{B^2 + k \\log k }}{T})$ in this line of research. One may check Section 2.3 in Zhang et al 2023 for a discussion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Following up the first point in the weaknesses, Would the story of the paper rather be, given the knowledge that the uncertainty set $\\mathcal{U}$ is fixed, one can develop exponential rates that scale with an unknown but fixed sub-optimality gap of each arm? \n- The paragraph in the introduction states “The current literature is populated with distribution-independent rates”, but there were not any relevant literature cited in this paragraph.\n- How would the proof be adjust to accommodate the case where the learner interacts with the environment but the optimal arm is non-unique?\n- Typo on Line 223: a fixed number times -> a fixed number of times"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper uses techniques from pure-exploration bandits to develop instance-dependent simple regret rates, which serves as less-conservative complements to the current instance-agnostic MDL error bounds."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies multiple distribution learning (MDL) from bandit optimization point of view. By connecting pure-exploration setting in bandits to MDL, it develops instance-dependent sharp regret rates, thereby improving the current instance-agnostic rates in the MDL literature."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper claims one of its contribution being developing problem-dependent rates for MDL, because “Oftentimes, it is more intuitive to analyze the learner’s performance in a fixed setting, as opposed to considering a worst-case instance for each sample size. When domain knowledge is available, a “one-size-fits-all” rate does not provide any insight on how to take advantage of this information”. However, the upper bound rates developed in this paper depend on the knowledge of the unknown optimality gap; how would this be integrated into domain knowledge remains unclear.\n- The problem setting seems very similar to Kirschner et al for distributionally robust online contextual bandit problem, but no discussion is provided on the differences and connections."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How does this work relates to active learning, where one would also take an adaptive strategy? What are the barriers that prevent active learning algorithm from being applied to the problem setting studied in this work? \n\nAre there relevant lower bounds available? If so, how do the upper bounds proven in this work compare to the lower bounds?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The authors clearly compared the finite sample bounds under uniform exploration against that under non-uniform exploration, and they highlighted where non-uniform exploration could have gains."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors study distribution-dependent guarantees in the multi-distribution learning framework. They prove that distribution-dependent bounds are tighter than distribution-independent bounds. Specifically, they derive finite sample bounds under uniform and non-uniform exploration and propose an algorithm that improves over non-adaptive counterparts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors compared their proposed algorithm against uniform sampling, but not non-uniform sampling. Non-uniform sampling benefits from varied sample sizes and would be a stronger baseline to compare against. \n\nIt would be nice to provide experimental results, even in very simple setups, to showcase the strength of their proposed algorithm. The main results of this work are in theoretical results and there are substantial theoretical contributions, and thus I understand the experimental results may not be necessary."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "There are no ethical concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- What is the definition of $a^\\star$ in lines 199-203.\n\n- What is $l$ appearing in the RHS of Eq in lines 263-266?\n\n- Is $M$ a known parameter or an unknown parameter to the agent?\n\n- In lines 320-321, why $\\Delta_{\\text{DR}}(a) \\approx B_n$ induces the comparison to be $\\tfrac{M^2}{n}$ v.s. $\\sigma^2_T + \\Sigma^2_T + V_T$? Both exponential terms should be $1$ when the exponential factor becomes $0$."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and clearly conveys the problem setting and the conclusion to the reader. This work provides a distribution-dependent bound with analysis, which has not been reported in the literature. The distribution-dependent bound enjoys an exponential decay which can be compared to the probability of identification failure in the Best-arm Identification. Furthermore, this paper does not limit itself to non-adaptive exploring but extends to an adaptive exploring strategy, which is the UCB-E algorithm."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents innovative strategies within the framework of Multi-Distribution Learning (MDL), with the primary objective of identifying the best-performed distribution. It is informed by principles from Distributionally Robust Optimization (DRO) and multi-armed bandit theory, proposing both non-adaptive and adaptive methodologies. The non-adaptive techniques, namely Uniform Exploration (UE) and Non-Uniform Exploration (NUE), yield both distribution-independent and distribution-dependent bounds. Furthermore, the paper introduces an adaptive method in the interactive environment, LCB-DR, which further optimizes performance by employing optimistic sampling strategies analogous to the Upper Confidence Bound for Exploration (UCB-E) utilized in multi-armed bandit scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Since the author mentions that MDL draws inspiration from multi-armed bandits, I have found that identifying the best-performed distribution can be viewed as an analogy to identifying the best arm (BAI) in MAB. In lines 199-203, the author also mentions a connection between this work and BAI, which is a $H_a$ term; It would be better if the author could draw more comparisons between MDL and BAI. Since several works in BAI can achieve instance-independent bound \\cite{audibert2010best, chen2017towards}. How does the objective upper bound guarantee relate to the existing bound shown in the BAI literature? I believe it is important to show whether the given bound is tight when reducing the problem setting to the existing work."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We devise adaptive and non-adaptive algorithms for the multi-distribution learning problem and provide distribution-dependent guarantees using tools from empirical process theory and drawing inspiration from pure exploration multi-armed bandits."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024distributiondependent,\ntitle={Distribution-Dependent Rates for Multi-Distribution Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ys16t9FcLN},\nnote={under review}\n}"
},
"abstract": {
"value": "To address the needs of modeling uncertainty in sensitive machine learning applications, the setup of distributionally robust optimization (DRO) seeks good performance uniformly across a variety of tasks. The recent multi-distribution learning (MDL) framework \\cite{pmlr-v195-awasthi23a-open-prob} tackles this objective in a dynamic interaction with the environment, where the learner has sampling access to each target distribution. Drawing inspiration from the field of pure-exploration multi-armed bandits, we provide \\textit{distribution-dependent} guarantees in the MDL regime, that scale with suboptimality gaps and result in superior dependence on the sample size when compared to the existing distribution-independent analyses. We investigate two non-adaptive strategies, uniform and non-uniform exploration, and present non-asymptotic regret bounds using novel tools from empirical process theory. Furthermore, we devise an adaptive optimistic algorithm, LCB-DR, that showcases enhanced dependence on the gaps, mirroring the contrast between uniform and optimistic allocation in the multi-armed bandit literature."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"multi-distribution learning",
"distributionally robust optimization",
"pure exploration multi-armed bandits"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8904c6bdbb09ce8201985223b4ebad3f478ebfd4.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Distribution-Dependent Rates for Multi-Distribution Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ys3eqxzkeN | Efficient Gun Detection in Real-World Videos: Challenges and Solutions | main | Active | Image-augmented training;transfer learning | applications to computer vision, audio, language, and other modalities | 3;3;3;5 | 4;4;5;4 | 2;2;2;2 | 2;1;2;2 | 2;2;2;3 | 3.5 | 4.25 | 2 | 1.75 | 2.25 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Is the study focused on object detection or action recognition? Please clarify to avoid confusion.\n2. How do the proposed contributions specifically address label scarcity and tiny object detection?\n3. Could you specify the novel aspects of your two-stage methodology compared to existing approaches?\n4.The methodology in Section 4.2 appears basic. Could you provide more technical details, especially on handling tiny objects?\n5. Comparisons are mostly with older methods. Could you add more recent state-of-the-art baselines?\n6. Could you include qualitative visualizations, such as feature maps or example detections, to better demonstrate the model's performance?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The empirical study highlights the limitations of existing video classification methods, providing a solid foundation for the proposed approach. Additionally, the two-stage methodology enables the model to capture both spatial features of guns and temporal dependencies across frames, contributing to its effectiveness. \n2. Given the real-world importance of detecting firearms in video data, the paper has notable significance. By addressing limitations in current methods and proposing a targeted approach, it contributes meaningful insights and methods that could inform future research for high-stakes applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the challenging task of detecting tiny objects, specifically guns, in video data. Recognizing that current video analysis methods struggle with detecting guns due to the limited availability of labeled data and the complexity of identifying small objects in real-world scenarios, the authors propose a new method with three main contributions. They introduce a two-stage detection methodology that combines image-augmented training to improve spatial feature extraction with temporal modeling to capture sequential information. They validate the method on a synthetic firearms action recognition dataset and discuss key challenges and potential future research directions in tiny object detection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the abstract highlights the challenges of limited labels and tiny object detection, the contributions do not directly address these points. Establishing a clearer link between the identified challenges and the proposed solutions would strengthen the study’s coherence and impact.\n2. This paper does not clearly specify whether the focus is on object detection or action recognition, which may confuse readers regarding the study’s objective. A clearer explanation of the task would help define the scope and purpose of the proposed method.\n3. The contributions are presented in an abstract manner, lacking concrete results or specific findings to support them. Adding more details about the outcomes or unique aspects of the contributions would make the study’s significance more evident.\n4. The motivation section is overly lengthy and could be more concise. Streamlining this part would improve readability and make the core motivations clearer.\n5. Section 4.1 contains excessive background details, including algorithmic descriptions of existing methods, which may not be necessary. Providing only the essential comparative results would make this section more concise and focused.\n6. The proposed methodology in Section 4.2 is fairly basic, with limited technical depth, few formulas, and minimal emphasis on handling label scarcity or tiny object detection. Expanding this section to include more detailed techniques targeting these specific issues would add depth to the work.\n7. Table 4 presents results on a dataset where all methods achieve 100% accuracy, making it an unnecessary addition. Removing or replacing it with a more challenging dataset would make the evaluation more meaningful.\n8. The study does not compare against recent state-of-the-art methods and its variants from ablation studies, which limits the relevance and verification of the results. Including newer comparison methods would provide a stronger benchmark for the proposed approach.\n9. 
The paper lacks qualitative visualizations, such as feature maps, example detections, or interpretability-focused analyses. Adding these would provide deeper insights into the method’s strengths and its effectiveness in handling complex detection scenarios."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The optimal model combination (MobileNet + Transformer) has high complexity and computational resource requirements. Is this feasible for resource-limited practical applications, such as embedded devices and mobile devices?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. I think gun detection is an important topic. However, the academia seems underrated this task and there are not many research on this direction. Gun detection has big potential to make our world safer. However, the exsiting methods have too many false positives to avoid its wide applications. Thus, I am glad to see that this work tries to contribute this important research direction. \n2. The paper studies a valuable gun detection problem and provides a detailed motivation explanation.\n3. The results show the improvements of the proposed methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the limitations of existing video classification methods in gun detection by proposing to combine image-augmented training and temporal modeling. Experimental results demonstrate performance improvements on both synthetic and real-world datasets, particularly in firearm action recognition and the UCF crime dataset. The paper also highlights future research directions, including dataset diversity, model complexity, and ethical considerations."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The method proposed in the paper is a simple combination of existing techniques, such as common data augmentations like flipping and rotating.\n2. The application of data augmentations is one of the core ideas of this paper, but it lacks specific details, such as how to select and adjust data augmentations.\n3. The paper mentions using GRU and Transformer for temporal modeling, but it does not provide detailed descriptions of the configurations and optimization processes for these two models.\n4. The paper implements a combined model of VGG and Transformer, but does not elaborate on how these models are integrated. For example, does it use a simple concatenation or parallel approach?\n5. More importantly, gun detection typically requires a quick response in real-time monitoring systems, yet the paper does not address the model's real-time performance in practical applications."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. There are some missing details in the experiments. I did not find how the dataset is divided into training set and testing set. The authors should make it clear.\n2. What is the resolution of the real-world videos? I am curious whether the images contain sufficient features for guns.\n3. Is the training shown in Figure 2 in an end-to-end manner or in multiple stages?\n4. How to reduce the domain gap between gun images and real-world gun videos?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The method is clear and straightforward."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors propose a gun detection method in real-world videos. First, they perform an empirical study of several existing video classification methods to identify the presence of guns in videos. Then, they use transfer learning to extract frame-level features followed by a sequential model for video classification. Third, they conduct experiments on the Firearm action recognition dataset and UCF Crime dataset to show the performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper does not contain novel approaches. Typically, the authors only use transfer learning to extract image features and adopt LSTM / Transformers for video classification. The contribution is limited.\n2. As mentioned in the paper, the gun is usually a small object in real-world videos. However, frame-level feature extraction is not an efficient way to get fine-grained-level features of guns.\n3. In Table 4, it seems 100% accuracy has been achieved. I do not think it is a challenging task as the authors mentioned."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "It is not clear the relationship between the video activity recognition and gun detection from the videos in this paper. The authors should focus on the gun detection from the whole videos and propose to construct the gun detection dataset and benchmark."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) The research topic for gun detection from videos is interesting and valuable.\n\n2) The authors did an empirical study of existing video classification methods to detect the presence of guns in videos.\n\n3) The authors also investigate the challenges of real-world gun video detection."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to address an important and interesting gun detection tasks. The authors evaluate several video classification methods for detecting guns in videos and summarize that existing approaches cannot work well to handle the gun detection task. The authors propose a novel two-stage training methodology combining image-augmented training (introduce subtle gun features into the model) and temporal modeling (capture sequential dependencies), resulting in significant performance improvements on a synthetic firearms action recognition dataset. The analysis in this paper also emphasizes the need for advanced AI-driven gun detection methods in video data, highlights current challenges and limitations of existing techniques, and suggests directions for future research."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Very out-of-dated methods and comparisons. The authors should compare some recent video classification and detection algorithms.\n\n2) limited novelty, this paper just combines existing algorithms together, such as Video MAE and different network backbones (LSTM and Transformer). Very old backbones and baselines cannot illustrate the superior performance of the existing work.\n\n3) The motivation part is too redundant. The authors should shorten this part.\n\n4) It is not convinced to include Transfer Learning in this paper since this paper did not perform corresponding experiments."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Gun Detection in Real-World Videos: Challenges and Solutions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ys3eqxzkeN},\nnote={under review}\n}"
},
"abstract": {
"value": "Object detection in videos is a crucial task in the computer vision domain. Existing methods have explored different approaches to detect objects and classify the videos. However, detecting tiny objects (e.g., gun) in videos has always been a challenging and rigorous task. Moreover, the existing video analysis (detection and classification) models may not achieve high accuracy for gun detection in videos in real-world scenarios due to the lack of a large amount of labeled data. Thus, it is imperative to develop an efficient method to capture the features of tiny objects and train models that can perform accurate gun detection. To address this challenge, we make three contributions. First, we perform an empirical study of several existing video classification methods to identify the presence of guns in videos. Our extensive analysis shows that these methods may not achieve high accuracy in detecting guns in videos. Second, we propose a novel gun detection method with image-augmented training and evaluate the technique in real-world settings with different evaluation metrics. Third, our experimental results demonstrate that our proposed domain-specific method can achieve significant performance improvements in real-world settings compared to the other popular methods. We also discuss emerging challenges and critical aspects of detecting tiny objects, e.g., guns, using existing computer vision techniques, their limitations, and future research opportunities."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image-augmented training",
"transfer learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2436ea84ae9ee2dd5746c81e9dd68b5d684bc377.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Efficient Gun Detection in Real-World Videos: Challenges and Solutions"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ysAX5ORQoX | R2C: Mapping Room to Chessboard to Unlock LLM As Low-Level Action Planner | main | Active | Embodied AI;Large Language Model;Embodied Instruction Following;Robotic Planning | applications to robotics, autonomy, planning | 3;3;5;6 | 4;4;4;4 | 2;3;3;3 | 2;1;3;2 | 2;3;3;3 | 4.25 | 4 | 2.75 | 2 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Dear authors:\n\nGenerally, the paper's idea is doable and insightful, but the solution details can be of great improvement. \n\n I will certainly raise my point if my concerns below are mitigated:\n1. Have the authors considered extending their approach to 3D action spaces? What challenges do you anticipate in such an extension?\n2. The concern of \"Weak baseline and unstable improvement\" in Weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. A practical research question and reasonable insight\n\nThe research question proposed by the authors, i.e. \"LLMs lack the spatial awareness of real-world environments\", is exactly one of the major bottlenecks for LLM-based robotic planning, many works, including this paper, are trying to mitigate the mismatch between planning space in LLM and planning space in real world. Therefore, the insight of this paper: establishing an efficient communication interface between LLMs and robots is very natural and reasonable. \n \n2. Extensive experiments\n\nThe authors conducted comparisons between different LLMs(GPT-4, fine-tuned Llama, fine-tuned Mistral) with an additional exploration on open-vocabulary tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "## Research Question\nThough possess extensive world knowledge and demonstrate generalizationability, LLMs lack the spatial awareness of real-world environments. On one hand, the Robot can hardly convey spatial information from the environment to the LLM. On the other hand, the LLM struggles to efficiently communicate low-level decisions to the Robot. \n## Method\nTo this end, the author proposed Room to Chessboard (R2C) framework to establish a “common language” between LLM and the Robot. Such a framework will unlock LLM as low-level action planner and develop an explainable chain of thought decision analysis paradigm. More specifically, a task will first be translated into sub-goals as high-level planning. Then R2C will translate the task-aware environment information into a compact chessboard, therefore LLM can perform low-level planning on the chessboard by predicting the robot’s next position. Additionally, the author formalized a Chain of Thought Decision (CoT-D) task for LLM to enhance its overall spatial reasoning, and designed a fine-tuning paradigm corresponding to the CoT-D tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The problem setting is too simple.\n\n1.1 As the final goal is to let LLM do the low-level planning, a 2D grid world setting is far from practical or useful, low-level planning, e.g. robot arm manipulation, in a 3D world is far more complicated compared with a 2D grid world. \n\n1.2 Another concern is whether LLM outperforms traditional planning algorithms (e.g. Dijkstra, A star) in step-by-step navigation tasks, e.g. a simple baseline could be an LLM-specified target position + heuristic planner. \n\n2. Weak baseline and unstable improvement\n\nThe selection of baseline is questionable since 1)some heuristic planners (e.g. A*) are also applicable to such task(navigation on 2D map), could the authors compare their approach to heuristic planners like A* for the 2D navigation tasks? ; 2) Saycan is classic enough as a baseline, can the authors provide details on their implementation of SayCan, particularly how they handled the reinforcement learning component? 3) Specific to the LLM-based high-level planning, there are more works published with code in the 2023~2024 period(e.g. Auto-TAMP, code as policy, etc..), but this is just a minor point. 4) The performance improvement according to Table 2 is not stable enough.\n\n3. Insufficient impact on communities\n\nOne question for the paper's impact could be: \"How do the authors envision their approach evolving as language model capabilities improve?\" or \"Are there aspects of the R2C framework beyond prompt engineering that would remain relevant even with more advanced models?\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The authors can consider comparing R2C with non-LLM specialist models. They can also address the increased computational cost introduced by CoT-D fine-tuning for real-time deployment and find if there are ways to streamline or selectively apply CoT-D only when necessary. How adaptable R2C is to different types of interaction tasks beyond navigation might be questionable."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The R2C framework's chessboard representation provides a concise yet effective way for LLMs to interpret spatial information, bridging the communication gap between high-level instructions and low-level execution. The Chain-of-Thought Decision (CoT-D) fine-tuning paradigm adds explainability and improves spatial reasoning, allowing LLMs to handle complex decision-making in low-level action planning.\nThe framework demonstrates state-of-the-art results among LLM-based methods and competitive results compared to specialist models, showcasing its efficacy in both seen and unseen environments. Additionally, it extends beyond traditional benchmarks by enabling LLMs to perform well in open-ended tasks, which are closer to real-world scenarios"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel framework to enable LLMs to perform low-level action planning for robotic tasks. Traditional LLM applications in robotics focus on high-level task planning, while low-level actions rely on other specialized controllers. This paper addresses the challenge of communication between the LLM and the robot by mapping a room environment into a chessboard-style grid, called Room to Chessboard (R2C). This grid provides a simplified, yet semantically rich representation of the environment that LLMs can use to generate precise, step-by-step navigation instructions.\nTo further enhance the LLM's decision-making, the authors introduce a Chain-of-Thought Decision (CoT-D) fine-tuning paradigm that strengthens spatial reasoning and interpretability. Tested on the ALFRED benchmark, R2C outperforms other LLM-based methods and achieves competitive results with specialist models in complex, long-horizon tasks. The paper demonstrates that R2C can generalize across seen and unseen environments and handle open-vocabulary tasks, underscoring its versatility and scalability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although the paper compares R2C with LLM-based baselines, a more detailed comparison with non-LLM approaches, such as reinforcement learning or specialized motion planners, would better position R2C within the robotics field. Experiments are limited to simulated environments (ALFRED benchmark), and it remains unclear how the R2C framework would perform in real-world settings with more complex and dynamic elements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How general is the process of encoding the navigation task into a chess game? Is it confined to object centric navigation or does it extend to re-arrangement planning, task and motion planning problems (where the agent may need to push away objects to make space). For example, https://www.ijcai.org/proceedings/2018/0674.pdf and https://people.cs.rutgers.edu/~kb572/pubs/fast_object_rearrangement.pdf for examples of such tasks. Please indicate if the framework can or cannot solve such planning tasks. \n\n2. In case an action fails to execute, does the system re-plan?\n\n3. The authors mention related works such as LM-Nav. If feasible, a comparison with such an approach would be insightful. \n\n4. Is it feasible to extend your approach to incorporating robot actions that are not present in a chess game. For example, consider the \"jumping onto an object\" action for a quadruped robot since Chess does not allow pieces to be in the same cell. Similarly, if we have an object transport task where small objects are kept onto a tray and carried from location to location, would the system be able to perform such reasoning. If feasible, authors are requested to discuss specific modifications they might make to their framework to accommodate these types of actions, or to explain why such extensions might be challenging within their current approach."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper concerns an important problem of leveraging LLM planning capacity at lower-levels of granularity, in a sense bridging the gap between high-level goals and the low-level motor actions. \n\n- The central idea of relating the low-level task with a game-like representation with the aim of generalization is interesting. \n\n- The investigation into encoding the raw sensor input into the task setting is realistic and insightful."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper attempts to perform tightly coupled high-level navigation and low-level navigation in an indoor setting using LLMs. The core idea is to express low-level navigation as a grid-world game of chess, allowing the LLM to leverage world knowledge about the game to provide generalised navigation capacity for the robot. The technical approach comprises of three key steps (i) converting the raw RGB-D image data into a grid-like representation for specifying the object centric navigation task, (ii) engineering the prompt to capture the game rules, action history and the current agent-environment state and (iii) COT-D framework that guides the LLM to reason in terms of key information, direction judgement, target prediction and selection analysis. Experimental evaluation is carried out on the Alfred data set."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The primary concern is how general and automated is the process of converting the navigation task into a game. The paper makes one ponder what class of navigation tasks can be encoded in which class of games. The paper provides some indicated results with the chess game but the generality of the result and the encoding process is not clear. The authors are encouraged to provide specific examples of different types of navigation tasks and discuss how (or if) they could be encoded using their Room to Chessboard framework. Further, the authors are requested to include a more detailed discussion of the limitations of their approach in terms of task types that may not be easily represented as a chess-like game."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(1) What is the decision frequency of the R2C method? Does the agent decide only one grid ahead after a forward prediction from the LLMs? If that is the case, would it be rather slow in completing the entire task? Because the API-based planning approach (SayCan, Instruct2Act) only needs one LLMs' calling before finishing a long sequence of actions.\n\n(2) In this map-based 2D decision-making problem, what are the most apparent advantages of using the dense grid coordinates as the action space rather than using landmark-based action space with an off-the-shelf path-planning algorithm? \n\n(3) In Table 2, even if you change the perception module into the ground truth, the R2C still fails at the rate of around 60%. Can you provide more failure case analysis and discuss which potential aspects of the design can further improve the success rate?\n\n(4) How does the dataset scale influence the fine-tuning performance in the embodied instruction following the task? And can the fine-tuned 7b LLMs still be able to maintain the original text generation abilities?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1) This paper presents an end-to-end LLM-based framework that simultaneously generates both task-level planning subgoals and low-level planning actions. This is important for the embodied instruction following agents in order to reduce the system latency and compounding errors compared with hierarchical cascaded modular approaches.\n\n(2) This paper proposes a chain-of-thought fine-tuning method and shows that task-relevant knowledge can be effectively injected into the open-sourced LLMs (Mistral 7b, LLaMA 7b). The benchmark performance proves the fine-tuned open-sourced LLMs outperform the GPT-4 in the embodied instruction following task.\n\n(3) The proposed chessboard representation can help generalize to open-vocabulary tasks. Experiments show the R2C can outperform many baseline methods in the ALFRED benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents the R2C, which projects the partially observed spatial and semantic information into a map and then uses a grid map (chessboard) to discretize the map as the unified representation for both high-level task planning and low-level action planning. This paper shows with the discrete chessboard as the environment representation, both the open-sourced large language models (Mistral 7b, LLaMA 7b) and the close-sourced large language models (GPT-4) can become efficient embodied instruction followers. To improve the decision accuracy of the LLM, this paper proposes a chain-of-thought fine-tuning framework, which asks the LLM first to answer some task-related questions and then predict the preferred action coordinates. The functions of both chessboard representation and the designed fine-tuning framework are effectively proved in the challenging ALFRED benchmarks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) Although the proposed approach is technically sound, the performance of R2C in the ALFRED is not satisfied. The listed baseline approaches are outdated. In the leaderboard of the ALFRED:https://leaderboard.allenai.org/alfred/submissions/public, there are many recent approaches that can achieve a better than 30% success rate in the unseen scenes, including both LLM-based approaches or not.\nPlease compare and report better approaches' performance metrics, such as EPA[1], Prompter[2], and ThinkBot[3].\n\n(2) Some important experiments and example analyses are missing. For example, as this paper presents a chain-of-though fine-tuning for 7B LLMs, what is the zero-shot performance for LLMs without any fine-tuning? Please report the frozen LLaMA-7b and Mistral-7b performance with the same chessboard-based prompt in the ALFRED benchmark. \n\n(3) The reference is not comprehensive, and most related works are from 2020 and 2023. Comparing with recent works can help highlight the contribution of this paper, for example, comparing with more topological-based planners, such as ConceptGraph[4], SayPlan[5], and VoroNav[6].\n\n(4) Some typo errors should be more carefully checked. For example, on page 9, line 475 (two commas), and page 4, line 199 (missing reference).\n\n(5) More example visualization should be added. For example, provide detailed input prompt and output answers of fine-tuned LLaMA 7B and Mistral 7B.\n\nReference:\n\n[1] Liu, X., Palacios, H. and Muise, C., 2023. Egocentric planning for scalable embodied task achievement. Advances in Neural Information Processing Systems, 36, pp.54586-54613.\n\n[2] Inoue, Y. and Ohashi, H., 2022. Prompter: Utilizing large language model prompting for a data efficient embodied instruction following. arXiv preprint arXiv:2211.03267.\n\n[3] Lu, G., Wang, Z., Liu, C., Lu, J. and Tang, Y., 2023. Thinkbot: Embodied instruction following with thought chain reasoning. 
arXiv preprint arXiv:2312.07062.\n\n[4] Gu, Qiao, et al. \"Conceptgraphs: Open-vocabulary 3d scene graphs for perception and planning.\" 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.\n\n[5] Rana, Krishan, et al. \"Sayplan: Grounding large language models using 3d scene graphs for scalable task planning.\" CoRR (2023).\n\n[6] Wu, Pengying, et al. \"Voronav: Voronoi-based zero-shot object navigation with large language model.\" arXiv preprint arXiv:2401.02695 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rc,\ntitle={R2C: Mapping Room to Chessboard to Unlock {LLM} As Low-Level Action Planner},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ysAX5ORQoX},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper explores the potential of leveraging large language models (LLMs) as low-level action planners capable of executing long-horizon tasks based on natural language instructions. Although LLMs can act as the \"brain\" of robots by excelling in high-level task planning, they are not yet capable of directly guiding the \"body\" to execute low-level motion plans. This limitation stems from a communication gap between the \"brain\" and the \"body\". Specifically, LLMs lack access to rich spatial semantic information from the robot's real-time observations, hindering their ability to generate precise and actionable low-level plans.To address this, we propose a unified framework that bridges high-level and low-level planning by establishing an efficient communication interface between LLMs and robots. Our insight is to formulate the task as playing chess with LLMs. We map the room into a semantic chessboard, which we call Room to Chessboard (R2C). Each grid represents the position and size of objects inside the room. We find that chessboard is \\textbf{succinct} enough for LLMs to conduct semantic searches with global view of the room. Also, the chessboard is \\textbf{informative} enough to convey detailed environmental state for LLMs to predict executable low-level actions. Additionally, we enhance decision-making through a Chain-of-Thought (CoT) paradigm, improving LLMs' interpretability and action reasoning. We implement R2C using both fine-tuned open-source LLMs and closed-source models like GPT-4, and demonstrate its efficacy on the challenging ALFRED benchmark. Our results show that with communication based on chessboard, LLMs can serve as effective low-level action planners, and can generalizes well to open-vocabulary robotic planning tasks. View the demos on our project page: https://anonymous4cv.github.io/Room2Chessboard."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Embodied AI",
"Large Language Model",
"Embodied Instruction Following",
"Robotic Planning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0ea71a8c595c708f9683e90c0106a698bedadd82.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "R2C: Mapping Room to Chessboard to Unlock LLM As Low-Level Action Planner"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ysQiaWhnCN | Autoverse: an Evolvable Game Language for Learning Robust Embodied Agents | main | Active | open-ended learning;reinforcement learning;imitation learning;evolution;search | reinforcement learning | 3;3;3;5 | 3;4;4;3 | 2;1;1;2 | 3;2;2;3 | 2;1;1;1 | 3.5 | 3.5 | 1.5 | 2.5 | 1.25 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. How can the diversity of the generated environments be evaluated more accurately and quantitatively? Besides the current qualitative analysis, are there other more objective and comprehensive indicators to measure the differences and complexity between different environments to better prove the effectiveness of environment generation?\n2. How does the paper dynamically regulate the evolution speed of the environment to avoid being too fast for the agent to learn or too slow to cause resource waste?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The design of Autoverse combines a domain-specific language with cellular-automaton-like rewrite rules and achieves efficient computation through convolutions, providing new perspectives and methods for game environment generation and agent training.\n- Through the evolution of the environment, the complexity of the environment can be gradually increased according to the search ability of the agent, effectively avoiding the agent from prematurely falling into local optimal solutions and also providing a curriculum learning from simple to complex for the agent."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes Autoverse, an evolvable domain-specific language for single-player 2D grid-based games, as a training ground for Open-Ended Learning (OEL) algorithms. It describes game mechanics through rewrite rules similar to cellular automata, combines evolutionary algorithms to generate complex environments, employs imitation learning and reinforcement learning to train agents, and experimentally studies the impact of the observation range on agent performance and the dynamic characteristics of the evolved environments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Overall, the paper has limitations in terms of innovation. The framework of combining imitation learning and reinforcement learning used is not novel and has been involved in many current related research fields.\n2. Although the rewrite rules are highly expressive, they are difficult to understand and interpret, which may limit their promotion and further development in practical applications, especially when manual intervention of the rules is required.\n3. In the process of environment evolution, only mutation operations are involved, and crossover operations are not included. This may limit the exploration range of the diversity of environment generation to some extent.\n4. The main experiment is lacking. There is a lack of direct comparison experiments with other advanced methods, making it difficult to accurately evaluate the advantages and disadvantages of Autoverse and clearly define its competitiveness in this field.\n5. The long-term training effect experiment is insufficient. The experimental results mainly present the immediate performance data under specific settings. The performance change trend after a significant increase in the training cycle, the stability of the agent's strategy after a large number of environmental changes, and the evolution law of the long-term adaptability to new environments are not provided, making it difficult to judge the long-term comprehensive ability development.\n6. The coherence of the chapter structure is poor in some parts. For example, the transition from the method description to the experimental results is not natural, affecting the reader's understanding of the logical relationship of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Being able to generate a vast number of environments is very interesting from an open-endedness perspective. However, many of the features that enable such versatility and efficiency limit Autoverse to grid-like, 2D, and seemingly small environments. Can methods developed in Autoverse be applied to more realistic scenarios (that are ultimately of interest)? Examples of these alternative (also open-ended) scenarios are Craftax (Matthews et al., 2024) or MineDojo (Fan et al., 2022). If not applicable, then what is the relevance of Autoverse?\n\n- Is the grid size constant in all of the generated environments? If not, would it be possible to modify the framework to hold grids of different sizes? \n\n- How does the search method (used for the \"warm-start\") scale with the size of the grid and number of actions? \n\n- What is (approximately) the ratio of unusable and usable environments generated by the evolutive algorithm? With unusable I refer to environments so unstable that are not beneficial for the optimization process of the RL agent. \n\n- Do the reward update rules always take into account the actions of the agent? or reward can be generated by multiple cells of the CA interacting with each other with no relation to the actions of the agent?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- The idea of using CA-based environments is both original and interesting. The fact that practically limitless environments can be created by modifying the initial state and the update rules is very interesting from an open-ended learning perspective.\n\n- I also find CA environments relevant for research on foundation models for RL. This type of environment could be very valuable for generating training data for these types of models.\n\n- I think that the presentation of Autoverse (Section 2.1) is clear and can be easily understood, whereas Figure 1 also helps to visualize the types of environments that are generated by the evolution process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel 2D and grid-like environment framework, named Autoverse, based on Cellular Automata (CA). They define environments using the initial state of the CA, and the update rules (including state transition rules and reward). This allows authors to evolve environments in an open-ended RL setting. Moreover, the authors also propose an open-ended RL method based on bootstrapping the open-ended learning by pre-training the agent using an initial set of evolved environments and expert trajectories from a search method. Experiments show that the evolution process generates many chaotic environments, stable environments, and others in between (the ones of most interest). Furthermore, the open-ended method seems to benefit from fully observing the environment and access to the update rules that update the cells of the CA."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Although I think that the main idea (evolvable CA environments) is very interesting and could be promising for areas as foundation models for RL, UED, and open-ended learning. I have important concerns on several aspects of the paper:\n\n- Although most ideas are clearly explained, the overall presentation and soundness of the paper are poor. \n + The introduction section makes many (non-trivial) statements with no references. The introduction has no references, which I found surprising. Some examples of such sentences missing references and evidence/support are:\n - L31: \"The idea of open-ended learning in virtual environments is [...]\"\n - L32: \"This idea comes in many forms, but what unites them all is that [...]\"\n - L38: \"There have been interesting results, but learning generally stops at a rather low capability ceiling.\"\n - L40: \"It has been observed that the complexity of the behavior of a living being, [...]\"\n\n- The ability to endlessly evolve and generate new environments is compelling, but as mentioned multiple times in the paper (e.g., L71, L413) evolutive process generates very unstable environments in most cases. I have serious concerns about the ratio of unusable and usable environments generated by the evolutive process. Authors claim scalability, but how much computational resources are needed to generate a fair amount of actually usable scenarios? I think that an exhaustive analysis of this topic is crucial and missing in the current version of the paper. \n\n- I think that the paper misses many experimentation details. For instance, experiments shown in Table1 and 2 are missing information on the number of evaluations, the number of generated test/train environments, the number of repetitions, etc. Seems that this information is also missing in the appendix. Moreover, none of the appendix sections are referenced in the main paper. 
Please, consider providing as many details as possible on the experimentation to ensure transparency and reproducibility. Moreover, I strongly believe that all sections of the appendix should be referenced at least once in the main paper.\n\n- I strongly believe that experimentation should be improved. The authors propose a \"warm-start\" method for open-ended RL, but they do not provide any evidence for why this method is relevant for the research community. I won't ask for a comparison with tens of methods from previous literature, but proposing a new method at least requires an exhaustive analysis of it. Some examples of how I think this could be improved:\n - Reward values in Tables 1 and 2 are missing interpretability. I can see that the results of some settings are better than others, but how much? Having some sort of reference value could help (e.g., results of a random agent or some well-known baselines).\n - Why is this method interesting? Does it obtain better results? Does warm-staring help to improve the results? Many experiments could be presented to answer these questions. \n\n- The paper claims computational efficiency but no evidence is provided. Experiments/benchmarks on computational performance are missing if such claims are made. \n\n- I think that the presentation has room for improvement. Some examples:\n - Figure 1 is presented is located in page 2, but is referenced on page 8 for the first time. \n - Figure 3 holds an entire page but some of its text (Fig 3.a) is **extremely** small. There is plenty of blank space on Page 3 to increase some parts of the figure or rearrange them to improve visibility.\n - The format of tables 1 and 2 is not aligned with the ICLR style. Please carefully read the style PDF before submitting the paper.\n - There are many incorrect usages of parenthesis in references. For example, parenthesis in Earle et al. 
2023 L271 should be removed, and Section 4 is missing parentheses in most of its references: L470-471, L476, L482..."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Please clarify any confusing parts in the figures and tables.\n\n2. Please provide additional experiments to support each claim in the paper, such as scalability and comparisons with other environments.\n\n3. Please ensure that the figures and results are presented neatly and clearly."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Autoverse offers efficiency by leveraging GPU-based batch processing. Its rewrite rule framework enables the creation of a vast array of dynamic, grid-based game environments, enhancing agent adaptability and preventing overfitting to static setups. The method's progressive curriculum, which evolves increasingly complex environments, allows agents to improve incrementally, while the integration of imitation learning with RL provides agents with a solid starting foundation. These strengths make Autoverse an innovative and robust tool for advancing research in open-ended learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Autoverse, a special programming language for creating different types of grid-based game environments that help with open-ended learning in RL. Autoverse lets people build complex, changing game settings, especially for 2D, single-player games like mazes, Sokoban puzzles, and dungeon exploring. It uses simple rules, based on cellular automata, to quickly create these environments, which run well on a GPU. The system first teaches RL agents by having them imitate expert examples, then moves to open-ended RL in constantly changing environments to make the agents more adaptable."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "First, there are presentation issues, as figures (like 3 and 4) lack clarity—small text sizes, ambiguous elements, and layout issues reduce readability. Second, empirical evidence demonstrating scalability is lacking; additional experiments or metrics require to solidify this claims. The paper fail to do performance comparisons of Autoverse’s combined imitation and RL approach against using either method alone. While it states that Autoverse’s environments are more complex and diverse than others, this claim lacks citations or direct comparisons with other open-ended learning benchmarks. Finally, the paper does not report the performance achieved after the behavior cloning stage, leaving its contribution to the final results ambiguous. Addressing these points could greatly enhance the paper’s clarity, rigor, and persuasive power."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "In general, I would really appreciate it if the authors could provide more details and discussions of the proposed method performance and the reasons why they design the whole method and environment in this way. And also, the comparison with other baseline or ablation study will be welcomed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1, Originality: The paper presents Autoverse, a new environment for open-ended learning, which allows more complex environment dynamics and much more environmental diversity than other open-ended learning environments.\n2, Quality: The use of JAX for implementing cellular-automaton rules allows efficient parallelization on GPUs, and at least an order of magnitude speedup.\n3, Significance: The use of imitation learning followed by reinforcement learning provides a structured way for agents to learn from expert play traces and then further refine their behavior, leading to better generalization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Autoverse, a domain-specific language (DSL) for creating 2D grid-based games, designed to enhance open-ended learning for Reinforcement Learning (RL) agents. Autoverse allows for evolving complex game environments using cellular-automaton-like rewrite rules, which can be parallelized on the GPU to speed up RL training. The authors propose a framework involving the evolution of environments, imitation learning from search-based solutions, and reinforcement learning in evolving environments to generate increasingly challenging tasks for RL agents. This approach aims to overcome the challenges of cold-starting RL in open-ended environments and produce more behaviorally complex and adaptable agents. They first evolve Autoverse environments (their rules and initial map topology) to maximize the number of iterations required by greedy tree search to discover a new best solution, producing a curriculum of increasingly complex environments and playtraces. The proposed method then distill these expert playtraces into a neuralnetwork-based policy using imitation learning. Finally, the learned policy becomes as a starting point for open-ended RL, where new training environments are continually evolved to maximize the RL player agent’s value function error (a proxy for its regret, or the learnability of generated environments), finding that this approach improves the performance and generality of resultant player agents."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1, Presentation Improvement: Provide more detailed feedback, such as clarifying ambiguous elements like the meaning of the gray box in Figure 4 and addressing layout issues with full-page figures such as figure 3 and figure 4. And the texts in figure 3 are too small.\n2, Scalability Evidence: Request empirical evidence or additional explanations to demonstrate scalability, including experiments or metrics.\n3, Justification for Greedy Tree Search: Maybe adding references in related work and experimental validation to justify the choice of greedy tree search.\n4, Comparison of Methods: Recommend including a comparison between the proposed approach and using only imitation learning or reinforcement learning.\n5, Performance Results: The paper mentioned \"Once the behavior cloning algorithm has converged, we continue training the agent with reinforcement learning\", but there is no results show that what kind of performance can this method reach when behavior cloning algorithm has converged and what is the final performance compared to that stage.\n6, Lack explanation of comparing with other environments: In the paper \"Autoverse stands out for allowing more complex environment dynamics and much more environmental diversity than other open-ended learning environments.\" But I did not see any citation or detailed comparison with other open-ended learning environments. Further discussion will make the arguments more convincible."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce a fast, evolvable game language, and do open-ended search in the space of game mechanics and levels to train generalist player agents"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024autoverse,\ntitle={Autoverse: an Evolvable Game Language for Learning Robust Embodied Agents},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ysQiaWhnCN},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce Autoverse, an evolvable, domain-specific language for single-player 2D grid-based games, and demonstrate its use as a scalable training ground for Open-Ended Learning (OEL) algorithms. Autoverse uses cellular-automaton-like rewrite rules to describe game mechanics, allowing it to express various game environments (e.g. mazes, dungeons, sokoban puzzles) that are popular testbeds for Reinforcement Learning (RL) agents. Each rewrite rule can be expressed as a series of simple convolutions, allowing for environments to be parallelized on the GPU, thereby drastically accelerating RL training. Using Autoverse, we propose jump-starting open-ended learning by imitation learning from search. In such an approach, we first evolve Autoverse environments (their rules and initial map topology) to maximize the number of iterations required by greedy tree search to discover a new best solution, producing a curriculum of increasingly complex environments and playtraces. We then distill these expert playtraces into a neural-network-based policy using imitation learning. Finally, we use the learned policy as a starting point for open-ended RL, where new training environments are continually evolved to maximize the RL player agent's value function error (a proxy for its regret, or the learnability of generated environments), finding that this approach improves the performance and generality of resultant player agents."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"open-ended learning",
"reinforcement learning",
"imitation learning",
"evolution",
"search"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8f3ada71489e4f794e139d0c14acce7ccc031f99.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/450f50e235f70445123423795e91165aac06839d.pdf"
},
"title": {
"value": "Autoverse: an Evolvable Game Language for Learning Robust Embodied Agents"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ysZvK6b60c | CALoR: Towards Comprehensive Model Inversion Defense | main | Active | Privacy Leakage;Model Inversion;Defense | alignment, fairness, safety, privacy, and societal considerations | 3;5;5;5 | 4;5;5;4 | 1;2;3;2 | 1;2;2;2 | 1;3;3;2 | 4.5 | 4.5 | 2 | 1.75 | 2.25 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see Weaknesses section above for a list of all questions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) This paper is written well and it is easy to follow.\n\n2) The proposed method obtains noticeable improvements (Table 1, Table 2) under IF and PLG-MI attacks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a method for improving defense against generative Model Inversion Attacks (MIA), called Confidence Adaptation and Low-Rank Compression (CALoR). The proposed method demonstrates noticeable improvements over existing defense methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Given that MIAs search for high-likelihood samples for a particular class, use of $L_{CA}$ basically manipulates the region where such high-likelihood samples are present, e.g..: Now it’s present in a region where $p\\_c \\approx 0.8$ and the MIA is searching in the region corresponding to $p_c~=1$. What would happen if the MIAs were aware of this confidence training? I suspect that a substantial amount of performance could be recovered if the identity loss is adjusted to align with $L_{CA}$ during model inversion. Conducting this ablation study is crucial.\n\n2) **Using encoder features directly for MIA:** My understanding is that the proposed method focuses on white-box MIA attacks. What would happen if the adversary directly used the encoder features to perform MIA (by applying softmax directly to the encoder features) instead of the LORA features?\n\n3) **User studies are necessary to show the efficacy of MI defense.** Since this work focuses on defending against private data reconstruction, it is important to conduct user study to understand the improvements (See [A, B]).\n\n4) Why is the gap between TL and CALoR much smaller in the PLG/ImageNet setting compared to other setups (see Table 2)?\n\n5) Error bars/ Standard deviation for experiments are missing.\n\n6) It would be useful to indicate that the paper's focus is white-box MIAs.\n\n7) Missing related works [A]\n\nOverall I enjoyed reading this paper. But in my opinion, the weaknesses of this paper outweigh the strengths. But I’m willing to change my opinion based on the rebuttal.\n\n=========\n\n[A] Nguyen, Bao-Ngoc, et al. \"Label-only model inversion attacks via knowledge transfer.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[B] [MIRROR] An, Shengwei et al. MIRROR: Model Inversion for Deep Learning Network with High Fidelity. Proceedings of the 29th Network and Distributed System Security Symposium."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The presentation is clear and well organised.\n\nThe empirical results are encouraging."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduce a novel MI defence, namely CALoR, that aims to improve MI robustness by revisiting three potential weakness aspects of MI attack: MI objective, MI overfitting, and MI optimisation.\n\nThe empirical results are encouraging through various MI setups."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While I appreciate the broad exploration of MI robustness in this paper, it feels more like a technical report rather than an in-depth investigation for each aspect. The proposed method consists of three modules: confidence adaptation, low-rank compression via a VAE-inspired framework, and Tanh activation. The concept of confidence modification has been somewhat investigated in LS defence, and the impact of vanishing gradients on MI is well-studied in PPA, LOMMA, and PLG-MI. These modules offer trivial and straightforward ways to mitigate MI attacks but lack novel insights or a significant contribution to MI.\n\n2. The concept of confidence modification is somewhat similar to LS defence. If I understand it correctly, modifying the confidence by (positive) LS also offers the similar concept. However, I am quite surprise that (positive) LS and CA has opposite effect to MI. Could the authors provide an explanation on this?\n\n3. While the experimental results are encouraging, the setups differ notably from those in existing studies. I suggest that the authors add additional experiments to strengthen the paper:\n- For the low-resolution scenario, include LOMMA (rather than IF) as it is a state-of-the-art MI attack alongside PLG.\n- For the high-resolution scenario, include PPA (rather than PLG) as it is a state-of-the-art MI attack alongside IF.\n- The evaluation model used in the high-resolution scenario also differs from existing works. I am curious if this may affect the experimental setups.\n\n4. For the ablation study, I recommend that the authors expand Table 3 to better demonstrate the effectiveness of each module in CALoR. The current results highlight the contribution of CA, but the roles of low-rank compression and Tanh activation remain unclear to me. To clarify, I suggest splitting the results for LoR into LoR with Tanh and LoR without Tanh.\n\n5. The paper should include a paragraph addressing the weaknesses of the proposed method. 
One potential weakness is its complexity relative to existing defenses like LS or TL. With three different modules, the proposed approach may be challenging to adapt to new setups. As shown in the paper, in high-resolution scenarios with ImageNet as the pre-training dataset, CALoR demonstrates more difficulty in adapting compared to LS or TL."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* To clarify the novelty of the proposed method, the authors should elaborate on the key differences between their approach and positive label smoothing [r2]. Without it, the contribution of the paper is limited.\n\n* Could the authors provide a more comprehensive comparison with other state-of-the-art defenses, particularly high-resolution techniques like PPA and MIRROR, on the official benchmark?\n\n* Could the authors include model accuracy metrics in Section 3.2 to better understand the trade-off between defense effectiveness and model utility?\n\n* How does the proposed method perform on larger datasets like VGGFace/VGGFace2/CASIA?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is well-written and easy to follow.\n\n* Extensive experiments on both low-resolution and high-resolution attacks demonstrate the effectiveness of CALoR in defending against State-of-the-art MI attacks.\n\n* Perform MI attacks using modern model architectures like ViT-B/16 and Swin-v2."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents Confidence Adaptation and Low-Rank compression (CALoR), a new defense strategy against model inversion attacks (MIAs). To mitigate MIAs, CALoR employs three key techniques: (1) confidence adaptation to reduce the confidence of private training samples, (2) low-rank compression to prevent attackers' ability from mitigating the overfitting, and (3) Tanh activation to induce gradient vanishing. Extensive experiments on both low-resolution and high-resolution attacks demonstrate the effectiveness of CALoR in defending against State-of-the-art MI attacks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The experimental setup raises some questions. While PPA/MIRROR is primarily designed for high-resolution images, the experiments using PPA/ MIRROR are performed on low-resolution images. Given that StyleGAN-ADA generates 256x256 images, it's unclear how the authors adapt it to low resolution images (64x64). Similarly, PLGMI, originally designed for low-resolution attacks, is adapted for high-resolution scenarios. More details on this adaptation process should be explained.\n\n* Section 3.2 demonstrates that reducing confidence in private training samples lowers attack accuracy. However, without reporting the model accuracy, it's difficult to determine if this reduction is solely due to a decrease in model utility. As existing work [r1] suggests a strong correlation between model accuracy and MI attack success, it's possible that any action diminishing model performance, including confidence reduction, could impact attack accuracy. \n\n* The idea of reducing confidence in private training samples is similar to using label smoothing which is addressed in LS defense [r2]. Note that LS[r2] shows that positive label smoothing does not have such strong effect on defending against MI attacks.\n\n[r1] Zhang, Yuheng, et al. \"The secret revealer: Generative model-inversion attacks against deep neural networks.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\n[r2] Lukas Struppek, et al. \"Plug & play attacks: Towards robust and flexible model inversion attacks\". In ICML, 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Why the hyperparameter a is needed in Eqn. (2)? I think it can be incorporated into the step size.\n\n2. How to properly set the hyperparameter b? Any theoretical justification?\n\n3. In Table 4, a rank of 20 also shows a good performance. Why highlight the rank of 30 in the main text?\n\n4. How to properly set the low rank across different datasets? Is a rank of 20/30 a safe value for various datasets?\n\n5. Is there any scenario where the proposed defense method fails? It would be great if the authors could find any case with deeper analysis."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The model inversion attacks are very important and the privacy concern of the (private) training data should be carefully handled.\n\n2. The overall empirical performance compared with other existing defense work is impressive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a defense method named CALoR against model inversion attacks. There are two key components of the proposed method. One is biasing the attack optimization target through confidence adaptation and the other is low-rank compression to mitigate the privacy leakage."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of this work is extremely poor which makes it look like an incomplete paper, especially for the methodology part. For example, after reading Section 3.2, I still have no idea about how to ensure the utility of the classification model after executing the second stage. In Section 3.3, the authors talk about the latent vector z but without further discussing why z is relevant to their method. Lots of main details are missing/require the audience to refer to the appendix. I suggest the authors revise the manuscript significantly to make it more clear.\n\n2. The motivation of Eqn. (2) is not clear. Through the simple calculation of the derivative, we can see that the final objective is to ensure that \\hat{y}_c should converge to exp(-1/b). Why not directly use the MSE loss to mislead the attack objective? Additionally, the hyperparameter a in Eqn. (2) does not play much role.\n\n3. Low-rank compression is a common technique to cause information loss which can be further utilized to protect private input data. I would suggest the authors weaken their claim about the contribution to this part and shorten the relevant paragraphs.\n\n4. This work lacks theoretical analysis about the performance decrease bound due to confidence adaptation and low-rank compression which puts the utility of CALoR into question.\n\n5. I would suggest the authors report the uncertainty (e.g., standard deviation) of the experiment results as well."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We provide an in-depth analysis of intrinsic vulnerabilities of model inversion attacks, and propose a novel and comprehensive defense framework CALoR, including confidence adaptation and low-rank compression."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024calor,\ntitle={{CAL}oR: Towards Comprehensive Model Inversion Defense},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ysZvK6b60c},\nnote={under review}\n}"
},
"abstract": {
"value": "Model Inversion Attacks (MIAs) aim at recovering privacy-sensitive training data from the knowledge encoded in the released machine learning models. Recent advances in the MIA field have significantly enhanced the attack performance under multiple scenarios, posing serious privacy risks of Deep Neural Networks (DNNs). However, the development of defense strategies against MIAs is relatively backward to resist the latest MIAs and existing defenses fail to achieve further trade-off between model utility and model robustness. In this paper, we provide an in-depth analysis from the perspective of intrinsic vulnerabilities of MIAs, comprehensively uncovering the weaknesses inherent in the basic pipeline, which are partially investigated in the previous defenses. Building upon these new insights, we propose a robust defense mechanism, integrating ***C**onfidence **A**daptation* and ***Lo**w-**R**ank compression*(**CALoR**). Our method includes a novel robustness-enhanced classification loss specially-designed for model inversion defenses and reveals the extraordinary effectiveness of compressing the classification header. With CALoR, we can mislead the optimization objective, reduce the leaked information and impede the backpropagation of MIAs, thus mitigating the risk of privacy leakage. Extensive experimental results demonstrate that our method achieves state-of-the-art (SOTA) defense performance against MIAs and exhibits superior generalization to existing defenses across various scenarios."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Privacy Leakage",
"Model Inversion",
"Defense"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/891b7958f5e5153a88b9d67a7776f8f995656112.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/24b802789ee485d9de366c1ba5a2da764187a883.zip"
},
"title": {
"value": "CALoR: Towards Comprehensive Model Inversion Defense"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yspBoIZJ9Z | Enhancing Video Understanding with Vision and Language Collaboration | main | Active | Video understanding;video pre-trained model;vision-language model;collaboration learning | applications to computer vision, audio, language, and other modalities | 3;5;5;6 | 4;4;4;4 | 2;3;2;3 | 2;2;2;3 | 2;3;3;3 | 4.75 | 4 | 2.5 | 2.25 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "While the motivation of this work is strong and interesting, not much novelty is observed in the study. The gain of this work is also minor compared to the previous work and the variance is not reported. Furthermore, some of the ablation study (like the number of negative text) seems to be very sensitive to the variance. It is unclear whether the gain in Table 1 and the ablation in Table 3 are due to the variance."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tFigure 2 is well designed and fully captured the overall training pipeline. \n2.\tThe paper is well written and simple to follow.\n3.\tThe motivation of this work is strong and interesting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to improve the spatial understanding of the existing video-language model for the classification task. This is implemented with three different optimization losses, including the classification task (classify the video into predefined classes), contrastive loss (aligning the video with the text descriptions associated with the video) and spatial knowledge transfer task. The spatial knowledge transfer task utilizes the frozen large vision language model to guide the learning of video encoder, by aligning the video token with the image token. Extensive experiments and ablation are conducted on different video datasets and backbones to demonstrate the effectiveness of the proposed training strategies."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe main contribution of this work is unclear. The classification loss and contrastive loss are common in video pretraining and video classification. Moreover, the alignment with L1 loss between the image token and the video token is also not considered novel. \n2.\tThe most interesting section in this work is the cross attention section in Eq 6-8, which leverages the segmentation model to guide the alignment. However, while the author shows that the result is not performing well, the explanation in L258-260 is not sound and solid. It will be interesting if the author can provide more study and in-depth discussion.\n3.\tL297-L299 is unclear. What does it mean by “dissimilar description would be less linked to the action”?\n4.\tThe conclusion from the negative test sampling ablation is unclear. Figure 3 shows a reflection point where the performance of the model has a minimum. Is there any reason why this is happening? What is the variance of each data point in Figure 3?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In lines 42-44, the author points out the \"emphasis on capturing temporal information\" and illustrates how this is a common phenomenon through the heatmap in Fig. 1(c). How does this demonstrate that the method proposed in this paper can address this issue, and what is the specific principle behind the solution?\n- The description of the experimental results is quite imprecise. The performance of existing methods for Top-1 has exceeded 88.5% [1] and reached 89.7% two years ago [2], while the proposed method only achieves 86.9%. Therefore, it cannot be claimed that this method has achieved SOTA. \n- In Line 267, how can we ensure that the descriptions generated by the decompose-expand prompting are accurate? What would be the consequences if there are errors?\n- Additionally, please include the Top-5 results in the experimental outcomes.\n\n\n[1] Rethinking Image-to-Video Adaptation: An Object-centric Perspective\n[2] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The writing is very good, relatively easy to understand, and easy to follow as well.\n+ Using VLM to enhance video understanding is a very meaningful issue, and the results of this paper are good."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work addresses the limitations of current video pre-trained models by integrating large VLMs into a cross-modal collaborative knowledge transfer framework. This method includes an attentive spatial knowledge transfer method that enhances spatial understanding and a contrastive textual knowledge transfer method for improved video-text alignment."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It is insufficient to prove the core motivation using only one figure (Fig. 1 c).\n- the article lacks an analysis of the consumption of training costs."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- On the ablation of negative text sampling, can the authors provide higher number of k (why is 1200 the current limit). In addition, It is no meaningful to stated that sampling 200 negative text yield the third-best Top-1 accuracy. Please provide a better explanation of the initial accuracy drop when k increase from 200 to 600, and subsequently improve again. How does the authors know the diminish was due to easy negatives? Is there a way to measure if easy negative was sampled in early stage?\n- On the evaluation of gate mechanism, the improve with gate mechanism is relatively low (+0.24%). It would be to provide analysis on the class that is accurately classified with gate mechanism, and analyse if the corresponding attention map (with and without gate mechanism) match the hypothesis.\n- What is the number of description generated for each category. Has the authors analyse the score of the positive text and observed cases where the scores are generally too low for some cases."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ This work proposes a cross-modal collaborative knowledge transfer to adapt video pre-trained model for downstream tasks. Specifically, spatial knowledge is distill from VLM's image encoder via attentive spatial knowledge transfer. Then, textual knowledge transfer improve video representation via fine-grained text-video alignment.\n+ A gating mechanism is proposed to guide the distillation process to focus more on the action relation region and less on broader content.\n+ To improve the text-video alignment, a decompose-expand prompting method is proposed to improve the fine-grained and diverse description. This provide more training cue than a single description or class name. Then, a contrastive textual knowledge transfer learn the semantic knowledge.\n+ The proposed method achieve consistent improvement over multiple baselines and datasets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to leverage Vision Language Models (VLM) for cross-modal collaborative knowledge transfer to improve the spatial and semantic understanding for video data. The proposed approach aim to reduce the temporal gap between pre-training dataset and downstream tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I don't see obvious weaknesses in this work. Please refer to the Questions Section for some clarification."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why does the learning rate of the optimizer reported in the implementation details section differ from that in the appendix?\n\n- Why were experiments not conducted on the mainstream video dataset Something-Something V2 (SSV2) for action recognition?\n\n- How are the weight coefficients for the loss functions obtained?\n\n- Line 72 should be written as (e.g., CLIP). There are many similar issues throughout the manuscript.\n\n- Many equations are missing symbols, such as Equations (6) and (7). Additionally, the bold and italic formatting in the equations is inconsistent.\n\n1. Line 238 is missing punctuation.\n\n2. Line 313, 'Eq. equation 2' is so poor.\n\n3. The capitalization of the first letters in the paper title is inconsistent."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "+ Introducing VLMs and transfer learning for video understanding is reasonable.\n\n+ The experimental results show slight improvements over previous methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a cross-modal collaborative knowledge transfer method for video action recognition. The framework consists of three key components: video adaptation, attentive spatial knowledge transfer and contrastive textual knowledge transfer. These three components correspond to three loss functions. Experiments on three datasets achieve state-of-the-art results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The paper conducts experiments on video action recognition, but the title and abstract do not explicitly mention this task. Video understanding is a relatively broad concept.\n\n- The three technical contributions result in only marginal improvements, especially as shown in Table 3. It is unclear which loss function serves as the main loss."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024enhancing,\ntitle={Enhancing Video Understanding with Vision and Language Collaboration},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yspBoIZJ9Z},\nnote={under review}\n}"
},
"abstract": {
"value": "Leveraging video pre-trained models has led to significant advancements in video understanding tasks. However, due to the inherent bias towards temporal learning in video pre-training, these models fail to capture comprehensive spatial cues. Additionally, the widely-used supervised adaption methods lack fine-grained semantic guidance as single action labels cannot precisely depict the intra-class diversity. To address these challenges, we incorporate the general capabilities of large Vision Language Models (VLMs) and propose a cross-modal collaborative knowledge transfer method to enhance video understanding. First, we propose an attentive spatial knowledge transfer method that distills spatial knowledge from the VLM's image encoder, enabling the precise capture of spatial information. Next, we design a contrastive textual knowledge transfer approach that achieves detailed video representations through fine-grained text-video alignment. Owing to the cross-modal knowledge transfer, the video representations are capable of attending to informative spatial regions and aligning with fine-grained texts that carry rich semantics. Extensive experiments demonstrate that our method achieves state-of-the-art performance across various datasets, validating its effectiveness."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Video understanding",
"video pre-trained model",
"vision-language model",
"collaboration learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c295419a0f9c4ab20ed50271451f83dfed75c6c2.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Enhancing Video Understanding with Vision and Language Collaboration"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yt7nxONs3J | Prioritize Alignment in Dataset Distillation | main | Active | dataset distillation | applications to computer vision, audio, language, and other modalities | 3;5;5;6 | 4;4;4;5 | 3;3;3;3 | 2;3;2;2 | 3;3;3;3 | 4.75 | 4.25 | 3 | 2.25 | 3 | 0.662266 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The thesis would benefit from a more structured presentation, where the authors are encouraged to list observations and analyses in Chapter 2 in a systematic manner, similar to DTAM. Additionally, the content should be included in the methods section as a motivation or exploration segment to strengthen the logical flow and context of the proposed"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Figures 2 and 3 provide valuable insights into the effects of removing both simple and difficult samples, as well as the impact of shallow parameters on model performance.\n\n2. The authors present a comprehensive experimental analysis on a small-scale dataset, including ablation studies, hyperparameter analysis, and related discussions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the task of dataset distillation, which aims to compress a large dataset into a much more compact synthetic dataset while maintaining the performance of trained models. Existing methods rely on an agent model to extract and embed information from the target dataset into the distilled version. However, the authors identify that current approaches often introduce misaligned information during the extraction and embedding stages, which degrades the quality of the distilled dataset.\n\nTo address this, the authors propose Prioritize Alignment in Dataset Distillation (PAD), a method that aligns information from two main perspectives: Dataset Pruning and Deep Layer Utilization. This simple yet effective strategy helps filter out misaligned information, resulting in significant improvements for mainstream matching-based distillation algorithms. Additionally, when applied to trajectory matching, PAD achieves state-of-the-art performance across various benchmarks, showcasing its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The author's approach appears to be more incremental, incorporating only minor enhancements to DATM. Firstly, Section 3.1 serves as a review of previous work, and Equation (4) in Section 3.3 shows only slight modifications from Equation (3) in DATM. Additionally, Section 3.2 seems to function more as a heuristic for selecting difficult samples. Overall, the method introduces only two additional techniques compared to DATM, lacking a significant breakthrough in terms of innovation.\n\n2. in Table 2, most of the average improvements over the comparable method, DATM, are less than 1 point, indicating that the performance still requires further enhancement."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The manuscript employs DATM as a baseline, which, as I understand it, requires the pre-training of numerous agent models on the original dataset to record training trajectories. As discussed in w2, given the dynamic dataset scheduler nature of the process described in your methodology, does this imply the need to train additional agent models? It would be beneficial for readers if the authors could provide more detailed insights into the implementation specifics of this approach."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper demonstrates substantial improvements in DD through its experimental results in TAB. 1, showing that the PAD method significantly enhances the effectiveness of dataset distillation.\n2. PAD proves to be highly adaptable and generalizes well across multiple datasets, indicating that the method can be effectively applied to various dataset distillation tasks with consistent success."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces \"Prioritize Alignment in Dataset Distillation\" (PAD), a method improving dataset distillation by focusing on two main strategies. First, it adjusts data selection based on different Information Compression Ratios (ICRs) to match the required difficulty levels. Second, it enhances results by distilling only from deep-layer network parameters. The effectiveness of PAD is validated through experiments on standard datasets like CIFAR-10, CIFAR-100, and Tiny ImageNet, showing improvements over current DD methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. There are some formatting issues in the manuscript that need attention. In **Table 5**, the annotations (a) and (b) appear to be reversed, which could confuse readers. Additionally, a period is missing at the end of line 485 after \"smaller IPCs.\" Please correct these to enhance the clarity and professionalism of the document.\n2. There seems to be a mismatch between the issues presented in Section 2.1 and the methods proposed in Section 3.2. The former discusses a static selection method, while the latter introduces a dynamic approach. Could the authors clarify the connection between these sections to ensure the consistency of the methodology described?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In the main table (Table 1), it appears that multiple experiments were conducted to obtain an average, but subsequent tables present experimental values as single data points. I couldn't find any information in the paper regarding whether the results for each experiment were averaged, and I’m curious about this.\n\n2. The study exclusively uses deep layers; are shallow layers entirely unnecessary? Shallow layers likely play a role in producing a good embedding space, so is there a way to leverage them effectively?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. One advantage of this method is that it can be applied on top of other distillation methods, such as MTT and DM. As seen in cross-architecture analyses, it’s a general approach that enhances performance in a scalable, model- and dataset-agnostic way. Experimental results show that accuracy improved over traditional methods (Table 1), with increases across multiple datasets and architectures (Table 2).\n\n2. It seems the straightforward method is supported by detailed analysis and experiments. Experiments (Tables 4, 5 / Figure 5) effectively support the hypothesis that removing easy examples and using deeper layers improves performance, making the qualitative results intuitively understandable (Figures 4, 6).\n\n3. While the writing isn’t exceptionally well-crafted, it is organized in a readable and accessible manner."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the role of alignment in both the information extraction and embedding stages within the dataset distillation process. \nDuring the information extraction stage, alignment is achieved by selecting subsets of the dataset based on difficulty. For settings with a low images-per-class (ipc) count, incorporating a higher proportion of ‘easy’ data proves effective. Conversely, in high ipc settings, utilizing a larger portion of ‘hard’ data enhances distillation effectiveness. The EL2N score, analogous to model confidence, is employed as a difficulty metric.\nIn the information embedding stage, alignment is achieved by prioritizing deeper over shallower layers. Deeper layers are more adept at learning semantic information, resulting in a higher-level representation within synthetic data and more efficient distillation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I believe the contribution of this method is limited. The proposed method is very simple, and while it is supported by empirical experimental results, it lacks theoretical justification. Additionally, as mentioned in the paper, the data selection and distillation approach was already introduced by BLiP (Xu et al., 2023). Although the paper presents experimental results showing superior performance to this approach (Table 5.b), the metric for selecting difficulty (EL2N) was also previously proposed.\n\n2. The method seems somewhat sensitive to hyperparameters (AEE, IR). Accuracy varies depending on these hyperparameters (Table 4.a), with differences comparable to the performance increase over other methods in the main table. Additionally, as seen in Table 10, optimal hyperparameters change dynamically across datasets and ipc values, raising questions about the method’s stability. If it is indeed sensitive, hyperparameters may need to be tuned for each architecture and dataset, which raises concerns about the method’s practicality.\n\n3. Although the paper claims that performance improves when used alongside various distillation techniques, as mentioned in the limitations section, due to computing resource constraints, experiments were conducted only with DM and DC (and DATM in the main table).\n\nOverall, the paper demonstrates that a simple method can enhance distillation performance and can be used in a model- and architecture-agnostic manner. The experiments and logical development are well-executed; however, I believe the contribution of the method itself is limited, and it appears to be sensitive to hyperparameters. Therefore, I would rate it as [5: marginally below the acceptance threshold]."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. How does PAD compare to other recent advances in dataset distillation in terms of computational resources and scalability?\n2. Could the authors elaborate on the theoretical justification for prioritizing deep layer parameters?\n3. Are there any scenarios where PAD might not perform as expected, and if so, how does the method handle such cases?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The identification of misalignment in dataset distillation and the proposal of PAD to address it is a valuable contribution.\n2. The paper provides a clear motivation for the PAD method and supports its claims with extensive experiments.\n3. The approach of prioritizing deep layer parameters for distillation is innovative and leads to improved performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces PAD, a method aimed at enhancing the compression of large datasets into compact synthetic datasets while preserving model performance. PAD addresses the challenge of misaligned information during distillation by aligning data through selective pruning of the target dataset and leveraging deep layers of agent models in the distillation process."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It would be beneficial to prove the relationship between EL2N and the difficulty of training samples.\n2. The paper could provide a more detailed comparison with other state-of-the-art methods, especially in terms of computational efficiency and scalability. \n3. The theoretical analysis of why deep layer parameters are more suitable for distillation is lacking. \n4. The paper does not discuss potential limitations or failure cases of the PAD method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024prioritize,\ntitle={Prioritize Alignment in Dataset Distillation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yt7nxONs3J},\nnote={under review}\n}"
},
"abstract": {
"value": "Dataset Distillation aims to compress a large dataset into a significantly more compact, synthetic one without compromising the performance of the trained models. \nTo achieve this, existing methods use the agent model to extract information from the target dataset and embed it into the distilled dataset. \nConsequently, the quality of extracted and embedded information determines the quality of the distilled dataset.\nIn this work, we find that existing methods introduce misaligned information in both information extraction and embedding stages.\nTo alleviate this, we propose Prioritize Alignment in Dataset Distillation (\\textbf{PAD}), which aligns information from the following two perspectives.\n1) We prune the target dataset according to the compressing ratio to filter the information that can be extracted by the agent model.\n2) We use only deep layers of the agent model to perform the distillation to avoid excessively introducing low-level information.\nThis simple strategy effectively filters out misaligned information and brings non-trivial improvement for mainstream matching-based distillation algorithms.\nFurthermore, built on trajectory matching, \\textbf{PAD} achieves remarkable improvements on various benchmarks, achieving state-of-the-art performance."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"dataset distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b928959f4c9c002c1ae402c10fe8c608b2e8ddc3.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Prioritize Alignment in Dataset Distillation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ytn0rbIfOx | Formulating AutoML as a Variable-Length Optimization Problem: A Tree of Thought Approach with LLM-Driven Code Generation | main | Active | AutoML;Tree of Thought;LLM | optimization | 3;3;8 | 5;4;4 | 1;2;4 | 2;2;4 | 2;2;3 | 4.666667 | 4.333333 | 2.333333 | 2.666667 | 2.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See the Weaknesses section."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* Originality: It's a novel contribution in the field of AutoML by rethinking the traditional fixed-pipeline structure. It's the first method to integrate the ToT method for designing AutoML pipelines. Using LLMs for incremental code generation and presenting the decision-making process in the model construction is useful in practive\n \n* Quality: Good experimental design. The method is tested on multiple datasets (OpenML and clinical datasets) and against established AutoML algorithms (Auto-Sklearn, TPOT, H2O). It includes a maximum step length analysis.\n \n* Clarity: Well-organized and logically structured paper. The figures and tables are effectively used to illustrate key concepts and experimental results.\n \n* Significance: AutoML as variable-length optimization enables more adaptive model creation, and stimulate new research direction in the AutoML field with more focus on self-adjusting pipelines using language models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"Formulating AutoML as a Variable-Length Optimization Problem: A Tree of Thought Approach with LLM-Driven Code Generation\" presents a novel framework for Automated Machine Learning (AutoML) that adapts to task complexity by treating AutoML as a variable-length optimization problem. This approach differs from traditional AutoML methods, which rely on fixed pipelines, as it lets you adjust the model structure dynamically to better match various tasks. \n\nThe proposed framework leverages the “Tree of Thoughts” (ToT) approach along with Large Language Models (LLMs) to build machine learning models iteratively and sequentially. This means each decision in the model-building process is evaluated, allowing the models to gradually evolve with minimal manual effort. Moreover, LLMs also generate the necessary code for each step, turning model configurations into executable pipelines. In this way, it enhances transparency and reduces the need for human input. \n\nExperimental results show that TOT outperforms conventional fixed-structure AutoML systems across various datasets (OpenML datasets and clinical ones) by achieving superior model performance and adaptability. Key contributions include formulating AutoML as a variable-length optimization problem and applying the ToT method with LLMs for efficient search space navigation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major points:\n* Computation complexity due to LLMs. The computational footprint and latency can be substantial. Consider conducting a more detailed analysis of computational efficiency and resource requirements, and maybe include comparisons to existing AutoML methods.\n* Lack of clarity on the decision-making process in ToT. It's not very clear how different thought paths are evaluated or pruned throughout the optimization. Consider adding a flow diagram or pseudocode showing how the ToT method selects and ranks decisions at each stage\n* Lack of transparency on the code generation. \"If after several iterations, the program still does not perform satisfactorily, we manually coding the pipelines” -> How often does this happen? As a suggestion: describe in more detail how code validation and error handling works and mention how often manually coding is needed.\n\nMinor points:\n* In the first part of the paper, you refer to OpenML as a single dataset, not a platform that contains a collection of datasets\n* Figure 5 has overlapping ticks for the values of accuracy, precision, recall and f1-score"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. In Table 1, LLM-F shows the best performance rather than ToT on task 8, which means the bolding is incorrect. This suggests that ToT did not achieve SOTA performance across all tasks. Does the paper need to double-check all experimental results? \n\n2. In Table 1, for tasks 1, 2, and 10, the experimental results for RS-V and BO-V are identical, including the values in parentheses. Moreover, the paper does not clarify whether these values represent variance or standard deviation. A similar occurrence is observed in task 8, where RS-F and BO-F also produce identical results. Are these results a mistake by the authors? If not, could the paper provide a more detailed analysis and explanation?\n\n3. In Tables 2 and 4, the results indicate “NAN (NAN),” which is unclear. The paper previously mentioned that methods failing to produce results within an hour would not be compared. Could this mean that these results were not obtained within an hour? If so, I have three questions. First, since runtime varies across hardware and environments, what is the experimental platform? Are all experiments conducted on the same machine to ensure fairness? Second, why does the fixed-length LLM-F yield no results for task 2 while the variable-length TOT does? Finally, does “NAN (NAN)” imply that none of the five experimental trials produced results? Or does it mean that some of the five experiments did not produce results?\n\n4. In Table 2, RS-V shows a standard deviation of 0 on task 4 across five trials (we assume that it is the standard deviation), implying highly consistent results. However, in other tasks, RS-V’s standard deviations are 0.0167, 0.0268, and 0.0098, so I don't think it can achieve a standard deviation of 10^{-5} for this task. Could you please provide more data to substantiate this?\n\n5. 
In Table 4, Auto-Sklearn’s performance for task 1 is recorded as \"0.5 (0.0).\" Notably, while other methods score above 0.93 on this metric, Auto-Sklearn reaches only 0.5—an inconsistency not seen in other tasks. Furthermore, the standard deviation is 0.0. Both of these points remain unexplained in the paper.\n \n6. The comparative methods used in this paper include Auto-WEKA (2013), H2O (2020), and Auto-Sklearn (2019), and more recently (e.g., 2022-2024) SOTA methods in the field are expected.\n\n7. Lines 306-308 state, \"... LLM for fixed-length approaches, and our proposed ToT for variable-length. Each method was tested under both fixed (RS-F, BO-F, LLM-F) and variable (RS-V, BO-V, ToT) conditions.\" This indicates that LLM-F is a fixed-length approach. However, lines 342-344 describe \"The subpar performance of the variable-length LLM approach (LLM-F)... \" suggesting LLM-F is instead a variable-length approach. Could the paper clarify this inconsistency?\n\n8. This paper’s contribution centers on a variable-length AutoML process designed for broad applicability across diverse tasks. However, an essential question remains: how does the proposed method balance generality with accuracy? I am particularly interested in understanding whether the algorithm can outperform task-specific algorithms on complex or specialized tasks, such as those in regulated or medical domains. Could the paper consider providing additional experiments on the tasks with special scenarios to address this aspect?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The flexible AutoML framework uses variable-length optimization to adjust pipeline complexity based on the specific task requirements, increasing its adaptability and effectiveness.\n- The ToT method enables the model to efficiently explore large search spaces by prioritizing optimal paths through a structured decision-making process, enhancing its navigation and effectiveness.\n- This exploration could lead to new advancements in automated machine learning by leveraging the capabilities of LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an AutoML framework that reframes the AutoML pipeline as a variable-length optimization problem. Traditional AutoML methods use fixed-length pipelines, limiting their adaptability across tasks with differing complexities. The proposed method combines the Tree of Thoughts (ToT) approach with Large Language Models (LLMs) for dynamic pipeline generation. The paper conducted experiments on datasets from OpenML and clinical domains, which may require further analysis and explanation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The experimental design and results lack clear, logical explanations, and no code has been provided to facilitate the reproducibility of these experiments. Additionally, several concerns have been raised regarding the validity and rationale of the experiments, as detailed in Questions 1-5.\n\n- The study lacks comparisons with recent state-of-the-art methods, despite the availability of several comparative approaches and benchmarks within the field. It is recommended that the paper incorporates representative, cutting-edge methods from the past three years to more comprehensively validate the effectiveness of the proposed method.\n\n- The structure of this paper is somewhat confusing, and the research process and key findings are sometimes difficult to understand, making it difficult to read. Additionally, certain technical details are insufficiently explained, potentially hindering readers’ understanding of the core content. Comparative methods are mentioned only by name, lacking essential descriptions.\n\n- The manuscript contains several typos. For example, \"feature trnasformation\" in line 48 should be \"feature transformation,\" and \"dataset dataset\" in line 195 should be corrected to \"dataset.\" A thorough proofreading is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "There are too many open questions to condense them. I recommend trying to reply to the points I have raised in the box above. To summarize, the main areas for questions and suggestions are: \n\n* Put the paper into the greater context and current state-of-the-art of the AutoML field. \n* Describe the proposed method's missing details, limitations, and confounding factors.\n* Use an experimental setup that is sufficiently sophisticated for tabular machine learning and using LLMs trained on data from the internet (or that perform RAG with the internet). \n* Support claims, made in the abstract and introduction, with results in the paper."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "## Flexible AutoML due to LLM-Driven Code Generation \n\nThe core strength of the paper is its proposed idea to solve AutoML without a fixed search space. As far as I can tell, the formulation based on variable-length optimization proposed in this paper is novel. Moreover, due to using an LLM with code generation, the potential search space and operators employed to solve the ML tasks are fully flexible (even if the prompt instructs the LLM to focus on a pre-defined set of operators). \n\n## Agentic AutoML with Tree of Thought\nUsing a prompted LLM as an AutoML agent is a promising concept. Employing a tree of thought approach in this domain fits the problem well and is also original."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes automated machine learning (AutoML) with variable-length optimization, an approach in which the exact number of machine learning (ML) pipeline components is not determined beforehand. That is, no static search space is given; instead, in this work, a maximal number of components shall be freely combined. To solve variable-length optimization, the paper proposes using a large language model (LLM) (with tree of thought) that determines which pipeline component to use next and produces the Python code to run the pipeline. \n\nThe proposed approach is motivated by the claim that differently complex tasks require a different length for the ML pipeline.\nThe paper claims that the proposed method enhances efficiency, improves transparency, and outperforms traditional AutoML systems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "## Related Work & Baselines\n\nThis work makes several unsubstantiated and misguided claims about the AutoML field and AutoML systems. Moreover, it lacks a critical reflection of the current state of the art in AutoML. The short literature review in the appendix is severely limited and should be part of the main paper to put this work into greater context. \n\n* To illustrate, the abstract claims that fixed pipeline structures \"limit adaptability to diverse task complexities\". A large fixed pipeline structure that relies on hyperparameter optimization to determine the best pipeline component (e.g., which model or which preprocessing) is, by definition, adaptable to diverse tasks. The failure of such an approach is more the cost of exploration during optimization and the inadequacy of HPO algorithms for large search spaces, as mentioned in Line 84. Likewise, the paper claims \"[traditional AutoML] assumes that tasks share similar complexity levels\" (Line 44), which, as far as I know, no AutoML systems ever assume as there would be no need for AutoML if there is free lunch. This motivation and research problem need to be more clearly defined, and the paper must clearly showcase how traditional AutoML fails to solve a task because it cannot adapt to the task at hand (instead of due to overfitting or other problems).\n* The paper claims without references that AutoML systems like Auto-WEKA, H2O, and Auto-Sklearn 1.0 are mainstream (Line 39) and state-of-the-art (Figure 2, Line 298). Neither Auto-WEKA nor Auto-Sklearn are still mainstream nor state-of-the-art. H2O is a top contender but lacks modeling prowess compared to other systems. The paper especially ignores the current state-of-the-art, such as AutoGluon, MLJAR, LightAutoML, or FLAML. Please see the AutoML Benchmark [1] or the methods used by the top teams in Kaggle's AutoML Grand Prix [2] for current mainstream and state-of-the-art AutoML. 
\n* The authors briefly reference TPOT (Line 50) but do not differentiate their work from an evolutionary search process, which can also be seen as variable-length with pre-defined components. However, for the proposed method in this work, the components are also pre-defined by the prompt template (assuming the LLM abides by the prompt's instruction). Thus, I see a significant overlap between the formulation of variable-length optimization and any evolutionary search method (as common in NAS but less common in AutoML for tabular data). The authors also indicate this thought in Line 347 when speaking of an \"evolutionary path\". The distinguishing factor of this work is the code generation. \n\n## Problem with the Method \n\nWhile describing the method, the paper fails to mention several very important details. Moreover, some explanations in the method section prompt several alarming problems. In general, it is questionable how much of the method's performance comes from the idea or a multitude of confounding factors introduced by the implementation surrounding the core idea.\n\n* The paper states that examples were added to the prompt (Line 210), which means the LLM performs in-context learning. This is very interesting, but from the prompt template in Figure 7, it looks like the examples come from the same task. So, how these examples were created and added to the prompt can significantly impact the performance, which requires a separate ablation study. \n* In Line 234, the paper states, \"If, after several iterations, the program still does not perform satisfactorily, we manually code the pipelines.\" - does this mean the authors manually verified that their AutoML systems worked on each test dataset and even adjusted the automatically generated code manually? Moreover, how did you do this for the test datasets? How often did this happen? This should never occur, nor should a manual intervention ever be allowed to evaluate how well the AutoML system works. 
This points to a major problem with the reproducibility and generalizability of the proposed method. \n* Sections 3.3 and 3.4 are highly alarming and lack almost all the details required to explain them properly. Are you doing HPO with a grid or a random search? What search space? What model library? These are all necessary details to explain the methods that are missing. Moreover, the procedure explained in Sections 3.3 and 3.4 is entirely missing in the overview Figure 3. \n* In Section 3.3, the paper states that \"for classification tasks or mean squared error for regression tasks\" are used by the method. This should be the optimization metric (e.g., ROC AUC used later in the results, Line 308) and not hardcoded. The AutoML user must be able to specify the target metric. Likewise, the employed validation strategy (e.g., what kind of split) is very important to disclose here due to its impact on overfitting. \n\n## Experimental Flaws\n\nThe tabular AutoML field luckily has a comprehensive and fair benchmark for comparing AutoML systems as introduced by the AutoML Benchmark [1]. This benchmark explains and abides by many important standards for comparing AutoML systems and tabular machine learning methods. Yet, this work seems to ignore almost all of this in favor of a questionable evaluation protocol. At the same time, the paper ignores the added complexity of rigorous scientific evaluation when using LLMs. \n\n* The paper does not contain memorization tests for the tabular dataset used in the evaluation. As a result, it is impossible to determine from the paper if the results come from the LLM memorizing the best pipeline or querying it from OpenML or if it comes from its world knowledge. See [3] for an extended discussion. 
Furthermore, several of the datasets used in this paper have already been shown to be memorized by GPT-4 (the LLM used in this paper).\n* Claims about the state-of-the-art cannot be made by evaluating only ten datasets for AutoML, where an extensive range of tasks must be solvable. I recommend using a more concrete selection procedure that clearly specifies the limits of the experiments and relying on curated benchmarking suites. \n* The evaluation relies on repeated train-test splits, which is questionable for an evaluation strategy. Although this follows one prior work reference, for tabular machine learning and AutoML, 10-fold cross-validation is much more appropriate. This is done in the AutoML Benchmark or papers from other AutoML systems. \n* One of the last experiments looks at the performance when varying the max step sizes, which shows drastic differences in performance based on step size. Based on this, how was the maximal step size for all experiments chosen? Did the authors take the best one based on Table 5 and used this for their results (it looks like this is the case based on the numbers in other tables)? In other words, did the authors tune this hyperparameter on the test data? This is problematic and requires a much more extensive study independent of the final test data. \n* The paper fails to mention several details about how other AutoML systems were run, e.g., which metrics these methods optimize and how many resources they were given for the 1 hour. Depending on such settings, the AutoML systems might not be comparable. Likewise, it would be good to state how expensive it is to run this AutoML system with GPT-4. \n* How were the descriptions mentioned in Line 273 created? Some descriptions on OpenML are manually created and not informative or correct. How do you manually create such a description without biasing the evaluation for a benchmark? \n* The experiment in Section 5.1 is missing almost all the required information. 
What BO algorithms are used? What search space? What validation strategy? How do you train to perform variable-length surrogate models for BO? Neither the text nor the appendix allow the reader to understand what was compared here. Likewise, in Line 377, what is the search space for RS? Why did you not use AutoKreas for tabular tasks? \n* The NAS-Bench-201 experiments look very weird. The best pipelines for these are known on the internet and maybe to GPT-4, so how does the evaluation guard against this possibility? Furthermore, are the reported results correct? The scores for all metrics are identical, which seems very implausible. \n\n## Failure to Provide Evidence for Claims\n\nThe paper claims that the proposed method enhances efficiency, improves transparency, and outperforms traditional AutoML systems.\nNeither of these claims is supported by sufficient evidence in the paper. \n\n* \"outperforms traditional AutoML systems\": Given the flaws in the experimental design, such a general statement is questionable at best.\n* \"enhances efficiency\": The results never discuss improvements to efficiency (like time saved) but only focus on predictive performance. \n* \"Improves transparency\": This claim confused me throughout the paper. The examples that explain better transparency (Figure 2 and Figure 4) are practically equivalent to an optimization trace, which also any other AutoML system can produce. That is, at what time did which model or hyperparameter improve the validation score is not a new form of transparency for an AutoML pipeline. Moreover, tree of thought does not cause this kind of transparency either.\n\n## Minor\n* Line 47, \"trnasformation\"\n* In Equation 1, xi ∈ {Op1, Op2, Op3} needs to be a union of Op1, Op2, and Op3 instead. \n\n# References\n1. Gijsbers, Pieter, et al. \"Amlb: an automl benchmark.\" Journal of Machine Learning Research 25.101 (2024): 1-65. (https://jmlr.org/papers/volume25/22-0493/22-0493.pdf#page=20.40)\n2. 
AutoML Grand Prix 2024, https://www.kaggle.com/automl-grand-prix\n3. Bordt, Sebastian, et al. \"Elephants Never Forget: Memorization and Learning of Tabular Data in Large Language Models.\" arXiv preprint arXiv:2404.06209 (2024). (https://arxiv.org/abs/2404.06209)"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024formulating,\ntitle={Formulating Auto{ML} as a Variable-Length Optimization Problem: A Tree of Thought Approach with {LLM}-Driven Code Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ytn0rbIfOx},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in machine learning have created a demand for automated systems that enable efficient development and deployment of machine learning applications. Traditional Automated Machine Learning (AutoML) approaches often rely on fixed pipeline structures, which limit adaptability to diverse task complexities. In this paper, we introduce a novel formulation of AutoML as a variable-length optimization problem, allowing for dynamic adjustment of model architectures based on task requirements. To effectively navigate the expanded search space of variable-length models, we employ the Tree of Thoughts (ToT) method combined with Large Language Models (LLMs). This framework utilizes a sequential decision-making process, allowing models to be incrementally constructed by evaluating prior outcomes. Additionally, LLMs automatically generate the code corresponding to each decision, transforming model configurations into executable pipelines and reducing manual intervention. Our approach enhances efficiency by focusing on promising pathways and improves transparency by explicitly showcasing how each decision contributes to the overall optimization. Experiments conducted on diverse datasets, including OpenML and clinical tasks, demonstrate that our method outperforms traditional AutoML systems, delivering superior model performance and better adaptability across different task complexities."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"AutoML",
"Tree of Thought",
"LLM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/af8ac289dec6381edc6fad1798826accf1b57a39.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Formulating AutoML as a Variable-Length Optimization Problem: A Tree of Thought Approach with LLM-Driven Code Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ytvWZEiywp | EVINCE: Optimizing Adversarial LLM Dialogues via Conditional Statistics and Information Theory | main | Active | LLM;GAI;AGI | foundation or frontier models, including LLMs | 3;3;3;5;6 | 4;4;4;3;4 | 2;2;2;3;3 | 1;2;2;2;3 | 3;2;2;2;3 | 4 | 3.8 | 2.4 | 2 | 2.4 | -0.395285 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. For the dual entropy theory, why does the lower bound of $H(P_C)$ correspond to the robustness of the model? The entropy, as an uncertainty measure, is more relevant to exploration as in this work's taxonomy and does not seem to have a direct relationship with the robustness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The subject of robustness in LLMs, especially in healthcare applications, is important. \n2. The multi-round debate is an interesting way to ensemble multiple LLMs' capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces EVINCE, a dialogue framework to enhance Artificial General Intelligence (AGI) by leveraging adversarial debate among multiple instances of LLMs with a novel dual entropy theory. EVINCE works as a multi-agent system where large language models (LLMs) engage in structured dialogues to improve prediction accuracy, robustness, and reasoning capabilities. The framework employs information-theoretic metrics and conditional statistics to balance exploration and prior exploitation. The effectiveness is verified by an application in healthcare, particularly in improving disease diagnosis."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theory aspect of the paper is not well-grounded. For example, the dual entropy theory is not very meaningful to me. \n2. The empirical study is only limited to the healthcare application, and the generality of the framework remains unclear.\n3. The methodology of EVINCE is fairly simple and the idea of multiagent debate exists in previous literatures [1].\n\n[1] Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and\nreasoning in language models through multiagent debate, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- As mentioned in the weakness, it would be better if authors could provide a more detailed case study to analyse how the proposed EVINCE can mitigate bias, reduce hallucinations and improve reasoning in the healthcare areas. It is hard to directly understand how better the proposed EVINCE is only based on the demonstrated figures."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- a framework which includes iterative LLMs' debate, multiple evaluation metrics and weighted prediction based on quality scores.\n- authors demonstrate the performance of the proposed method in the healthcare application areas."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposed a dialogue framework which uses adversarial debate and dual entropy theory to achieve the AGI and enhance its proposed method EVINCE on versatility, adaptivity and reasoning capability, and further mitigate biases, reduce hallucinations and improve reasoning. Especially, the proposed EVINCE combines multiple metrics together to evaluate the entire framework, such as Wasserstein distance, relative entropy, critical thinking algorithm, correlation coefficients, mutual information, etc. They evaluate the proposed framework in the healthcare area to demonstrate the LLMs within this debate framework can lead to better diagnosis accuracy, lower entropy, lower Wasserstein distance and higher mutual information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Although authors argue that the proposed EVINCE method enhances versatility, adaptivity and reasoning capabilities in LLMs, it is hard to see whether the proposed EVINCE can actually achieve this target only based on the simple demonstrations of Figure 2 and Figure 3 which use different objective evaluation metrics. It is also unclear how this proposed method can mitigate biases, reduce hallucinations and improve reasoning without any case studies or examples to support in the main texts of the paper.\n- As this work mainly uses different closed-source LLMs (GPT-4, Claude, Gemini) to evaluate the performance of the proposed method, it is unclear how those closed-source LLMs can produce the probability distribution over a set of diseases, as this probability distribution is a key information to be used to calculate other evaluate metrics, such as Wasserstein distance, mutual information, etc."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How is the probability distribution obtained? For open-ended tasks, it is impossible to directly output the real distribution. For classification tasks as discussed in this paper, the text-based probability distribution directly from model output is not reliable and LLMs show strong miscalibration as discussed in many existing works [3].\n- The authors list many existing metrics in Section 2. This seems to be redundant and provides little insight into which one is the most important for moderating the LLM debate.\n- How does the framework work when only one LLM debates with itself like self-evaluation/self-refinement? This could be a good add-up to the experiments.\n\n\n[1] Du, Y., Li, S., Torralba, A., Tenenbaum, J. B., & Mordatch, I. (2023). Improving factuality and reasoning in language models through multiagent debate. arXiv.\n\n[2] Liang, T., He, Z., Jiao, W., Wang, X., Wang, Y., Wang, R., ... & Shi, S. (2023). Encouraging divergent thinking in large language models through multi-agent debate. arXiv.\n\n[3] Xiong, M., Hu, Z., Lu, X., Li, Y., Fu, J., He, J., & Hooi, B. (2023). Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. ICLR."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Multi-agent collaboration is a significant topic for modern multi-agent systems.\n- This work introduces information theory in the evaluation of dialogues between LLM agents."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper deals with multi-agent collaboration/debate. The authors propose the EVINCE framework to enhance versatility, adaptivity, and reasoning for LLMs via adversarial debate and information-theoretic metric evaluation. The experimental results on a healthcare dataset demonstrate the effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Missing discussion with other multi-agent debates in the literature such as [1,2]. There is a bunch of work on multi-agent debate. This makes the work less convincing and may mislead audiences.\n\n- I do not have a good sense of those claims on AGI in this paper, like ‘our work targets three core AGI characteristics: versatility, iterative adaptivity, and reasoning capability’, ‘The core strength of EVINCE in advancing towards AGI lies in their ability to enhance key AGI characteristics through multi-agent dialogues’. There should be more references in the introduction, otherwise, it would be an overclaim. I personally do not believe the proposed EVINCE with adversarial debate has much to do with AGI. \n\n- Besides, the experiments were only conducted on a healthcare dataset. Whether this proposed method can generalize to other tasks needs further verification. At least, for those open-ended tasks without a class set, the EVINCE seems not to be applicable. \n\n- The introduced information-theoretic metrics seem not to change LLM prediction but only provide an early-stop criterion for LLM debate iterations according to Fig 1. So the actual improvement might just come from the multi-agent discussion itself, having nothing with the information-theoretic metrics."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I may have missed the following:\n\n1. What is the average round of debates needed to achieve convergence? Figure 3 and 4 seem to suggest that three is an oracle number of rounds. Any intuitions for why?\n\n2. Was there a dominating LLM during debating, like in human debates?\n\n3. Would the per-LLM prediction accuracy continue to grow if we continually increase the number of agents, e.g., to 100 agents?\n\n4. What is the percentage of the 40 diseases that receive accuracy improvements? I am curious to know if the overall accuracy improvement comes from most classes or just a few classes."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Mitigating the limitation in LLM inferences caused by the “maximum likelihood” convention is crucial in enabling LLMs with accurate responses in real-life, human-oriented scenarios, such as diagnoses. \n\nThe proposed EVINCE debating algorithm is essentially human-like, exchanging predictions and reason sets between two equally competent LLMs in a round, and the process continues until each metric in a diverse set of information metrics, including CRIT argument quality, Wasserstein distance, MI, converges, thus facilitating adaptability through the iterations. It is also novel to use a built-in contentiousness level (debating temperature) in the prompt to help guide the debate.\n\nEVINCE demonstrates accuracy improvements in diagnoses on the Kaggle Disease Symptoms Description dataset (covering 40 diseases)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes EVINCE, a multi-LLM debating framework, which fosters information exchange between two LLMs via rounds of debate guided by various information metrics and a debating temperature, and thus effectively adapts LLM linguistic behaviors to complete tasks. EVINCE shows enhanced diagnostic accuracy and error corrections, demonstrating an important step towards improving reasoning abilities of LLMs on real-world data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The debate is limited to pairs of LLMs, which might reinforce a wrong “popular” prediction if both models get it wrong.\n\nIt is hinted that the problem can be alleviated through multi-round inter-LLM queries, so it might be interesting to give some examples and/or discuss how exactly these queries would help. In addition, would this problem be mitigated by introducing more models?\n\nSimilarly, will the iterative debating process cascade biases if both models are prone to generate biased answers?\n\nTypos: Figure 2a “GPT4 pairs Claude” used the same plot as Figure 2b “GPT4 pairs Gemini”."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What is the impact of the predictions on varying contentiousness and entropy values? Are there any recommended default settings for these in specific tasks, or is parameter tuning always necessary?\n\n2. Are there any findings of differences in performance between different LLM backbones?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The integration of conditional statistics and information theory into LLM adversarial dialogues is novel, especially with the dual entropy framework balancing exploration and prior adherence. The empirical results validate EVINCE’s improvements in diagnostic accuracy and reasoning quality."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes EVINCE, a dialogue framework for LLMs that leverages adversarial debate, information theory, and conditional statistics to enhance model performance in prediction accuracy, robustness, and adaptability. The framework introduces the concepts of Inclusiveness Exploration, Information Flow Dynamics, and Reasoning Quality and Coherence, providing a structured debate mechanism that balances diverse exploration with convergence. The work emphasizes the framework’s potential for AGI development by addressing common LLM limitations, such as hallucination and bias, through its structured interaction methodology."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Instead of disease diagnosis, expanding tests across more diverse domains would strengthen the claim of EVINCE’s general applicability.\n\n2. The quality evaluation of arguments is important for the EVINCE framework and dependent on the proposed CRIT scores, so more insight into how CRIT scores are generated and their potential variability across domains could improve reliability.\n\n3. Wasserstein distance and mutual information could be computationally intensive. How about comparing them to other similar but more efficient metrics and adding more discussion about this."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024evince,\ntitle={{EVINCE}: Optimizing Adversarial {LLM} Dialogues via Conditional Statistics and Information Theory},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ytvWZEiywp},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper introduces EVINCE (Entropy and Variation IN Conditional Exchanges), a dialogue framework advancing Artificial General Intelligence (AGI) by enhancing versatility, adaptivity, and reasoning in large language models (LLMs). Leveraging adversarial debate and a novel dual entropy theory, EVINCE improves prediction accuracy, robustness, and stability in LLMs by integrating statistical modeling, information theory, and machine learning to balance diverse perspective exploration with strong prior exploitation. The framework's effectiveness is demonstrated through consistent convergence of information-theoretic metrics, particularly improved mutual information, fostering productive LLM collaboration. We apply EVINCE to healthcare, showing improved disease diagnosis, and discuss its broader implications for decision-making across domains. This work provides theoretical foundations and empirical validation for EVINCE, paving the way for advancements in LLM collaboration and AGI development."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"GAI",
"AGI"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b5befdf254ebacefacb604d396862bb6738dbc6f.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "EVINCE: Optimizing Adversarial LLM Dialogues via Conditional Statistics and Information Theory"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yu1vqQqKkx | LICO: Large Language Models for In-Context Molecular Optimization | main | Active | large language models;molecular optimization;black-box optimization;foundation models;in-context learning | foundation or frontier models, including LLMs | 3;5;6;6 | 3;5;4;3 | 2;3;3;2 | 2;4;3;3 | 3;4;3;3 | 5 | 3.75 | 2.5 | 3 | 3.25 | 0.246183 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "**Suggestion:**\nI am willing to increase my rating if the authors report results on the *original* PMO-10K, which will ensure fair comparison with the prior work. Even if the results are not SOTA anymore, the paper can still be accepted, as the methods of the paper are interesting on their own. \n\nThe paper can have an additional table for PMO-1K, with either tuned baselines, or with a note that the baselines are not carefully tuned."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* the \"domain adaption\" trick to convert the general purpose text-based LLMs into domain-specific in-context learners using synthetic data.\n* detailed ablation studies. I really liked the analysis of the effect of the ratio of \"intrinsic\" and \"synthetic\" datasets. It gives an intuition on how to design synthetic datasets for other optimization tasks in other domains.\n* the \"scaling law\" chart (Fig. 3) is a good indicator of the scaling abilities of the proposed approach. Unfortunately there is a diversity of underlying models which makes the claims less significant. Hopefully there will be many sizes of similarly pretrained LLMs at some point (e.g. Llama 4 1B, 3B, 8B, etc.)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on one of the most promising directions of modern LLMs: LLM-enhanced optimization algorithms. It suggests a method to extend arbitrary pretrained LLMs with a couple of layers and train them to perform in-context learning for arbitrary functions. The idea is tested on molecular property prediction tasks, and the paper claims SOTA results on the famous PMO benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main weakness is that the SOTA claim on PMO is misleading. **The results reported in this paper are not really PMO.** There are two major differences. \n\na) PMO has 23 tasks, not 21. *jnk3* and *gsk3* are missing.\n\nb) PMO uses 10K budget of oracle calls (as mentioned by the authors).\n\nWhile a) does not make the comparison to prior art unfair, b) is critical. The main advantage of the PMO paper was that the authors performed a large-scale hyperparameter search for every method they tried, and even discovered hyperparameter values (e.g. for REINVENT) that were not covered even by the authors of the methods. So, all methods from PMO, and the subsequent methods like Genetic GFN have their hyperparameters tuned for the 10K budget. \n\nI agree with the authors that 1K budget can be more interesting, as 10K might feel saturated, but that's a different benchmark. I would suggest to name it something like **PMO-1K**, and then properly tune the hyperparameters of the baselines. I understand it's hard to do this in the review discussion period.\n\n** Details of the optimization algorithm**\nIt took me a few days to understand that the sentence \"At each iteration t, we generate a set of candidates\" in Section 4.3 does not mean that the candidates are generated without the LLM. As seen in the Appendix, the authors actually used a manually designed genetic algorithm for generating the candidates, and the LLM is only used for scoring them. This is a critical component of the algorithm and has to be presented well in the main part of the paper. \n\n**Three other papers that could be cited and discussed:**\n1. Optformer [1] is the earliest transformer to the best of my knowledge that used in-context learning for an optimization task. It did not use an initialization from a large pretrained model, and the data used there is not synthetic. Still, the concept is very close.\n2. 
Another recent approach that produced good scores on PMO is from the Chemlactica/Chemma models [2]. It has the genetic algorithm idea, very similar to the one described in the Appendix A.3. Chemlactica's scores on PMO are still a bit unfair, as it uses a lot more molecules in the pretraining phase (way beyond ZINC250k).\n3. MOLLEO [3] is another evolutionary algorithm that wraps an LLM. It has a few evaluations on \n\nA minor aspect that could be considered in the future iterations: use more realistic oracles, like molecular docking. Check [2] and [4] for new benchmarks.\n\n[1] Chen, Yutian, et al. \"Towards learning universal hyperparameter optimizers with transformers.\" Advances in Neural Information Processing Systems 35 (2022): 32053-32068.\n[2] Guevorguian, Philipp, et al. \"Small Molecule Optimization with Large Language Models.\" arXiv preprint arXiv:2407.18897 (2024).\n[3] Wang, Haorui, et al. \"Efficient evolutionary search over chemical space with large language models.\" arXiv preprint arXiv:2406.16976 (2024).\n[4] Guo, Jeff, and Philippe Schwaller. \"Saturn: Sample-efficient Generative Molecular Design using Memory Manipulation.\" arXiv preprint arXiv:2405.17066 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper studies an important task of adapting LLMs for molecular optimization tasks, which has not been studied extensively. \n\n2. The paper presents a novel approach by integrating LLMs with specialized layers to address black-box optimization problems in the molecular domain.\n\n3. The model achieves strong performance on the challenging PMO benchmark."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces LICO, a versatile model that enhances LLMs for black-box optimization, specifically in the molecular domain. LICO overcomes limitations related to domain-specific data scarcity and complex problem expression. It is trained to perform in-context predictions across diverse functions and, post-training, efficiently generalizes to new molecule properties through simple prompting. LICO achieves state-of-the-art PMO molecular optimization benchmark results, demonstrating its efficacy in complex scientific applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While this paper demonstrates the strong performance of LLMs, the analysis of their specific benefits remains limited. It would be valuable to understand which particular characteristics of LLMs contribute to the success of this molecular optimization task. For instance, how do different LLM architectures or configurations impact performance? Would domain-adaptive training on chemistry corpora further enhance results? Expanding on these points with additional explanation would strengthen the understanding of LLMs' effectiveness in this context.\n\n2. The study offers a limited exploration of prompt formats. Further investigation into how different prompt structures might influence model performance would be beneficial (e.g. Prompts that include more domain-specific chemistry terminology/Prompts that frame the task in different ways (e.g. as a prediction task vs. an optimization task/Testing different ways of structuring the input-output pairs within the prompt).\n\n3. It would be better to discuss several recent works for using LLMs for molecule optimization: a) DrugAssist: A Large Language Model for Molecule Optimization; b) Domain-Agnostic Molecular Generation with Self-feedback; c) A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N.A."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can the authors argue why the additional input embedding layers are necessary here? considering that the LLM always comes with an existing embedding layers.\n\n2. How the embedding and prediction layers are integrated into the existing LLM backbone?\n\n3. Maybe I missed this, but what language representation is used in LICO?\n\n4. What's the rationale behind using Tanimoto kernels for GPR synthetic data generation? Have the authors tried other types of kernels, and how do they perform?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-The overall approach shows good efficacy, as shown in the benchmark scores. The combo of training data and modeling recipe is interesting and novel for surrogate modeling, as far as I know (this needs cross-confirmation).\n\n-The scaling law analysis is interesting, indicating the power of scaling up the model size for better molecule optimization outcome.\n\n-The ablation study and result analysis is helpful: the analysis of surrogate modeling accuracy, vs. the GPR baseline, confirms the efficacy of proposed LLM surrogate modeling approach. The discussion of the limitation of PMO benchmark numbers are also helpful and shows scientific rigor."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a semi-synthetic training approach for LLM-based surrogate modeling, with a specific focus on molecular optimization. By integrating a pretrained language model with embedding and prediction layers and training on both \"intrinsic\" and synthetic data, it shows promises of outperforming gold-standard Gaussian process regressors. As a result, the molecular optimization algorithm coupled with this LLM-based surrogate shows better performances than presented baselines, ranging from RL algorithms to GFN etc. The authors also presented analysis on different model sizes and training approach with varying data mixture."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-The authors selectively show the numbers with a different sampling budget (1k instead of 10k in the original PMO setting) with a reason. Can they also present the numbers with different sampling budget in the supplementary information? That will confirm the generalization of the proposed approach.\n\n-The ablation and baseline should be more comprehensive: there are several concurrent works for LLM for molecular optimization[1,2], the authors should also add them as a baseline, if applicable, and thereafter discuss the efficacy of the proposed methods. For example, the MolLEO framework [1] (https://github.com/zoom-wang112358/MOLLEO) claims that they achieve superior performances than baselines on PMO as well, how does it compare with LICO? \n\n-Two more straightforward baselines I can come up with are: (1) drop the input embedding layers, simply extract the text embedding from prompts and train an MLP layer for the surrogate. (2) drop both the embedding and prediction layers, use the pretrained model to do in context learning only. This is similar to the LLAMBO work [3] that is cited in the paper. The authors should justify why they think the proposed approach is the most promising here.\n\n-The description of the experimental details is a bit lacking: e.g. how the embedding and prediction layers are integrated into the existing LLM backbone? Please also provide code and pretrained models so that the reviewers can reproduce the results.\n\n-The language models tested in this work, such as llama2 and qwen1.5, are slightly outdated. The authors should also add numbers on llama3/3.1, Qwen2 to further confirm their conclusions. \n\n-The related works section on LLM for molecular optimization is missing.\n\n[1] Efficient Evolutionary Search Over Chemical Space with Large Language Models, arXiv:2406.16976 . 
\n[2] ChatGPT-powered Conversational Drug Editing Using Retrieval and Domain Feedback, ICLR 2024.\n[3] https://github.com/tennisonliu/LLAMBO; Large Language Models to Enhance Bayesian Optimization, ICLR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- How do you obtain the uncertainties do you directly use the probabilities returned by the model? Are those probabilities well-calibrated? \n- The ordering of the examples seems important - what is the impact and how is this taken care of? \n- The approach is shown for a causal LM. But it seems that a masked approach (similar to https://www.nature.com/articles/s42256-023-00639-z) might be more effective in learning from the entire sequence"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The authors use an established benchmark and perform competitive performance on this benchmark \n- Bayesian optimization is one of the most practically useful applications of machine learning in chemistry \n- The paper is well-written and easy to follow \n- The method is novel (but it is based on combining multiple existing techniques) \n- There are useful ablations (e.g., training with various amounts of synthetic data)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors report an approach to using LLMs with additional embedding layers (similar to FPT) and text-encoding and semi-synthetic training (similar to ExPT) for molecular optimization. They report good performance on the PMO benchmark."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In some places, the paper seems to indicate that ICL for Bayesian Opt has not been done in chemistry. This, however, is not the case as the following two reports show: \n - https://arxiv.org/abs/2304.05341\n - https://www.researchgate.net/profile/Christoph-Voelker-4/publication/377722231_LLMs_can_Design_Sustainable_Concrete_-a_Systematic_Benchmark_re-submitted_version/links/65b408e934bbff5ba7c85ad8/LLMs-can-Design-Sustainable-Concrete-a-Systematic-Benchmark-re-submitted-version.pdf\n- A bit related is the use of LLM-derived embeddings for Bayesian Opt in chemistry. This, for example, has been reported in https://openreview.net/forum?id=A1RVn1m3J3"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce LICO, a general-purpose model that extends arbitrary base LLMs for black-box optimization, with a particular application to the molecular domain."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024lico,\ntitle={{LICO}: Large Language Models for In-Context Molecular Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yu1vqQqKkx},\nnote={under review}\n}"
},
"abstract": {
"value": "Optimizing black-box functions is a fundamental problem in science and engineering. To solve this problem, many approaches learn a surrogate function that estimates the underlying objective from limited historical evaluations. Large Language Models (LLMs), with their strong pattern-matching capabilities via pretraining on vast amounts of data, stand out as a potential candidate for surrogate modeling. However, directly prompting a pretrained language model to produce predictions is not feasible in many scientific domains due to the scarcity of domain-specific data in the pretraining corpora and the challenges of articulating complex problems in natural language. In this work, we introduce LICO, a general-purpose model that extends arbitrary base LLMs for black-box optimization, with a particular application to the molecular domain. To achieve this, we equip the language model with a separate embedding layer and prediction layer, and train the model to perform in-context predictions on a diverse set of functions defined over the domain. Once trained, LICO can generalize to unseen molecule properties simply via in-context prompting. LICO achieves state-of-the-art performance on PMO, a challenging molecular optimization benchmark comprising over 20 objective functions."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large language models",
"molecular optimization",
"black-box optimization",
"foundation models",
"in-context learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/4767bb6c4630be3dc7f48fc9206a573a0f64980d.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LICO: Large Language Models for In-Context Molecular Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yuuyPlywuO | Distilling an End-to-End Voice Assistant Without Instruction Training Data | main | Active | Multi-Modal LLMs;Voice Assistants;Distillation | foundation or frontier models, including LLMs | 3;3;5;6 | 4;4;4;4 | 2;3;3;3 | 2;2;1;4 | 3;3;2;4 | 4.25 | 4 | 2.75 | 2.25 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Have you compared your approach with an ASR -> LLM based pipeline? One would imagine it would score poorly on the emotion recognition tasks, but probably very high on others.\n- Have you considered diving deeper into what kind of data is requires to achieve specific results? For example, you could ablate the amount of commonvoice data required to achieve specific results; explore other datasets and/or languages and measure their impact on downstream tasks. This would provide a valuable contribution to the community."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Interesting approach which takes several pre-trained models (Llama, Whisper), connects them via a q-former based adapter and trains a small portion of the combined model to respond to audio inputs.\n- Scores relatively well on various benchmarks compared to some well known models.\n- The main contribution is showing that it is sufficient to teach an existing instruction-tuned model to understand audio through relatively light weight techniques which then allows the model to follow instructions via audio queries."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes DiVA, a voice assistant model that is able to follow both spoken and written instructions. It is trained via a dual distillation and alignment loss and shows relatively strong results on various benchmarks including head to head comparison with Qwen 2 Audio. The authors propose a \"q-former\" style injection of audio into the model, which is initialized from a Whiper model, and a text model initialized with Llama weights and left frozen which consumes the audio input. This model is then trained for relatively small amount of steps on CommonVoice data to learn input alignment as well as output alignment (as compared to text labels)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There is not a lot of novelty in this method besides choosing which data to train on. Q-former or in general attention based pooling (e.g. as in perceiver) is well known; L2 loss as a replacement for KLD has also been around for awhile (e.g. in Soundnet: Learning sound representations from unlabeled video.). Putting some of these pieces together to allow Llama models to process audio input has also been explored, for example in Nvidia's SpeechLLM or SALMONN (which is cited in this paper and apparently does not use Llama). In my opinion demonstrating that full SFT is not required to obtain an audio understanding model is interesting but not sufficient."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. How is speech generated from the model output?\n2. How is the quality of the output speech compared with the ground truth speech? \n3. it is stated that without the KL-divergence loss the speech output is incoherent, how was this measured?\n4. Are there any audio samples from the model?\n5. In what voices is the model able to generate speech?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is novel and easy to reproduce.\n2. Evaluations show strong results on a wide variety of tasks. \n3. The paper is clearly written. \n4. Authors use publicly available datasets and fine-tuned models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a method named DiVA for training a speechLLM without using any instruction data in the speech modality. To perform this, an LLM is adapted to enable speech input and output by first combining it with a Whisper based speech encoder, and then using two Novel loss term to train the new modules. The two losses are a cross-model alignment loss which aims to alight the speech and text modalities, and a (simplified) KL divergence loss on the distribution of text and speech outputs. The authors evaluate the paper on 3 groups of tasks, which are spoken question answering, speech classification and speech to speech translation. In all 3 task groups the method reaches state of the art results and surpassed the baseline methods. \n\nOverall, this paper shows a significant contribution and I recommend to accept it."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It is unclear how speech (waveform) is generated from the model output.\n2. The paper is missing some direct evaluation of the speech quality, such as MOS experiments. \n3. For the speech to speech translation task, the paper is lacking an evaluation of the speaker similarity and translation quality (ASR-BLEU) metrics, this may better inform the reader about the overall quality of the translations. \n4. While the paper claims to have created a voice assistant, there is no evaluation or clarification what a voice assistant means. It isn't clear that good performance on the tasks stated in the papers results in a voice assistant. This should be clarified or rephrase."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Questions:\n1. In Section 3.1, the authors state that they use the decoder weights of Whisper to initialize the Q-Former’s _cross attention_. However, in Section 3.2.1, it’s mentioned that the speech tokens are processed with _causal attention_ in Whisper's decoder. This seems contradictory and is confusing. Could the author clarify the architecture of the Q-Former adapter? \n2. In Section 3.2.1, the cross-modal token alignment loss is applied only to the last $N$ tokens. However, this choice is unclear since Q-Former’s output tokens lack causal relationships. Additionally, the number of speech tokens $Q$ is defined as a hyperparameter, so the assumption that $Q>N$ may not hold for all inputs during inference. Could the author provide further justification for these design choices and proofs for the assumption about token counts?\n3. In Section 3.2.1, the author claim that the additional $Q-N$ token provide information bandwidth for other information. However, I see no evidence to support this claim. Is there any statistic on the number of tokens that did not undergo alignment during training? If so, how does dropping these tokens affect the model's performance? \n4. In Section 3.2.2, the authors claim that the proposed $L_2$ loss can be computed more efficiently than KL divergence. Could additional evidence on training costs (e.g., computation time or resource usage) be provided to support this claim?\n5. The ablation study indicates that the token alignment loss aids the model in adhering to text instructions. However, this difficulty in following text instructions might stem from DiVA being trained solely on speech inputs without accompanying text instructions. Could incorporating text instructions before speech tokens during training improve performance? Furthermore, if this modification were implemented, would the token alignment loss still be necessary, or could it be reduced or omitted?\n\nTypos and minor mistakes:\n1. 
In Table 1, the base LLM for Qwen 2 Audio Instruct is listed as Qwen2 Instruct. However, according to the technical report, the correct base model should be Qwen-7B. \n2. In Equation (1), the subscripts in the summation notation should start from 1.\n3. In Section 3.2 line 247, \"L2\" should be $L_2$\n\n[1] Chu Y, Xu J, Yang Q, et al. Qwen2-audio technical report[J]. arXiv preprint arXiv:2407.10759, 2024."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The cross-modal alignment approach effectively retains general instruction-following abilities, addressing the common issue of forgetting in supervised fine-tuning (SFT).\n2. The Distilled Voice Assistant (DiVA) surpasses previous SOTA models, such as Qwen2-Audio, with significantly lower resource requirements.\n3. A qualitative user study shows DiVA’s strong alignment with human preferences in conversational quality.\n4. Ablation analysis confirms the distinct contributions of cross-modal token alignment and embedding distillation loss to DiVA’s performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel approach for training Speech LLMs in the absence of explicit instruction-following data. The authors introduce a method to transfer the instruction-following and conversational capabilities of text-based LLMs to speech-based models by leveraging only ASR data. Specifically, the proposed approach aligns input tokens using a cross-modal token alignment loss and output representations via embedding distillation loss, effectively bridging the modality gap between speech and text. Experimental results demonstrate that this method generalizes well to downstream tasks, including spoken QA and speech translation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Limited Novelty:** Although the authors suggest an \"alternative paradigm\" for aligning speech inputs to text-based LLMs, similar approaches involving distillation text responses for cross-modal alignment have already been explored in works like BLSP [1] and AudioChatLlama [2]. A more detailed discussion and comparison with these works would help clarify the unique contributions. \n2. **Potential Limitations of Q-Former as Modality Adapter:** DiVA employs a Q-Former as the modality adapter to convert Whisper outputs into tokens. However, recent research in Vision-Language Models suggests that Q-Former can introduce semantic deficiency and \"redundant double abstraction,\" making it less effective than simpler alternatives like MLP with average pooling [3]. Furthermore, the fixed number of query tokens restricts the model’s ability to process speech of varying lengths, potentially limiting its adaptability.\n3. **Unclear Basis for Performance Gains:** The claim that DiVA outperforms SOTA models like Qwen2-Audio may be misleading, given that DiVA builds on Llama-3—a stronger backbone than Qwen2-Audio’s Qwen-7B. This discrepancy makes it difficult to attribute performance gains solely to the proposed training method. Including the text-only performance of the backbone models (Llama-3 / Qwen-7B) or training DiVA on the same backbone as Qwen2-Audio could clarify the source of the observed improvements. \n\n\n\n[1] Wang C, Liao M, Huang Z, et al. Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing[J]. arXiv preprint arXiv:2309.00916, 2023.\n\n[2] Fathullah Y, Wu C, Lakomkin E, et al. AudioChatLlama: Towards General-Purpose Speech Abilities for LLMs[C]//Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024: 5522-5532.\n\n[3] Yao L, Li L, Ren S, et al. 
DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models[J]. arXiv preprint arXiv:2405.20985, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The authors should clarify the differences between this model and an ASR-LLM cascade, as mapping speech to text-like tokens questions both novelty and latency gains.\n- Single-token predictions seem insufficient to capture temporal dynamics, potentially reducing DiVA’s effectiveness for tasks that require detailed temporal context."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The use of ASR data alone to improve instruction-following behavior could offer cost benefits by reducing dependence on annotated instruction data.\n- DiVA’s evaluation across diverse benchmarks provides a quantitative assessment of the model’s instruction-following performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes the Distilled Voice Assistant (DiVA), a Speech Large Language Model (LLM) trained on ASR data alone to enhance instruction-following abilities through self-supervised distillation. The authors align DiVA’s responses with those of a text-only LLM to enable cross-modal transfer of instruction adherence. However, the approach does not present substantive methodological innovation, as it closely mirrors existing distillation and instruction-following techniques. Furthermore, the work critically lacks comparisons to similar method in this domain, failing to substantiate its claimed improvements over key baseline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Lack of Novelty: The paper’s distillation approach lacks originality and does not go beyond established self-supervised and cross-modal transfer methods. Its reliance on prior distillation techniques without meaningful innovation limits the work's impact.\n- Missing Comparisons to Key Similar Work: The paper does not compare DiVA to highly relevant models, such as those using cross-entropy on target sequences or alternative distillation methods, failing to clarify any advantages over standard methods in the field [1, 2, 3].\n- Insufficient Literature Survey: The paper omits numerous relevant works, resulting in a limited survey that neglects essential context for DiVA’s approach within the broader Speech LLM and instruction-following literature.\n- Paralinguistic Claims: Assertions about capturing paralinguistic features (e.g., sarcasm, emotion) are questionable given that DiVA’s speech embeddings are mapped to text-like embeddings, likely relying more on text semantics than true paralinguistic cues.\n\n\n\n1. Fathullah, Yassir, et al. \"AudioChatLlama: Towards General-Purpose Speech Abilities for LLMs.\" Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2024.\n2. Wang, Chen, et al. \"Blsp: Bootstrapping language-speech pre-training via behavior alignment of continuation writing.\" arXiv preprint arXiv:2309.00916 (2023).\n3. Wang, Chen, et al. \"BLSP-KD: Bootstrapping Language-Speech Pre-training via Knowledge Distillation.\" arXiv preprint arXiv:2405.19041 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We train a Speech LLM using context distillation, rather than external supervision, and show that this improves generalization."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024distilling,\ntitle={Distilling an End-to-End Voice Assistant Without Instruction Training Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yuuyPlywuO},\nnote={under review}\n}"
},
"abstract": {
"value": "Voice assistants, such as Siri and Google Assistant, typically model audio and text separately, resulting in lost speech information and increased complexity. Recent efforts to address this with end-to-end Speech Large Language Models (LLMs) trained with supervised finetuning (SFT) \n have led to models ``forgetting\" capabilities from text-only LLMs. Our work proposes an alternative paradigm for training Speech LLMs without instruction data, using the response of a text-only LLM to transcripts as self-supervision. Importantly, this process can be performed without annotated responses. We show that our Distilled Voice Assistant (DiVA) generalizes to Spoken Question Answering, Classification, and Translation. Furthermore, we show that DiVA better meets user preferences, achieving a 72\\% win rate compared with state-of-the-art models like Qwen 2 Audio, despite using $>$100x less training compute."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Modal LLMs",
"Voice Assistants",
"Distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c56743807484131ed40d1c97c7604cf01fb90b9d.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Distilling an End-to-End Voice Assistant Without Instruction Training Data"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yuymgwkjj1 | Correcting the Bias of Normalizing Flows by Synthetic Outliers for Improving Out-of-Distribution Detection | main | Active | OOD Detection;Normalizing Flow | applications to computer vision, audio, language, and other modalities | 3;5;5;5 | 3;5;4;4 | 2;3;3;2 | 1;2;2;2 | 3;3;3;2 | 4.5 | 4 | 2.5 | 1.75 | 2.75 | 0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How does the method handle OOD data that exhibit similar complexity levels to the ID data but differ in semantic content? Could the current approach potentially overlook such cases?\n\n- Is the choice of Gaussian blur for synthetic outlier generation extendable to other domains, or would you recommend domain-specific modifications for applications outside of vision and text?\n\n- Could you elaborate on any observed trade-offs between using synthetic outliers versus real outliers in terms of computational efficiency and detection accuracy?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper proposes a novel approach by using synthetic outliers to correct bias in normalizing flows for OOD detection, uniquely addressing data complexity issues. The softplus-based objective further enhances model stability, setting this method apart in improving robustness.\n\n- The experimental setup is thorough, including benchmarks across various datasets and both image and text modalities. The results consistently demonstrate the effectiveness of the proposed approach, particularly in improving AUROC and other detection metrics.\n\n- The paper is well-organized, and the methodology is explained in detail, making it easy for readers to understand the approach. The visualizations, such as complexity distributions and comparisons between standard and softplus-based training, add clarity to the results and methodology."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the problem of OOD detection, specifically focusing on correcting the likelihood bias in normalizing flows that affects their performance in OOD detection. The authors propose incorporating synthetic outliers during training and introduce an adversarial likelihood objective, utilizing the softplus function to improve model stability. Experiments on both benchmark datasets and high-dimensional real-world datasets show that the proposed method achieves significant improvements in OOD detection accuracy, yielding results comparable to models trained with limited real outlier data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The choice of synthetic outliers, particularly using Gaussian blur for images, may limit applicability to cases where blurring captures outlier characteristics. For complex datasets with nuanced OOD structures, more sophisticated synthetic outlier generation techniques may be needed.\n\n- While the softplus objective enhances stability, it introduces additional computational overhead, especially in high-dimensional data scenarios. The paper could explore potential optimizations for large-scale datasets.\n\n- The method’s performance is heavily tied to the complexity assumptions underlying the model. The complexity-adjusted scoring approach, while effective, might misinterpret data with atypical structures. Additional investigation into adapting complexity measures could broaden the method's applicability."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper claim to estimate Lipschitz Constant by taking the L_infinity norm of the gradient vector. How are the samples selected here? \n2. What is the slope and R^2/correlation of likelihood vs. complexity in Figure 3? It seems like the relationship between likelihood and complexity is rather weak."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The motivation and methodology design is concise and very written.\n2. The dataset used for evaluation is comprehensive: SVNH, LSUN, CelebA, CIFAR-10, CIFAR-100 for visual recognition; Chest X-ray, RealBlur, and KonIQ-10K for high-resolution imaging, and movie reviews, AG News, SST-2, and WikiText-2 for text.\n3. Detection performance shows improvement from the addition of synthetic outliers and complexity scoring."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aim to alleviate a well-known weakness of normalizing flows models towards detecting complexity for the task of out-of-distribution (OOD) detection. To accomplish this, the presented methodology is composed of three changes. First, the method applies gaussian blur to the original in-distribution (ID) images to generate synthetic OOD images. Second, the method add an additional softplus function to the maximum likelihood objective to prevent numerical instability. Lastly, the method construct a new OOD score that is the combination of the predicted likelihood and the complexity of the image. The paper reveals that these modification improved the OOD detection capabilities of the normalizing flow model considerably."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The in-distribution performance is completely missing in the evaluation. While the paper shows that adding synthetic OOD images via blurring improves detection performance, how does it affect the normalizing flow's ability to model the ID data distribution (i.e. FID between the generated images and ID images)?\n2. The performance gains from using synthetic outliers is very marginal compared to complexity score. It is unclear whether it is even necessary (especially considering 1.) because after adding the complexity score (shown in Table 3), the effect of adding synthetic data is very marginal (sometime even slightly worse than MLE in the case of SVNH). \n3. The significance and novelty with the addition of the complexity score is weak since it is underexplored in this work. The usage of JPEG2000 compression to calculate complexity limits its uses to only image data, and hence, the lack of evaluation of complexity scores on high-resolution image data and text data.\n4. The paper does not provide literature support of the claim \"synthetic outliers enhance the local Lipschitz constant, improving model stability and performance\" in line 432-437. This claim is counterintuitive: the Lipschitz constant provides an upper bound on the change in model prediction, which is looser with higher constant. i.e. prediction can be more sensitive towards small perturbations, and hence, less stable. Empirical evidence or theoretical justification is needed to justify this counterintuitive claim."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have incorporated questions in the weakness section."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well-written and easy to follow.\n- The paper introduces a simple yet effective approach to improve the OOD performance of NF models.\n- The proposed idea of generating low-complexity OOD to mitigate previously found drawbacks of NF models is well motivated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims to improve the out-of-distribution detection performance of Normalizing Flows. The paper is motivated by observation from a previous work that generative models, including normalizing flows, assign high likelihood to less complex OOD data, resulting in misclassifying such OOD data as ID data. To address the issue, the proposed method proposes to generate simple OOD data by simply applying Gaussian blur on ID data. Then, the model proposes to minimize the softplus function of likelihood of these synthetic OOD for training stability. As a result, the paper demonstrates non-trivial OOD detection performance improvement upon NF baselines."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed augmentation method is rather too simple that it raises several concerns.\n - There could be several cases in a real-world scenario, where a blurry image is not OOD but ID instead. How to handle such cases?\n - Would this work on more real-world datasets, such as MVTecAD and ShanghaiTech?\n\n- The paper lacks enough discussions on prior works. For example, there is another OOD/anomaly detection paper that extends NF framework design to learn to handle synthetic OOD [A]. Can authors provide discussions and quantitative comparisons against this method (with the same augmentation suggested by the authors or by this paper)?\n\n- In the paper, it says \"By fine-tuning the outlier synthesis probability through validation, we achieve an optimal balance between maximizing the likelihood of ID samples and minimizing the likelihood of OOD samples.\" What is a validation set here? What kind of data is in the validation set? What is its size?\n\n- I'm not really convinced that using synthetic OOD with low complexity is the key. If you generate enough and diverse OOD samples, wouldn't it be helpful even though it's complex? What about applying aug methods that make it more complex (cutmix, mixup, etc)? I believe these kinds of augmentation methods will lead to more complex OOD data but still improve the OOD performance, possibly more than simple Gaussin blurs. The experiments on this could strengthen the authors' claims.\n\n\n[A] SANFlow: Semantic-Aware Normalizing Flow for Anomaly Detection and Localization, NeurIPS 2023"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see the weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is well organized and easy to follow.\n- The author provides many formulations which help to understand the main pipeline of the manuscript.\n- The experiments demonstrate that synthetic outliers improve the detection performance of normalizing flow models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- The author finds that normalizing flow models tend to assign higher likelihood to inputs with lower complexity. Therefore, when the complexity of ID data is lower than that of OOD data, the model will detect OOD samples better. To correct this bias, the author generates synthetic outliers with lower complexity while forcing the model to assign lower likelihood to them."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Synthetic outliers are not a novel topic in OOD detection. Similar papers include NPOS (arXiv:2303.02966), VOS (arXiv:2202.01197), and SSOD (arXiv:2307.00519), to name a few. Besides, there are also many papers which generate synthetic OOD images to provide auxiliary supervision. Therefore, it seems that this manuscript is not creative enough.\n- Lack of comparisons with other OOD synthesis methods.\n- The performance is poor compared with SOTA techniques. The OpenOOD benchmark (https://github.com/Jingkang50/OpenOOD) collects the main OOD detection methods and reports their FPR95 and AUROC."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose leveraging synthetic outliers alongside a specialized training objective to enhance the OOD detection ability of normalizing flows for both images and texts."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024correcting,\ntitle={Correcting the Bias of Normalizing Flows by Synthetic Outliers for Improving Out-of-Distribution Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yuymgwkjj1},\nnote={under review}\n}"
},
"abstract": {
"value": "Out-of-distribution (OOD) detection is critical for ensuring the reliability and robustness of deep learning models in real-world applications. While normalizing flows have demonstrated impressive performance for various task of image OOD detection, recent findings suggest that they still encounter limitations and severe biases when applied to datasets with different statistics. Specifically, it has been observed that normalizing flow models tend to assign higher likelihoods to OOD samples with low complexity, which undermines the effectiveness of likelihood based OOD detection methods. In this paper, we explore the bias related to data complexity linked to normalizing flow models in OOD detection. We propose a novel method for bias correction by incorporating synthetic outliers during training, guiding the model to assign lower likelihoods to OOD samples. Additionally, we introduce a specialized training objective that leverages the softplus function for OOD data, ensuring a smooth and effective training process. Extensive experiments on benchmark and high-dimensional real-world datasets, including both images and texts, confirm that our proposed approach significantly enhances OOD detection accuracy, achieving performance comparable to models trained with a limited number of real outliers. Moreover, our method increases the Lipschitz constant, supporting the hypothesis presented in related literature."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"OOD Detection",
"Normalizing Flow"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/c9c1ff47c5f041761ba3628d1ba4a1f87821e426.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Correcting the Bias of Normalizing Flows by Synthetic Outliers for Improving Out-of-Distribution Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yvxpHbydFx | Understanding Diffusion-based Representation Learning via Low-Dimensional Modeling | main | Active | diffusion representation learning;representation learning;diffusion model;denoising auto-encoder | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;5;6 | 4;3;3;4 | 3;1;3;2 | 1;1;2;2 | 2;1;3;2 | 4.25 | 3.5 | 2.25 | 1.5 | 2 | 0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "as above"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Theorem 1 provides a direct connection from the noise level sigma_t to the empirical results. \n\n2. The paper is well-written and easy to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a low-rank decomposition theory of diffusion representation learning, which supports the empirical result that using a medium denoising level t achieves the best classification performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The results are not rigorous. Theorem 1 is obtained via many approximations: 1) x_theta(x_t, t) defined in Proposition 1 is replaced with x_theta(x_0, t) with some informal arguments. The gap between these two terms cannot be easily ignored. 2) Eq. 9 replaces Eq. 6 for simplicity. This should be stated in the theorem instead. \n\n2. The empirical results. Since the authors do not provide a new empirical method, I am wondering why no larger experiments are conducted, e.g., ImageNet with some pretrained diffusion models. Evidence on CIFAR alone is not strong enough to support all the arguments in this paper.\n\n3. The K-space. Do you use K=number_of_classes? Is that too strong an assumption? Humans classify objects into different classes, but it would be overly strong to assume that the K class subspaces are independent?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "It seems to me a fair amount of the theory is borrowed from [1] without being explicit about it in the text. \n\n\n[1] Wang, Peng, et al. \"Diffusion models learn low-dimensional distributions via subspace clustering.\" arXiv preprint arXiv:2409.02426 (2024)."
},
"flag_for_ethics_review": {
"value": [
"Yes, Research integrity issues (e.g., plagiarism, dual submission)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Why does it make sense to replace $x_t$ with $x_0$ in $x_\\theta(x_t, t)$?! What is $x_\\theta(x_0, t)$ computing? This ad-hoc replacement creates an inconsistency between the noise level on the input image and the noise variance given to the network. Now it is not clear what the output of the function $x_\\theta$ is after creating this inconsistency, and it is not clear how to interpret empirical results that involve this (which seems to be all the empirical results). \n2. What does Figure 1a convey? How did you measure *posterior accuracy* for CIFAR images? What is called posterior accuracy throughout the paper is equivalent to denoising performance. Indeed you can measure denoising performance, but how did you measure its accuracy for **real** images?! This is an unsolved problem, and it is not explained in the paper how the authors solved it. \n3. Figure 1b shows a result that has been around for a long time: denoising at higher noise levels results in the loss of details. It's not clear how this is part of their results, given that this has been known for more than 100 years and has also been rediscovered in the deep learning era again and again. There should be at least some citations."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. Important and interesting topic: The question of representation learning through diffusion models is interesting and worth further investigation. \n2. This paper could potentially be interesting with major improvements in writing and clarity. Overall, the paper needs more refinement to be ready for publishing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper makes an attempt to explain *representation quality* in a diffusion model using a simple model of Gaussian data over low dimensional subspaces. They observe a uni-modal trend in *representation quality* as a function of noise level, which indicates that the best representation of the clean input image can be obtained at a certain noise level. It seems that the representation quality is empirically measured by classification accuracy over test data using the internal representation at the bottleneck of a UNet. \n\nTo explain this observation, they 1) assume image data lies on a union of manifolds, where each manifold corresponds to a class. 2) They approximate manifolds with linear sub-spaces. 3) They assume each subspace of a class contains features that are relevant for high quality representation ($U_k$) and anything that is not relevant is noise and lies on orthogonal complement ($U^T_k$). Moreover, the assumption is that data is Gaussian distributed over the image subspaces. 4) They define Class Signal to Noise Ratio to measure the goodness of representation using denoised images (i.e. posterior mean estimate) and the class sub-spaces. 5) They show both of their measures for quality of representation (i.e. classification test accuracy from bottleneck and CSNR) have a uni-modal trend as a function of noise level. 6) Due to the theoretical analysis, for the low rank Gaussian model, the highest representation quality is a function of the ratio of added Gaussian noise level and the *noise* level of portion of image that lies in ($U_k^T$)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper is very poorly written. It's hard to follow the logic and it's not clear what the contributions are. It's also not clear what parts are borrowed from other works and what parts are novel. For example, a good portion of the assumptions, modeling, and toy experiment setup are borrowed from [1], which is not obvious from the text. \n2. The key concepts are not explained clearly. For example, what do you mean by \"representation quality\"? It's been referred to throughout the text and figures without any definition. How do you connect your two measures of quality to each other? Overall, many of the key concepts in the text are never defined! How can one assess the results without knowing what the experiments are trying to measure?! \n3. The linearity assumption (approximating each class's nonlinear manifold with a subspace) is obviously too simplistic for real data. As a result, the theoretical result, expressed in Theorem 1, does not extend beyond the simple low-rank Gaussian model. \n4. Even within the union of low-rank Gaussians model, a major drawback is the arbitrary split of $U_k$ and $U_k^T$. How does one decide what is relevant to the representation and what is *noise* or irrelevant perturbation? To answer this question and correctly decide where the image subspace is, one needs to solve the representation learning problem first. To decide what information is relevant to the representation, one needs to define the task the representation will be used for. Features that are noise with respect to one task are relevant information w.r.t. another task. For example: for a coarse-level classification task more information is irrelevant, hence $U_k$ is lower rank. That means the mode in your uni-modal plots will be at higher noise levels. For a more granular classification task, more information about details is needed, so the mode will be at lower noise levels. Thus, the whole notion of \"representation quality\" as a function of noise level is not well-defined. \n5. The authors are using two separate notions of representation: the ave-pooling responses in the bottleneck layer, and the strength of projections of the posterior mean onto the class subspace (CSNR). Merely showing that both of these are uni-modal as a function of noise level doesn't prove anything! Especially given that the maximums happen at different noise levels! \n\n[1] Wang, Peng, et al. \"Diffusion models learn low-dimensional distributions via subspace clustering.\" arXiv preprint arXiv:2409.02426 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. For the image denoising case, I'm assuming the work is based on the idea of latent diffusion. So is x_0 the clean data embedded into the latent space using a VAE? Or is it the raw image? \n2. How is U_k calculated for CIFAR-10? Is it calculated by doing PCA on all examples (in the VAE latent space) with class label k?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It relates multi-scale score matching to posterior estimation to representation learning.\n2. It makes (realistic?) assumptions on the data so that the connection in 1 can be studied and analyzed theoretically.\n3. The theoretical analysis under the simple data assumption is thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "I believe the authors try to study the structure that emerges from training a multi-scale score matching objective. Given a noise scale, for the model to achieve score matching (denoising), it must learn features at that scale. For example, for score matching on natural images, if the noise level is low, the model just needs to learn low-level (texture) features to denoise. But when the noise level is high, the model needs to learn high-level concepts and also has to guess what's in the image. The authors show that the different scales of features learnt by multi-scale score matching can be used for semantic tasks like image classification, as well as for posterior estimation of a mixture of Gaussians. They show that the performance on these tasks under different noise scales forms an inverted-U-shaped curve. Under the strict assumption of a mixture of Gaussians, the authors provide an analysis of the posterior estimation power at different noise levels, and compare the learnt model's performance with the optimal one."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I feel like the paper is missing a lot of detail; I have to make many assumptions beyond the paper.\n2. I wish I saw a cleaner method. Despite the extensive experiments and theoretical analysis under the strict \"mixture of low-rank Gaussians\" setting, I don't really see the key message. What I get is that different levels of features learnt at different scales can be used for posterior estimation of a mixture of Gaussians, as well as for classification. How are these two related specifically? One statement is that they both have the inverted-U-shaped curve. But is that it? I hope the authors can make this more clear. \n3. Figure 4b tells us the learnt multi-scale score matching function (with a neural network?) behaves differently from the optimal one on a simple dataset. I think this is a really interesting result. But I also think the authors should provide some insight on why this is the case, because it might reveal the nature of bias in neural networks. For example, in [1], they find a CNN-based denoiser cannot learn the optimal solution for simple toy examples like global planar waves because it has spatial locality as a bias. \n\n[1] Z. Kadkhodaie, et al., 2023, Generalization in diffusion models arises from geometry-adaptive harmonic representation"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- As indicated in line 192, in a setup with clean images being used for classification, the diffusion model is fed a clean image and timestep t, “where t serves solely as an indicator of the noise level for diffusion model to adopt during feature extraction.” Isn’t this setup out of distribution for the model, as the diffusion model is never trained on a clean image or conditioned on a timestep equal to zero?\n- Why limit the evaluated features to only those encoded in the bottleneck layer of the UNet architecture? A lot of information relevant to classification (such as high-frequency features) might be passed through the residual connections."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- In Section 4.1, the authors present an interesting and insightful analysis of the effect of diffusion training with multiple levels of noise passed through the same model on the learned representations. It is intriguing to see that diffusion models have relatively steady representations across different denoising timesteps."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work evaluates the potential of representation learning with diffusion models. The authors evaluate the connection between the quality of posterior estimation and the quality of learned representations, extending this analysis to different scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Several results presented in this work are not novel. For example, the evaluation presented in Figure 1 was already discussed in [1] and [2]. As noted by the authors, the fact that the representation learning dynamic captures a “fine-to-coarse” shift with an increased amount of noise was already noted in the DAE works [3] and [4].\n- I fail to see the significance of the contribution “Linking posterior estimation ability of diffusion models to representation learning”. Isn’t this observation a straightforward implication of the fact that samples at the late diffusion steps are, by definition, more noisy, which results in lower quality of the posterior estimation and higher entropy when analyzing predictions from the linear probe?\n- The “representation learning” capability evaluation is limited to the simple linear probing task. This significantly limits the impact of the presented results. Extending the analysis with other SSL tasks would strengthen the submission.\n- Editorial: I’m lost in the presentation of this work, as in Sec. 2.3 it is written that “since diffusion models tend to memorize the training data instead of learning underlying data distribution when the training dataset is small (Zhang et al., 2023), we focus on the case where sufficient training data is available throughout our analysis in Section 3”, yet Figure 2 on the second page seems to be a reproduction of work by Zhang et al. The analysis related to Figure 2 is presented on the last page.\n\n\n[1] Xiang, Weilai, et al. \"Denoising diffusion autoencoders are unified self-supervised learners.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\n[2] Deja, Kamil, Tomasz Trzciński, and Jakub M. Tomczak. \"Learning data representations with joint diffusion models.\" Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Cham: Springer Nature Switzerland, 2023.\n[3] Choi, Jooyoung, et al. \"Perception prioritized training of diffusion models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n[4] Wang, Binxu, and John J. Vastola. \"Diffusion models generate images like painters: an analytical theory of outline first, details later.\" arXiv preprint arXiv:2303.02490 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Diffusion-based Representation Learning via Low-Dimensional Modeling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yvxpHbydFx},\nnote={under review}\n}"
},
"abstract": {
"value": "This work addresses the critical question of why and when diffusion models, despite their generative design, are capable of learning high-quality representations in a self-supervised manner. We hypothesize that diffusion models excel in representation learning due to their ability to learn the low-dimensional distributions of image datasets via optimizing a noise-controlled denoising objective. Our empirical results support this hypothesis, indicating that variations in the representation learning performance of diffusion models across noise levels are closely linked to the quality of the corresponding posterior estimation. Grounded on this observation, we offer theoretical insights into the unimodal representation dynamics of diffusion models as noise scales vary, demonstrating how they effectively learn meaningful representations through the denoising process. We also highlight the impact of the inherent parameter-sharing mechanism in diffusion models, which accounts for their advantages over traditional denoising auto-encoders in representation learning."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"diffusion representation learning",
"representation learning",
"diffusion model",
"denoising auto-encoder"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7f49da494398acfdf2a07df1fb685e2792af5773.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Understanding Diffusion-based Representation Learning via Low-Dimensional Modeling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ywFOSIT9ik | Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations | main | Active | zeroth-order optimization;SGD;convergence analysis | optimization | 5;5;5;6;8 | 3;3;3;2;5 | 2;3;3;3;3 | 2;3;3;3;3 | 2;4;3;3;3 | 5.8 | 3.2 | 2.8 | 2.8 | 3 | 0.735147 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) P3, Theorem 2.2: the inequalities are clear, but it is not clear to me what you can deduce from them, as they give only a lower- and upper-bound on the quantity you want to minimize. Is it true that the variance is minimal if and only if $\\rho_V=0$? Or is $\\rho_V=0$ only a sufficient condition? Are conditions (a) and (b) sufficient conditions to get $\\rho_V=0$? Apparently not, since from (b) you cannot get $\\rho_V=0$. To me, neither the logic of the reasoning nor the statement is completely clear. This is true especially in connection with the comment on Gaussian Smoothing at P4: does the fact that $\\rho_V>0$ imply that Gaussian Smoothing does not achieve minimal variance? From the inequalities of the Theorem you just know that the variance is lower-bounded and upper-bounded by two different quantities...\n\n2) P6, DAP: for the unknown gradient $\\nabla f(x)$, you can apply a small batch of perturbations to obtain an estimated gradient. Second-level question: with which distribution do you sample $v$ for the estimator of the gradient used in DAP?\n\n\nMore bibliography:\n\n- Cai, Mckenzie, Yin, Zhang: Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling; SIAOPT 2022\n\n- Cai, Mckenzie, Yin, Zhang: A One-bit, Comparison-Based Gradient Estimator; ACHA 2022\n\n- Rando, Molinari, Villa, Rosasco: Stochastic Zeroth order Descent with Structured Directions; COAP 2024\n\n- The paper [Rando, Molinari, Villa, Rosasco: An Optimal Structured Zeroth-order Algorithm for Non-smooth Optimization] has been published in NeurIPS 2023\n\n- Akhavan, Chzhen, Pontil, B. Tsybakov: A gradient estimator via L1-randomization for online zero-order optimization with two point feedback; NeurIPS 2022\n\n***Minor comments:***\n\nP2: the formula in Contribution 1 is not correct; it should be $\\nabla f(x;\\xi)$ without the $v$\n\nP2: explain why the constraint $\\mathbb{E} vv^T = \\delta I$ gives the unbiasedness of the gradient approximation (this is true only for $\\mu \\to 0$); why do you say it is a linear constraint?\n\nP3, L124: the first line of the equation is wrong; in the second line, where is the second-order term with $M_c(v)$? Explain better the approximation you make...\n\nP4, DAP: highlight that, in the practice of zeroth-order optimization, this condition cannot be imposed as stated, since $\\nabla f(x)$ is not available\n\nP4, L187: $a^Tv=\\pm \\sqrt{\\delta}\\|a\\|$\n\nP4, L206: $\\hat{\\nabla}f(x;\\xi)$ has not been defined, but only $\\hat{\\nabla}f(x;\\xi, v)$\n\nP4, L210: comment that the quantity $\\min_t \\|\\nabla f(x_t)\\|$ is not something you can check in the practice of zeroth-order optimization; in particular, you don't know which one is the best iterate according to the criterion $\\|\\nabla f(x_t)\\|$\n\nP4, L210: $f^*$ not defined\n\nP5, L222: $\\mathbb{E}_{\\xi} f^*_{\\xi}$ not defined\n\nP5, L226: say that $c$ is the strong-convexity constant (it appears only in the definition in the appendix)\n\nP5, L238: comment (a) is not clear to me\n\nP5, Corollary 3.2: \"If choosing\" is not correct (used twice); the bound $\\leq \\varepsilon$ has constants involved that are omitted\n\nP6, Fig. 1: the caption is not clear to me"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "***Main comments:***\nThe review of the literature is complete, the problem is meaningful and relevant, the theoretical results are significant, the proofs are correct. The paper is interesting and well-written, the presentation is both concise and comprehensible."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "***Summary:***\nIn the paper the authors study (sufficient?) conditions on the distribution of the sampling directions in order to build a two-point finite difference estimator of the gradient that is at the same time unbiased and with minimal variance. Then they state convergence results for SGD using this kind of estimators (in the non-convex and stronly-convex case), showing that they achieve the optimal complexity in terms of dimension. Finally they focus on DAP (directionally aligned perturbation), a new estimator which satisfies unbiasedness and minimal variance. They design an algorithm to implement it and show promising numerical experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper is well written but there are some points with imprecise statements. See the comments in the Questions box. \n\n1) The stochastic optimization setting (in $\\xi$) is not needed in the first part of the paper but only for SGD (Section 3), and it creates confusion.\n\n2) P3, Theorem 2.2: the inequalities are clear, but it is not clear to me what you can deduce from them, as they give only a lower- and upper-bound on the quantity you want to minimize."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Could you provide the explanation related to the reviewer's questions in the weakness?\n\n- Is (a) and (b) in Theorem 2.2 sufficient conditions for achieving the minimum variance, or are they also necessary conditions?\n\n- It seems the authors claim in the remark about (a) of Theorem 3.1 that a small $\\delta$ leads to more gradient updates. However, it appears that Theorem 3.1 provides an upper bound result, so it may not serve as logical evidence for your discussion. Or do you have a lower bound result as well?\n\n\n**Minor questions:**\n- Is the definition of parameters in (a) and (b) of Theorem 3.1 the same? Is there a reason you are repeating them?\n- Did you try to write $< \\infty$ in line 215?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "To the best of the reviewer's understanding, the core contributions of this paper are two parts:\n- This paper formalized the problem of characterizing the class of optimal distributions of random perturbations in a zeroth-order estimator to minimize its variance and provided sufficient conditions.\n- Based on the first contribution, they conceptualize the novel condition which they name DAPs, provide a way to use it practically, and demonstrate the effectiveness by experiment.\n\nThe reviewer thinks these are meaningful contributions. The paper also shows that the complexity of SGD with two-point gradient estimation achieves the best-known sample complexity when the perturbation distribution $V$ is chosen to achieve the minimum variance.\n\nAlso, the reviewer thinks the writing of the paper is overall nice."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies the two-point zeroth-order gradient estimator, specifically focusing on the problem of identifying the optimal distribution of random perturbations that minimizes the estimator's variance. In Section 1, they briefly introduce the preliminary concepts and raise the motivating questions of the work. They first question whether it would be possible to determine the class of optimal distributions of random perturbations in a zeroth-order estimator to minimize its variance, and provide Theorem 2.2 as the answer. In Theorem 2.2, they introduce two sufficient conditions for the question, which are constant magnitude perturbations and a novel condition called directionally aligned perturbations (DAPs). In Section 4, they take a closer look at DAPs and provide a sampling strategy for practical implementation. Finally, in Section 5, they demonstrate the practical effectiveness of DAPs through two experimental setups with a synthetic example and language model optimization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This can be closer to a question than a weakness, however, the reviewer is confused about the underlying logic and contribution of Section 3. The reviewer may be missing some elementary points, but they are still confused about the sufficient and necessary condition for minimum variance.\n\n- The most confusing point was the relation between the fourth-order moment. In Theorem 2.2, as addressed in the Remark, it seems (2) has minimum variance when equality holds. But is the converse also true? It seems the terms related to the fourth-order moment only appear in the upper bound. If the converse doesn't hold, isn't the finiteness of the fourth-order moment neither a sufficient nor a necessary condition for achieving minimum variance? In this context, is the finiteness of the fourth-order moment an additional assumption (other than minimum variance) imposed to obtain the results in Section 3?\n \n- The reviewer thinks the observation about the influence of the fourth-order moment addressed in the Remark of Theorem 3.1 can be meaningful by itself. However, the reviewer thought the main focus of the paper was the conditions for minimum variance and specifically DAPs. Yet, the first half of Section 3 seems to be just a convergence analysis of SGD, with the assumption of the finiteness of the fourth-order moment. According to the authors' explanation, the proof heavily relies on arguments considered in prior works.\n \n- In short, what is the role of Theorem 3.1 in the overall context of the paper? It seems Theorem 2.2 is used in the proof; is it crucial? The reviewer thinks it would be better to address a quick overview of Section 3 in the overall context of the paper at the beginning of the section. The reviewer felt lost when first reading Section 3."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper explores the two-point zeroth-order gradient estimator and identify the optimal distribution of random perturbations that minimizes the estimator's variance. This paper formulates it as a constrained functional optimization problem over the space of perturbation distributions. This paper reveals that optimal perturbations either maintain a fixed length or align directionally with the true gradient. While existing research has largely focused on fixed-length perturbations, the potential advantages of directional alignment have been overlooked. To address this gap, this paper delves into the theoretical and empirical properties of the directionally aligned perturbation (DAP) scheme, which adaptively offers higher accuracy along critical directions. Additionally, this paper provides a convergence analysis for stochastic gradient descent using $\\delta$-unbiased random perturbations, extending optimal complexity bounds to a wider range of perturbations. Through empirical evaluations on both synthetic problems and practical tasks, we demonstrate that DAPs outperform traditional methods under specific conditions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the two-point zeroth-order gradient estimator and identify the optimal distribution of random perturbations that minimizes the estimator's variance. This paper formulates it as a constrained functional optimization problem over the space of perturbation distributions. This paper reveals that optimal perturbations either maintain a fixed length or align directionally with the true gradient. While existing research has largely focused on fixed-length perturbations, the potential advantages of directional alignment have been overlooked. To address this gap, this paper delves into the theoretical and empirical properties of the directionally aligned perturbation (DAP) scheme, which adaptively offers higher accuracy along critical directions. Additionally, this paper provides a convergence analysis for stochastic gradient descent using $\\delta$-unbiased random perturbations, extending optimal complexity bounds to a wider range of perturbations. Through empirical evaluations on both synthetic problems and practical tasks, we demonstrate that DAPs outperform traditional methods under specific conditions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I don't think the study over problem Eq. (3) is meaningful. \nUnder the theory of this paper, the random coordinate is better than Gaussian random vector.\nHowever, just as pointed out in Theorem 1 of \"Fine-tuning language models with just forward passes'', the Gaussian random vector can provide a \"Dimension-Free Rate''.\nUnfortunately, the random coordinate can not guarantee this ``Dimension-Free Rate'' even it is good under the thooery of this paper.\n\nThe experiments in this paper do not show significant advantages of DAP over other estimations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem studied is of significant interest to the optimization community. And it shows two classes of random perturbations that give minimum variance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the zeroth-order gradient estimator and identifies the optimal distribution of random perturbations that minimize the gradient estimator's variance.\nThe problem is formulated as a constrained optimization problem. And it is shown that the optimal perturbations maintain a fixed length or align directionally with true gradient.\nThese gives two classes of random perturbations that achieve the minimum variance : Constant magnitude perturbations and Directionally aligned perturbations.\nConvergence of SGD with both these classes of perturbations are proved. And some experimental results are shown."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In the main theorem (Theorem 2.2), what about only if part ? Does it happen that equality holds in theorem only if the given conditions (a) or (b) is satisfied ?\nA discussion of this would be interesting.\nThe experimental results are weak. Only one practical application of language model optimization is given.\nNo comparisons with other constant magnitude perturbations: random coordinate/direction sampling and Rademacher distribution are shown.\nWhy DAP perturbations give better performance than uniform perturbation in experiments is not clear. As the theorem says that theoretically both give minimum variance."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Have you analyzed the impact of the Taylor approximation in your analysis? \n2. Have you analyzed the impact of the difference between theoretical analysis and estimating gradients in the wild? I acknowledge the experiment for the easy functions, where it is exact. \n3. Why would the assumption of isotropic noise $\\mathbb{E}[vv^\\top] = \\delta I_d$ be valuable in terms of perturbations? Why not something else?\n\n\n\nI am very open to changing my score once the issues I have raised are addressed!"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper presents a nice idea to describe a framework. The optimization problem set up is easily solvable, making the narrative flow. The result is approachable, clear, allowing to derive convergence rates that are expressive of all the dependencies on the various parameters. The plot converges towards \"defending\" these directionally aligned perturbations, which are of interest even simply because the standard choice is the other optimal one. By passing through simple examples, they are also able to experimentally verify their formulation. Overall a very standard but structured paper in optimization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the author(s) propose a new perspective on zeroth-order methods, focusing on an optimization problem that minimizes the variance. After carefully crafting the problem instance, based on constraints that are standard in literature, some comments on classic choices of fixed-length perturbations are made. This is the right motivation to proceed and advocate for \"directionally aligned perturbations\", the other \"optimal choice\". In particular, the theorems derived in this branch for convergence of SGD under standard assumptions make explicit the role of higher moments of the distribution, while still recovering nice to parse upper bounds. To conclude, the author(s) come back to the directionally aligned perturbations and show some experiments for synthetic datasets and a Language task. The practical feasibility of their optimization problem is also discussed."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See also the questions below. \n\nThe main weakness is about wording: you claim you identify the optimal distribution in the abstract, while indeed you do not. I feel like the sentence should be adjusted. You find a sufficient condition for optimality subject to a construction of isotropic perturbations (you $\\delta$-unbiasedness) and a Taylor approximation. This to me is not identifying an optimal distribution, nor identifying anything optimal at all, if not only wrt your specific criterion, which then you would have to specify anyway. \n\nI feel like the other main weakness is experimental validation. I believe you have brought the best results you could find, and still, the improvement is marginal. For example, figure $3$ right is really an improvement in machine precision. Figure $4$ is more promising. I also acknowledge that we should not care about SOTA but about understanding, so this is a weakness that is not suggesting any further comment. \n\nThe other point is that you do not discuss the accumulation of errors when you (i)\n perform the Taylor approximation and (ii) perform the gradient estimation. I believe the two should be theoretically explored further to understand in restricted settings how much is lost wrt the convergence theorems. \n\nLastly, no limitations are discussed. \n\n\n###### Typos\nPlease do not count these as weaknesses. \n- You never define $\\nabla f(x; \\xi, v)$, (e.g. line 085), since the notation for derivatives variable, I would define it. \n- \"In the mean while\" (line 166), meanwhile; \n- The numbering of lists as $(1), (2)$ etc resembles a lot equations, not a typo but a potential source of anti-dynamic reading; \n- line 215, you say $< 0$, probably meant to be finite. \n- line 264 \"to a specific types of...\"\n- line 269 \"achiving\"\n- \"classitcal\" (line 472)\n- \"it solves the projection is...\" (line 864)\n- corollary 3.2 (b) the sentence is not correct logically. 
If we add the assumption by choosing further specific scalings we get the result, right?"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper discusses the minimum-variance condition for two-point zeroth-order gradient estimators and proposes a new random perturbation."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ywFOSIT9ik},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we explore the two-point zeroth-order gradient estimator and identify the optimal distribution of random perturbations that minimizes the estimator's variance. We formulate it as a constrained functional optimization problem over the space of perturbation distributions. Our findings reveal that optimal perturbations either maintain a fixed length or align directionally with the true gradient. While existing research has largely focused on fixed-length perturbations, the potential advantages of directional alignment have been overlooked. To address this gap, we delve into the theoretical and empirical properties of the directionally aligned perturbation (DAP) scheme, which adaptively offers higher accuracy along critical directions. Additionally, we provide a convergence analysis for stochastic gradient descent using $\\delta$-unbiased random perturbations, extending optimal complexity bounds to a wider range of perturbations. Through empirical evaluations on both synthetic problems and practical tasks, we demonstrate that DAPs outperform traditional methods under specific conditions."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"zeroth-order optimization",
"SGD",
"convergence analysis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3abb2fa2dc43cc485a27f49e948eab8b2fe80672.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/8e1cb696726193529422dc2a31edfe985aadf789.zip"
},
"title": {
"value": "Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ywHOnGOLb1 | A Competitive-Cooperative Actor-critic Framework for Reinforcement Learning | main | Active | Deep reinforcement learning; Double-actor framework; Competition and Cooperation | reinforcement learning | 3;5;6 | 3;4;5 | 2;3;3 | 2;3;3 | 2;3;3 | 4.666667 | 4 | 2.666667 | 2.666667 | 2.666667 | 0.981981 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tExtension to Complex Tasks: Could the authors elaborate on the applicability of the competitive-cooperative framework in more complex, non-simulation tasks?\n2.\tTrade-offs in Collaborative Loss Implementations: What specific use cases would benefit more from each implementation of the collaborative loss? Are there trade-offs between performance and computational complexity that should be considered?\n3.\tExploration and Exploitation Balance: How does the framework affect the exploration-exploitation balance compared to other multi-actor DRL methods, and how would this influence application to environments with sparse rewards?\n4.\tImpact of Q-value Discrepancy Minimization: Given the role of Q-value minimization in aligning actor policies, are there specific scenarios or tasks where this could inadvertently limit exploration or policy diversity?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tGenerality and Flexibility: The proposed framework is designed to be generic and can be seamlessly integrated into existing double-actor DRL methods as well as extended to multi-critic DRL methods. This broad applicability enhances its potential impact on the DRL community.\n2.\tImproved Policy Performance: By promoting mutual imitation among actors, the framework addresses the issue of independent exploration in existing methods, leading to the development of better policies and improved performance.\n3.\tConcrete Implementations: The paper provides two specific implementations of the framework, demonstrating its flexibility and practical applicability.\n4.\tComprehensive Experiments: The authors conduct extensive experiments on four MuJoCo tasks and evaluate their method against nine state-of-the-art DRL algorithms. The results show consistent performance improvements, supporting the effectiveness of their approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses two significant challenges in Deep Reinforcement Learning (DRL): enhancing exploration capabilities and improving the accuracy of Q-value estimation. The authors observe that existing double-actor DRL methods, while promising, suffer from a lack of cooperation between the two actors, leading to suboptimal policies. To mitigate this, they propose a competitive-cooperative actor-critic framework that encourages mutual learning among actors by minimizing the differences in their output actions and the discrepancies in the Q-values estimated by their respective critics. They present two specific implementations of their method and extend it to multi-critic DRL methods. The effectiveness of their approach is demonstrated through extensive experiments on four MuJoCo tasks, where it enhances the performance of nine state-of-the-art DRL methods in terms of return."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tExperimental Scope: The experiments are restricted to MuJoCo tasks. Expanding validation to environments with more complex and dynamic variables, or to real-world tasks, would provide stronger evidence for the framework’s generalizability and practical relevance.\n2.\tLimited Analysis of Implementation Trade-offs: While both implementations of collaborative loss show performance improvements, the paper could benefit from a more nuanced discussion on the trade-offs between complexity and performance across various use cases."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Clarification:\n- The “collaborative loss” tries to minimise the difference between policies and q-function estimates (compared to existing work which generally advocates for improved diversity). I feel that, in effect, the algorithm should behave more similar to a normal “single-model” algorithm, like TD3. Doesn’t this go against the spirit of double actor-learning?\n- 4.2 (2) also indicates that the critic are optimised to predict the same target value Q-hat, which means that the predictions are even more similar. Can you provide some plot or analysis that showcases how different policy predictions and critic predictions actually are? I would really like to see a comparison to non double-actor methods."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Strengths:\n- The overall writing is fine; I could follow the paper well\n- The method is simple and, indeed, seems widely applicable to many existing algorithms\n- The method is described in reasonable detail, which allows to understand the methodology well\n- Many baseline algorithms are used for comparison"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a training-method to improve double actor reinforcement learning. They propose to use an additional loss that minimizes the discrepancy between actors and ciritic predictions during training. The authors present two variants of this \"collaborative loss\". The authors show how this loss can be incorporated into the implementation of a wide range of existing double-actor and multi-critic RL algorithms. They highlight benchmark results for four Mujoco environments, and show the developed approach matches or outperforms existing RL algorithms without the collaborative loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Weaknesses:\n- There seems to exist quite a bit of work that uses some sort of interaction/regularization between multiple actors (e.g., Li2023 et al.: Keep Various Trajectories: Promoting Exploration of Ensemble Policies in Continuous Control), indeed going beyond two actor-critic pairs, so I do not feel the statement “Existing double-actor DRL methods involve each actor-critic pair exploring independently” is warranted\n- Given the close results compared to baseline algorithms, the limited choice of environments (just 4 Mujoco environments) is a bit lacking; showcasing environments with weak performance of other methods, and highlighting the performance the proposed method would have been more convincing\n- Some of the reported baseline benchmark results seem off, e.g., CAL reports > 5000 reward for Ant in their implementation at 1M steps (Figure 2, Li et al., Simultaneous Double Q-learning…), which deviates very significantly from the reported reward. In Table 1, for SD3, 1750 is reported for Hopper; however, for Hopper in the SD3 paper, it seems to be much closer to 3500 than 2882; similarly >4000 for Ant is reported in the SD3 paper compared to the score of 1176 given in this paper. While I accept that there can be differences based on random seeds and unreported hyperparameter settings, these deviations seem pretty large. 
Can the authors please check these numbers or explain the discrepancies?\n- Overall, I do not find the empirical results to be compelling; I do not see that the proposed approach significantly outperforms the existing methods\n\nMinor:\n- For intrinsic exploration, there should be more foundational related work, which should be cited\n- Sometimes, the terminology in the related work is weak, e.g., “Current mainstream DRL algorithms typically utilize an architecture based on one actor and two critics.” -> That is the case for some algorithms like SAC or Double-DQN, but not necessarily for others; would can be more specific here\n- Section 4.4. adds relatively little new information; I think it’s pretty straightforward, and a statement like “space complexity increases due to the parameters of the value network, which includes a five-layer Multilayer Perceptron” does seem too specific to the given implementation\n- The term “mutual information” is an established term, and it is not obvious that the difference between actors/critics (4.2. (2), (4)) reduces mutual information\n- Section 4.5 is difficult to follow in the main paper; maybe this can be re-structured not entirely to rely on the appendix"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Could the authors provide a more detailed explanation of how the mutual learning mechanism between actors directly impacts the exploration process and policy convergence? Specifically, how does minimizing the action differences between actors quantitatively lead to improved exploration outcomes?\n2. In the selective imitation method, why was the value function chosen to determine when imitation should occur?\n3. Have you considered adding a weight parameter between the Q function and the collaborative loss when computing the actor's gradient? Additionally, how does this hyperparameter influence the experimental performance?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes an innovative competitive-cooperative actor-critic framework, which addresses the limitation of independent exploration in existing double-actor methods.\n2. This paper rigorously analyzes the proposed framework and validates its effectiveness through extensive experiments. \n3. This paper proposes a general and scalable framework for improving the performance of reinforcement learning algorithms. Both approaches mentioned in this paper achieve good performance on four widely adopted MuJoCo environments.\n4. This paper demonstrates through an ablation experiment that the best results are achieved when both the actors and critics simultaneously engage in mutual imitation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a framework that promotes mutual learning and imitation between actors. It achieves this by minimizing differences in actions between actors and Q-value discrepancies between critics, thereby improving the performance of reinforcement learning algorithms. The framework is implemented through two specific approaches: Direct Imitation and Selective Imitation. Direct Imitation minimizes differences in actions produced by actors and Q-value discrepancies between corresponding critics, fostering mutual learning between actors. Selective Imitation utilizes the value function to assess actions and imitates only those with higher assessments, thereby avoiding the replication of lower-quality strategies. Additionally, the framework extends to multi-critic architectures. Experimental results demonstrate that this method significantly improves the performance of nine state-of-the-art reinforcement learning algorithms across four MuJoCo tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. This paper does not provide the implementation code, which may limit the reproducibility of the experimental results. To improve transparency and reproducibility, it is recommended that the authors make the code publicly available, for instance by sharing a link to a GitHub repository, so that reviewers and readers can verify and reproduce the experiment.\n2. This paper does not consider the weight between the Q function and the collaborative loss when computing the actor's gradient. It is recommended that the authors consider the weight, and conduct an ablation study on this parameter to explore the impact of different values on the performance in the test environments, or please give a justification for this fixed weight."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This work aims to develop a generic framework that can be seamlessly integrated with existing double-actor DRL methods to promote cooperation among actor-critic pairs"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Competitive-Cooperative Actor-critic Framework for Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ywHOnGOLb1},\nnote={under review}\n}"
},
"abstract": {
"value": "In the field of Deep reinforcement learning (DRL), enhancing exploration capabilities and improving the accuracy of Q-value estimation remain two major challenges.\nRecently, double-actor DRL methods have emerged as a promising class of DRL approaches, achieving substantial advancements in both exploration and Q-value estimation. However, existing double-actor DRL methods feature actors that operate independently in exploring the environment, lacking mutual learning and collaboration, which leads to suboptimal policies. To address this challenge, this work proposes a generic solution that can be seamlessly integrated into existing double-actor DRL methods by promoting mutual learning among the actors to develop improved policies. Specifically, we calculate the difference in actions output by the actors and minimize this difference as a loss during training to facilitate mutual imitation among the actors. Simultaneously, we also minimize the differences in Q-values output by the various critics as part of the loss, thereby avoiding significant discrepancies in value estimation for the imitated actions. We present two specific implementations of our method and extend these implementations beyond double-actor DRL methods to other DRL approaches to encourage broader adoption. Experimental results demonstrate that our method effectively enhances four state-of-the-art (SOTA) double-actor DRL methods and five other types of SOTA DRL methods across four MuJoCo tasks, as measured by return."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep reinforcement learning; Double-actor framework; Competition and Cooperation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8b555d62a985b107123694fbb6fc6262ac96e6cc.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "A Competitive-Cooperative Actor-critic Framework for Reinforcement Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ywKlmMor0f | MMA: Benchmarking Multi-Modal Large Language Model in Ambiguity Contexts | main | Active | Multi-Modal Large Language Model;Ambiguity;Benchmark | datasets and benchmarks | 3;5;5;5 | 4;4;3;4 | 2;3;3;3 | 2;3;2;3 | 2;3;2;2 | 4.5 | 3.75 | 2.75 | 2.5 | 2.25 | -0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The authors mention in the paper that images are collected from a search engine and generated by models. While the former might cause potential licensing issues, the authors do not provide license and ethics statements regarding the image usage."
},
"flag_for_ethics_review": {
"value": [
"Yes, Legal compliance (e.g., GDPR, copyright, terms of use)"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Requiring clarification on the dataset quantity: in the abstract and Appendix, the authors state there are 261 questions, but in Table 3 the numbers in parentheses are inconsistent.\n2. How might the benchmark be adapted to evaluate MLLMs' ability to explain their reasoning process in addition to selecting the correct answer?\n3. Could you provide more detail about the criteria used to determine whether generated images were suitable replacements for real-world images?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* The benchmark's design of using paired images for the same ambiguous text is innovative and well-suited to the research question. Such a well-designed multi-image QA setting could also be useful for exploring other capabilities of MLLMs. Moreover, I think it could prompt the community to think about how existing multimodal understanding LLMs actually work, e.g., do they really understand the contexts?\n\n* The categorization of ambiguity types (lexical, syntactic, semantic) provides a structured framework for analysis\n* The human evaluation provides a strong baseline for comparison."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a benchmark designed to evaluate how well MLLMs handle ambiguity in language when provided with visual context. The main distinguishing design is to provide the same text with two different visual contexts, and to count a model's answer as correct only if both answers are correct.\n\nThe benchmark consists of 261 (?) questions in total, each paired with two different images that suggest different interpretations of the same ambiguous text, requiring models to leverage visual information to disambiguate meaning. The questions are categorized into three types of ambiguity: lexical, syntactic, and semantic. The authors evaluate various MLLMs, including both proprietary and open-source models, on their ability to correctly interpret ambiguous questions when given different visual contexts."
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The dataset size is relatively small (261 questions), particularly for certain ambiguity subcategories.\n* The annotation protocol of the human baseline is missing. The authors should provide a thorough description since it's the basis of almost all the analysis.\n\n- The image data collection pipeline has potential ethical and bias issues: 1) the use of generated images for some test cases might not fully reflect real-world scenarios (which cannot be fully solved and is also discussed in the literature); 2) the statement on the usage of images from \"Google\" is quite vague and could cause license issues.\n- The multiple-choice format, while practical for evaluation, might not capture the full range of possible interpretations, especially considering the options could provide additional context for the models. How do you ensure that the multiple-choice options don't inadvertently provide hints about the correct interpretation? Could you elaborate on the option design process?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How can the dataset be expanded and improved to overcome the limitations of size and representativeness?\n2. Based on your findings on this dataset, do you have any suggestions for building better MLLMs?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This paper proposed a novel benchmark aimed at evaluating MLLMs’ ability to leverage visual information to clarify the ambiguities in texts. The task is designed to rely on both text and image information.\n2. This work conducted comprehensive evaluations on 24 proprietary and open-sourced MLLMs. The categorization of ambiguities into different types allows for a detailed analysis of model performance.\n3. The results, such as models' error consistency rate and the performance differences between different types of ambiguity and model types, offer valuable insights for future research and development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents MMA, a benchmark for evaluating Multi-Modal Large Language Models (MLLMs) in ambiguous contexts. It uses a multiple-choice visual question-answering format with 261 questions and associated pairs of images representing different scenarios. The benchmark categorizes ambiguities into lexical, syntactic, and semantic types. Through experiments on 24 MLLMs, it shows that models often overlook image information, perform better on lexical ambiguity, and that proprietary models generally outperform open-source ones."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The dataset size is limited in some categories due to constraints on the number of participants. This may affect the representativeness and generalizability of the results.\n2. The authors should explore how to do the data collection in an autonomous way instead of relying on human labor. Only in this manner can the data size be increased significantly.\n3. The authors should propose valuable suggestions for MLLM data preparation, pretraining, and post-training. Some experiments on the open-source MLLMs would add some value to this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Data Curation: In the Appendix, I found a section on dataset distribution, but there seems to be no dedicated section on data curation. Based on A.3, the data is generated using text-to-image models. This raises questions about quality control, model output risk management, and the generated images' accuracy and safety. It would be beneficial to clarify how the outputs from these models are vetted for correctness and potential risks.\n\nDataset Construction: What is the source and motivation behind generating the specific types of questions? The rationale for distributing lexical, syntactic, and semantic ambiguities is also unclear. Why were these particular proportions chosen? If all three types are equally significant, a balanced distribution might be expected, or there should be an explanation if certain ambiguity types are more prevalent in real-life scenarios. (If I missed an explanation in the paper, please direct me to it.)\n\nTable 4: I am unclear about the motivation behind this table. For instance, in the lexical category, items should ideally have two distinct interpretations. Correctly answering one interpretation does not imply the model has accurately handled the ambiguity by answering the other interpretation. Could you elaborate on the intended purpose of this analysis?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Novel Benchmark: The introduction of MMA is innovative, providing a unique tool to evaluate MLLMs' handling of ambiguity across multiple categories.\nComprehensive Analysis: The study rigorously assesses performance across lexical, syntactic, and semantic ambiguity types, offering granular insights into model capabilities and limitations.\nValuable Findings for Future Improvements: The paper's results highlight weaknesses in current MLLMs, focusing on the need to improve model handling of syntactic and semantic ambiguities.\nBroad Applicability: The benchmark has the potential for wider application in fields requiring high-precision understanding of ambiguous language, such as natural language processing in complex human-computer interactions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"MMA: Benchmarking Multi-Modal Large Language Models in Ambiguity Contexts\" introduces the MMA benchmark, specifically designed to assess how Multi-Modal Large Language Models (MLLMs) handle ambiguous contexts. MMA presents questions associated with pairs of images that imply distinct scenarios, testing MLLMs' ability to resolve ambiguities through visual cues. The benchmark categorizes ambiguities into lexical, syntactic, and semantic types and evaluates 24 MLLMs, finding that while humans achieve an accuracy of nearly 89%, MLLMs struggle, achieving only around 53.22%. The results reveal that MLLMs are particularly challenged by syntactic ambiguities, with open-source models generally performing worse than proprietary ones. This study highlights areas for improvement in MLLMs’ integration of visual and textual information to handle ambiguity effectively."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors have placed the dataset distribution and construction details in the Appendix rather than the main text, which makes it challenging to follow the methodology while reading. Including these aspects in the main body of the paper would improve readability and help readers better understand the dataset's structure and rationale as they go through the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Question:\n- Why use near-native English speakers for evaluations?\n\nComments:\n- The paper could potentially benefit from deeper analysis, such as:\n * Currently the scaling law is only shown for VILA1.5; it would be interesting to evaluate other models with multiple sizes (Qwen, LLaVA-NeXT, etc.) and to check whether the scaling law also exists for the single-modality (text) eval.\n * Quantitative and qualitative error analysis for MLLMs’ mistakes with natural images versus generated images.\n * A comparative analysis of human errors and mistakes made by MLLMs."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper provides a novel evaluation benchmark and it is an interesting addition to the existing multimodal evaluation. \n- The benchmark is well-designed, covering multiple types of ambiguity.\n- The paper is well-written, and the evaluation is thorough, covering both closed-source and open-source MLLMs, as well as human evaluation baselines."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a benchmark for evaluating multimodal large language models’ (MLLMs) understanding of ambiguous language. The benchmark contains 261 questions and 522 natural or generated images covering lexical, syntactic, and semantic ambiguities. The authors evaluated 16 state-of-the-art MLLMs and found significant performance gaps between MLLMs and human accuracy. While MLLMs handle lexical ambiguities relatively well, they struggle with other types. Detailed evaluation using text-only data reveals that this performance gap is primarily due to the models’ limited ability to effectively integrate multimodal information."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed MMA dataset is relatively small.\n- Some technical details are missing in the current paper. For example, how many people contributed to the dataset/question creation, the annotation process/instructions, the percentage of natural images versus generated images, etc.\n- Some technical practices:\n * The benchmark includes natural images with unknown licensing that are not under Creative Commons. Although the authors express a willingness to pay for these images in the appendix, this practice raises concerns.\n * Near-native English speakers are recruited for human evaluations instead of native English speakers; such a design choice is not well justified."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a benchmark to evaluate the performance of current MLLMs in ambiguity contexts, and the results demonstrate that current MLLMs lag behind human performance by about 36.85% on average"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024mma,\ntitle={{MMA}: Benchmarking Multi-Modal Large Language Model in Ambiguity Contexts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=ywKlmMor0f},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-Modal Large Language Models (MLLMs) recently demonstrated strong capabilities in both instruction comprehension and responding, positioning them as promising tools for human-computer interaction. However, the inherent ambiguity of language poses a challenge, potentially leading models astray in task implementation due to differing interpretations of the same text within varying contexts. In multi-modal settings, visual information serves as a natural aid in disambiguating such scenarios. In this paper, we introduce the first benchmark specifically designed to evaluate the performance of \\textbf{M}LL\\textbf{M}s in \\textbf{A}mbiguous contexts (MMA). This benchmark employs a multiple-choice visual question-answering format and includes 261 textual contexts and questions with ambiguous meaning. Each question is linked to a pair of images that suggest divergent scenarios, thus leading to different answers given the same question. These questions are stratified into three categories of ambiguity: lexical, syntactic, and semantic, to facilitate a detailed examination of MLLM performance across varying levels of ambiguity. By evaluating 24 proprietary and open-sourced MLLMs, we find that: (1) MLLMs often overlook scenario-specific information provided by images to clarify the ambiguity of texts. When presented with two different contextual images and asked the same question, MLLMs achieved an accuracy rate of only 53.22\\% in answering both correctly, compared to human performance at 88.97\\%. (2) Among the three types of ambiguity, models perform best under lexical ambiguity and worst under syntactic ambiguity. (3) Open-sourced models generally perform significantly lower than proprietary MLLMs, with an average performance gap of 12.59\\%; Claude 3.5 Sonnet emerges as the top model, achieving 74.32\\% accuracy. These findings first underscore the current limitations of MLLMs in integrating visual information to clarify textual ambiguities and highlight critical areas for future improvements. The codes and benchmark data are \\href{https://github.com/AnonymousSubmitter-gpu/MMA_Anony}{available}."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Modal Large Language Model",
"Ambiguity",
"Benchmark"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/6b15da2be50eaa85b587f5d599071cc80c564128.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/ef15499b04b2b077d12a71edb452535668b73bb0.pdf"
},
"title": {
"value": "MMA: Benchmarking Multi-Modal Large Language Model in Ambiguity Contexts"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
ywgwArtbDq | Seeing Through the Mask: Rethinking Adversarial Examples for CAPTCHAs | main | Withdraw | CAPTCHAs;Adversarial examples;Vision models;Robust models | other topics in machine learning (i.e., none of the above) | Andreas Plesner;Yahya Jabary;Turlan Kuzhagaliyev;Roger Wattenhofer | ~Andreas_Plesner1;~Yahya_Jabary1;~Turlan_Kuzhagaliyev1;~Roger_Wattenhofer1 | 1;3;3;5 | 5;4;4;3 | 1;2;2;2 | 2;1;2;2 | 2;2;2;3 | 3 | 4 | 1.75 | 1.75 | 2.25 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "In lines 195-200, the paper says that the method uses a weighted average metric to capture various aspects of image quality. How are these weights selected? Is it possible to use only one metric?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper proposes more aggressive perturbations to apply to images, as the limit is not imperceptibility but rather semantic preservation for humans in CAPTCHAs.\n\nThe experiments are conducted using five models, including ConvNeXt, EVA02, ResNet, ViT-H-14 and RoBERTa-L. The results show that the proposed masks can reduce the accuracy of these models."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the problem of making CAPTCHAs fool models with adversarial samples. The paper defines four masks, including \"Circle\", \"Diamond\", \"Square\" and \"Knit\", and applies these masks to the images at various intensities. The experiments are conducted on the constructed datasets using the four masks, and the drops in Acc@1 and Acc@5 accuracy are calculated."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "There are no comparison methods in the main results, e.g., Tables 1 and 2. It is difficult to understand the advantage of the proposed methods compared with other adversarial samples.\n\nThe novelty is limited. The paper proposes to apply different masks to images for constructing the datasets, and then calculates the accuracy of images in the constructed dataset.\n\nIt would be better to show some visualizations, e.g., images with masks at various intensities."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Strengths:\n\n- The method of introducing periodic noise into image CAPTCHAs to challenge the imperceptibility constraints in adversarial attacks is both novel and well-founded.\n- The dataset and experimental setup are extensive and well-executed, offering compelling evidence for the conclusions drawn."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work challenges the traditional constraints of imperceptibility in adversarial attacks by introducing periodic noise into image CAPTCHAs, making them resistant to CAPTCHA recognition attacks. By allowing more substantial modifications to the images while preserving their semantic information and ensuring they remain solvable by humans, this approach is capable of deceiving many state-of-the-art models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have significant concerns about the effectiveness of the periodic noise method. It appears that the authors trained their models on standard images and then evaluated them using masked images, which understandably results in a substantial drop in performance. If an attacker were to learn how to apply this periodic mask technique and train with noisy images, the validity of this approach would be greatly undermined."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "The paper proposes a new attack that may disrupt current image recognition models."
},
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What is the attack performance in other datasets?\n2. How robust is the attack if the image model owner finds the attack pattern? Can he remove the attack?\n3. Why is it easy to deploy in large-scale CAPTCHA systems? How much resources will the attack consume? Is it memory-efficient and time-efficient?\n4. How do you understand the difference between human and machine perception?\n5. Why is RoBERTa robust against AEs?\n6. Why are the manipulated images still semantically useful for human users? How do you find that?\n7. What about other types of CAPTCHA?"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The writing is clear and easy to understand.\n2. The paper inspects the adversarial examples from a new perspective, which holds an assumption different from traditional ones on stealthiness. Instead, the attack, in this case, preserves \"functionality\" for human beings. The angle is refreshing.\n3. The experiments include some of the largest and most advanced transformer models, which is an outstanding point."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper reviews the common imperceptibility assumptions adopted by most adversarial attacks and proposes a new attack against automated image models to recognize CAPTCHA by incorporating filters like repeated patterns and words. The paper shows its effectiveness against different models like ViT and RoBERTa. It also shows the attack performance under various parameter settings like opacity. The paper states that this attack can disrupt automated bypassing while preserving the semantic functionality for human users."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **The experiments are only conducted on a portion of ImageNet, up to 5000**. This makes all the insights gained less convincing.\n2. **Many contributions or claims are not validated**. \n- *\"The simplicity and ease of execution of the proposed attacks make them readily available to large-scale CAPTCHA systems.\"*: While the attack might seem \"too easy,\" have you tried to deploy it in a large-scale CAPTCHA system? If not, how much resources will the attack consume? Is it memory-efficient and time-efficient?\n- *\"Our research aims to understand and leverage the difference in human and machine perception.\"*: I do not see any insights from understanding the difference except the trade-off between quality and ASR. The stronger the attack is visually, the stronger it will be functionally. Not surprising or interesting enough.\n- \"We challenge the constraint of imperceptibility in adversarial attacks.\": The constraint is naturally relaxed due to this specific problem. You are not challenging the imperceptibility generally for adversarial attacks.\n- \"thus showing that machines have not caught up with humans–yet.\": Showing that adding patterns fails ViT in CAPTCHA cannot act as proof of the sweet victory of humanity.\n3. **The evaluation against robust models is insufficient.** Firstly, the paper does not reference the \"robustness\" of RoBERTa. Secondly, the paper does not show the attack can bypass certified robust models, smoothed models, or adversarially trained models.\n4. While the paper claims that **the attack does not negatively influence human beings in recognition, there is no validation at all**. A user study might be helpful. \n5. While I appreciate enhancing the resistance of CAPTCHA against deep-learning image models, it is unclear whether this problem is significant considering CAPTCHA in the real world. According to my experiences, most of the CAPTCHA I encounter nowadays are all different kinds of fancy and weird puzzles. 
The simplest one might be the one to identify the regions containing the target object. I wonder how significant the problem is considering the scope. Maybe providing some statistics about adopting the \"classification-based\" CAPTCHA might be helpful. \n6. The claim that \"CAPTCHA does not need imperceptibility\" is unclear and not convincing. I have two guesses for the authors' intended meaning: (1) The background of CAPTCHA is naturally complex, so the attack looks natural. In this case, the attack's stealthiness still needs to be evaluated. (2) The attacker does not need to make the manipulation invisible as long as the human user can still recognize the object. Now, it seems quite easy to find the attack since they are perceptible. What if the model holder finds out the added patterns and gets them removed? How hard is it to nullify the attack (a.k.a, how robust is the attack itself)? \n7. The paper's writing quality is insufficient, at least for a top conference like ICLR. The paper's language is casual, making the paper read like a technical blog or a homework report. There are many freewheeling claims, as mentioned above. Also, in my perspective, it would be more appropriate to position the paper's contributions in a battle game or \"protection\" against automated scrapping bots rather than \"advancing and understanding robust computer vision systems\"."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Given your aim to leverage the differences in perception between humans and machines, could you elaborate on why there was no thorough examination of how vision models interpret adversarial examples?\n\n2. Why did you not consider existing literature that discusses vulnerabilities in Captcha? How do you believe your CAPTCHAs would perform against the attacks presented in that study?\n\n3. The paper appears to have a technical report-like structure rather than a comprehensive research study. Could you clarify your rationale for this approach and discuss whether you plan to expand on any sections in future work?\n\n4. Have you considered conducting studies to assess how diverse groups of users interact with and perceive your CAPTCHAs? What plans do you have for including this type of analysis?\n\n5. The choice of datasets used for your experiments seems limited. Can you explain your reasoning behind using only ImageNet-based datasets, and how do you plan to address the generalizability of your findings to different CAPTCHA contexts?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper introduces a new approach by using visibly perturbed images for CAPTCHAs, potentially enhancing security mechanisms.\n \n2. It aims to leverage the discrepancies in human and machine perception and the existence of AI-hard tasks where humans surpass machines, which could provide new insights into CAPTCHA design.\n\n4. The detailed description of the method and results is useful and easy to follow, making the findings accessible to readers."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a new approach to CAPTCHA design leveraging insights from geometric adversarial perturbations by adding visible geometric patterns (like circles, squares, diamonds, and knit patterns) to images while preserving the semantic information. This makes them difficult for computer vision models to interpret but still easy for humans to interpret. The authors found that these patterns significantly lowered model accuracy, even with robust vision transformers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While the paper mentions leveraging differences in perception, it lacks a thorough analysis of how vision models interpret adversarial examples. This is particularly relevant given their stated contribution of understanding the difference in human and machine perception.\n\n2. The authors do not consider existing literature that demonstrates vulnerabilities in hCaptcha and successful large-scale attacks, such as \"A Low-Cost Attack Against the hCaptcha System\" by Hossen and Hei. The authors need to check if their CAPTCHAs can withstand these attacks.\n\n3. The paper feels more like a technical report rather than a comprehensive research study, lacking depth in certain areas that would typically be expected in a research paper.\n\n4. The choice of datasets, while practical, may not fully represent the diverse real-world images and contexts CAPTCHAs encounter. Relying solely on ImageNet-based datasets could limit the generalizability of findings across different CAPTCHA scenarios.\n\n6. The paper focuses heavily on machine performance but does not provide a comprehensive assessment of human performance on images with the applied masks. This omission raises questions about how human users actually experience these modified CAPTCHAs and how intuitive they are for practical use."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nplesner2024seeing,\ntitle={Seeing Through the Mask: Rethinking Adversarial Examples for {CAPTCHA}s},\nauthor={Andreas Plesner and Yahya Jabary and Turlan Kuzhagaliyev and Roger Wattenhofer},\nyear={2024},\nurl={https://openreview.net/forum?id=ywgwArtbDq}\n}"
},
"abstract": {
"value": "Modern CAPTCHAs often rely on vision tasks that are supposedly hard for computers but easy for humans. Although image recognition models pose a significant threat to such CAPTCHAs, they can be fooled by hiding ``random'' noise in images. However, these methods are model-specific and thus can not aid CAPTCHAs in fooling all models. \n We show in this work that by allowing for more significant changes to the images while preserving the semantic information and keeping it solvable by humans, we can fool many state-of-the-art models. Specifically, we demonstrate that by adding masks of various intensities the Top 1 Accuracy (Acc@1) drops by more than 50%-points for all models, and supposedly robust models such as vision transformers see an Acc@1 drop of 80%-points. \n These masks can therefore effectively fool modern image classifiers, thus showing that machines have not caught up with humans -- yet."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Andreas_Plesner1",
"~Yahya_Jabary1",
"~Turlan_Kuzhagaliyev1",
"~Roger_Wattenhofer1"
]
},
"authors": {
"value": [
"Andreas Plesner",
"Yahya Jabary",
"Turlan Kuzhagaliyev",
"Roger Wattenhofer"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"CAPTCHAs",
"Adversarial examples",
"Vision models",
"Robust models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "plesner|seeing_through_the_mask_rethinking_adversarial_examples_for_captchas"
},
"pdf": {
"value": "/pdf/4a57da276df1c6eb8a225f9d3f37dc8778486b4c.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Seeing Through the Mask: Rethinking Adversarial Examples for CAPTCHAs"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
yx8bU8T5ZN | A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models | main | Active | Large Language Models;Delta Parameters Editing | foundation or frontier models, including LLMs | 1;3;3 | 4;4;5 | 1;2;1 | 1;1;2 | 3;1;2 | 2.333333 | 4.333333 | 1.333333 | 1.333333 | 2 | 0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see weaknesses"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper is clearly written."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Authors propose using an approximation term to evaluate various methods to compress the model. In particular, authors use Riemann sum to establish the connection between delta W and delta L. Authors discuss different cases where approximation term (delta L) is equal to, larger than, or smaller than 0."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The derivation in 4.1 is mathematically trivial due to this Riemann sum assumption. Either the locally constant assumption of Riemann sum is too strong or the expectation term derived in (5) is too strong. The math shows delta L is 0 regardless of p. If p=0.999, should the loss still be zero? In addition, I cannot see a connection between 4.1’s experiment and theory. The theory shows L is zero.\n2. For the same reason, the math derivation in 4.2 is trivial. Adding a k does not have any effect on the proof.\n3. Authors derived delta L in section 5 (larger than 0) and section 6 (smaller than 0), but there is no theoretical implication about why delta L in section 5 is positive and why delta L in section 6 is negative. The experiment results are empirical and whether it’s positive or negative has already been studied in BitDelta and EXPO.\n4. A clear contradiction is: in section 4 when delta L is derived as zero, the experiment’s delta is 1e-5. While in section 6 when delta L is derived as non-zero, the absolute value of delta L is 1e-6, which is an order of magnitude smaller than the loss that is derived as zero. The experiment results also imply that the derivation in section 4 is false.\n5. Overall I don’t see any value in the math derivation of this paper. The experiment part is also mostly expected after reading the paper that the respective section is referring to."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weaknesses section.\nPlease let the reviewer know if there is any misunderstanding about the paper."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "- Present a novel unified view of existing model merging methods that is lacking so far.\n- The proposed Riemann sum approximation of loss difference based analysis is interesting and gives insights for future work on theoretical understanding of model merging."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a unified view of weight-space editing methods (a.k.a. model merging) through the lens of approximated loss difference. The authors categorize existing methods into three classes -- maintained performance, increased performance, and decreased performance, and generalize the existing methods by analyzing the crucial factors in the proposed loss difference approximation framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Eq. (1), the main proposed theoretical framework in this work, has a severe technical flaw**.\n - Specifically, the authors analyze $\\Delta L = L(W_{POST} + \\Delta \\tilde{W})-L(W_{POST})$ to discuss the performance of existing model merging methods.\n - However, all the existing model merging methods mentioned in this work such as DARE [1], TIES-Merging [2], BitDelta [3], and so on, applied the edited delta parameter to the $W_{PRE}$ rather than $W_{POST}$. \n - Therefore, the desired analysis should be conducted on the loss term such as $L(W_{PRE}+\\Delta \\tilde{W})-L(W_{POST})$ or $L(W_{PRE}+\\Delta \\tilde{W})-L(W_{PRE})$ rather than the current form ($L(W_{POST}+\\Delta \\tilde{W})-L(W_{POST})$) to make any claims on the final downstream performance of existing merging methods.\n- **Limited contributions**\n - Although the authors provide some generalization of existing methods, e.g., multiplies a magnitude hyperparameter $k$ in DARE framework, the novelty and innovativeness of these generalizations are too limited and the implications are also not surprising and uninformative. It seems like just reporting a result from engineering. 
Presenting more rigorous generalizations and providing profound implications from the proposed unified framework will improve the quality of this work significantly.\n- **Unreasonable experiment setup**\n - Regarding the EXPO [4] method, the authors claim that the relative effectiveness of extrapolation and interpolation depends on the dataset, which shows the performances of interpolation and extrapolation over some NLP downstream tasks.\n - However, the motivation of EXPO is focused on the alignment for enhancing the instruction-following capability of large language models, and the authors should conduct the experiment about EXPO on that kind of benchmark such as AlpacaEval 2.0 adopted in the EXPO paper.\n- **Bad presentation and validity of claims**\n - The quality of some presentations is not good enough for the purpose of publication. For example, see Figure 3. It is much better to omit the pre-train models' performance here to highlight the more important parts -- a comparison between varying $k$ values.\n - Moreover, although the authors make an argument based on some bar plots (Figure 1, Figure 4, Figure 7) they state the differences have some trend, and the absolute scale is too small among the comparison participants, which raises concerns about the statistical significance. \n\n\n\n> Reference\n1. Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch, Yu et al. 2024\n2. TIES-Merging: Resolving Interference When Merging Models, Yadav et al. 2023\n3. BitDelta: Your Fine-Tune May Only Be Worth One Bit, Liu et al. 2024\n4. Weak-to-Strong Extrapolation Expedites Alignment, Zheng et al. 2024"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Delta parameter editing is an important topic in LLM efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose a framework using Riemann sum approximation to analyze delta editing (pruning, compression ..) methods based on their effects on model loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The authors attempt to cover a broad scope in their analysis, but the exploration is often shallow, with certain aspects appearing incremental or inaccurate. \n1. Assumptions weaken the generalizability and accuracy of their conclusions:\n\n a. In Equation (4), the authors address the randomness in DARE [1] by asserting that “it is straightforward to deduce that,” leading to Equation (5). This is wrong as this need to rely on $\\Delta W_{ij}$ and $\\Delta \\mathcal{L}_{ij}$ being uniformly distributed, which is generally not the case, making the equality invalid. \n\n b. In BitDelta [2] , the authors state that it is difficult to conclude that Equation (9) equals zero due to the interaction between $\\text{sign}(\\Delta W_{ij}) \\Delta \\mathcal{L_{ij}}$. This approach is inconsistent, as they assume uniformity of $\\Delta W_{ij}$ and $\\Delta \\mathcal{L}_{ij}$ when proving Equation (5) for DARE, but not for BitDelta in Equation (9). If the uniformity also assumed for BitDelta, the $\\mathcal{L}=0$ should hold. Consequently, the conclusion that BitDelta performs worse than DARE is questionable.\n\n2. Question on EXPO[3]:\n\n a. Incremental Analysis and Use of EXPO Framework: The paper uses the similar framework as EXPO, particularly referencing Equation (2) in EXPO, where EXPO use first-order Taylor Expansion with an alignment objective (which functions similarly to the loss used in the current paper). This similarity can be seen as an incremental extension rather than a substantial innovation.\n\n b. Claim on Gradient Correlation with Delta Parameter: In section 2.2 of EXPO, EXPO already established that the success of their approach depends on a positive correlation between the gradient and the delta parameter, highlighting a direct relationship with the approximation term. Given this established finding, it appears that the new paper is building on known results rather than offering a novel insight in this area.\n \n c. 
Claims on EXPO Limitations: According to EXPO, extrapolation can improve performance when a positive correlation between the gradient and the delta parameter, a point that the new paper seems to question. However, if EXPO already addressed this with clear justification, the claim in the current paper may not hold strong novelty or accuracy. If they misunderstand EXPO’s stance on extrapolation, it could weaken their argument about EXPO’s limitations.\n\n3. I feel the logic in DARE section is unclear. First, the extension of DARE lacks a clear connection to the theorem in Section 4.1, making the motivation for introducing k unclear. I though authors will give motivation in Section 4.3, but cannot found it. Also, in section 4.3, the authors claim that DARE overlooks delta loss. However, the original DARE analysis of random pruning considers both delta parameters and the input. Specifically, the authors’ analysis in Equation (4) focuses on delta parameters and delta loss, and due to the linear approximation, delta loss can be proportional to the input x. This resemblance to DARE makes it inappropriate to claim that DARE disregards delta loss (represented by x in DARE’s case).\n\n4. I also find the logic in analyzing BitDelta [2] unclear. Similar to DARE, the motivation for introducing noise to mean magnitude lacks a clear connection to the theorem in Section 5.1. Additionally, the original BitDelta [2] already demonstrates that calibrating the scaling factors can improve performance as their contribution. This resemblance to BitDelta makes it inappropriate to claim that BitDelta overlooks this issue, which limits the novelty of the current approach.\n\n\n\n[1] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch\n\n[2] BitDelta: Your Fine-Tune May Only Be Worth One Bit\n\n[3] Weak-to-strong extrapolation expedites alignment."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel perspective based on Riemann sum approximation of the loss function to elucidate delta parameter editing operations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024a,\ntitle={A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yx8bU8T5ZN},\nnote={under review}\n}"
},
"abstract": {
"value": "Post-training has emerged as a crucial paradigm for adapting large-scale pre-trained models to various tasks, whose effects are fully reflected by delta parameters (i.e., the disparity between post-trained and pre-trained parameters). While numerous studies have explored delta parameter properties via operations like pruning, quantization, low-rank approximation, and extrapolation, a unified framework for systematically examining these characteristics has been lacking. In this paper, we propose a novel perspective based on Riemann sum approximation of the loss function to elucidate delta parameter editing operations. Our analysis categorizes existing methods into three classes based on their post-editing performance: competitive, decreased, and improved, explaining how they are expressed by the Riemann sum approximation term and how they alter the model performance. Extensive experiments on both visual and language models, including ViT, LLaMA 3, and Mistral, corroborate our theoretical findings. Furthermore, we introduce extensions to existing techniques like DARE and BitDelta, highlighting their limitations in leveraging the properties of delta parameters and reorganizing them into general expressions to enhance the applicability and effectiveness of delta parameter editing in post-trained models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"Delta Parameters Editing"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b3c1269ccd9d27d5b691d75d7a24e274669b618d.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/23519424fc6d01093287e2d043a3ac3ad2ac2cc4.zip"
},
"title": {
"value": "A Unified View of Delta Parameter Editing in Post-Trained Large-Scale Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yyIHdaSDUU | Adaptive Vision Encoders: Balancing Efficiency and Robustness in Vision-Language Models | main | Active | large vision-language models;multimodal learning;continual learning | transfer learning, meta learning, and lifelong learning | 1;3;3;3 | 3;3;4;4 | 1;2;2;2 | 1;1;2;2 | 1;3;1;2 | 2.5 | 3.5 | 1.75 | 1.5 | 1.75 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "N/A"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "N/A"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method to improve the robustness of vision-language models (VLLMs) on diverse domains. The proposed approach, called Low-Rank Adaptation with Structured Updates (LoRSU), selectively updates the model parameters to improve the robustness. Through experiments, the authors claim that LoRSU is able to improve the VLM performance with less than 10x compute. The author also provided theoretical justification on the strategy of proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Overall, this paper is not in good quality and is more like something generated by LLMs. Here are some of the reasons:\n1. The argument of this paper are usuall unclear and meaningless (e.g. \"DO WEAKNESSES IN CLIP PROPAGATE TO THE VLM\" and \"separately updating the vision encoder\"). The content is usually out of context and not organized in a logical way. \n2. The claims in abstract and introduction are not supported and inconsistent to context on the later pages (e.g., the claim \"improvements on data where previous mistakes occurred\" in abstract is never discussed in the method or experiment parts)\n3. There are a lot of invented terms that are not consistent or do not exsit.\n4. The method and theory parts (section 4) do not make any sense. For example, the datasets of TSI, DALL-E, GTS, AIR and CAn (Table1, 2 and 3), the model LLama-2+Pj and CLIP-L-14, and the method of LN, F-FT, F-EWC on Table 4.\n5. The last 2 papergraphs of introduction (Line 95 and Line 111) are identicail. \n\nThere're also many other evidences that could be eazily identified in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Most of the questions are listed in the weaknesses. \n\nhowever, two additional questions: \n\n1. what is the control set for the VQA? \n2. why did the authors choose to present target improvement and not absolute results? the current state of presenting results is highly confusing."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The studied problem is relevant. It is indeed helpful for improving the OOD robustness of the VLMs. \n2. The method seems to provide some improvements on some benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on the OOD robustness of the present-day vlms. The authors hypothesize that the lack of robustness in the vlms stems from the vision backbone of the vlms. They propose ways to mitigate it with selective parameter optimization and test it on a continual learning and also offline setting. The results show improvements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The writing lacks clarity. The paper can get a lot of help by improving the writing. For example, a paragraph is repeated 2 times in the introduction. Proofreading can be helpful. \n2. Many details are missing in the paper. For example, what is DALLE in Table. 1? Table. 2 - how did the authors test these methods? \n3. The paper also lacks suitable ablations to back the type of method chosen for the selection of parameters to update. What is the exact rationale behind it? \n4. The writing is very dense in the evaluation section. After multiple readings, I cannot understand exactly the evaluated metrics. Also, what is the need to use these metrics? Is there some background literature on these? Specifically talking about Target Improvement and Average Control Change. \n5. Simpler methods are not compared. Like peft methods. I can think of visual prompt tuning method from the top of my head."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Following on the weaknesses:\n1. How will the presentation be improved?\n2. How does the preliminary analysis (Sec. 3) advance the studies already present in the literature?\n3. The are mixed results (e.g., SPU sometimes works comparably to LoRSU): when are the cases where the further LoRA fine-tuning of the most important heads is needed? and why?\n4. What is the main technical contribution w.r.t. SPU and LoRA?\n5. Is there a specific motivation behind focusing on the vision encoders tuning? Is it for the training/computational advantages shown in Fig. 2?\n6. How have been the hyperparameters selected? and how the 800 datapoints used for importance estimation?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Despite the overlap/findings present in other works, the analysis in Tab. 1 and 2, showing that CLIP issues propagate to LLaVA is a clear motivating example for the need to adapt the CLIP encoders.\n\n2. The article combines existing techniques (i.e., SPU, LoRA) in a sound manner, achieving good results across a wide range of tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The article tackles the problem of improving the performance of vision-language models across multiple visual domains and tasks. To achieve this, it proposes LoRSU (Low-rank adaptation with structured updates) that selectively updates a subset of the parameters in each transformer layer, i.e., the first linear layer of the MLP block, as in Zhang et al. (2024) and the most informative attention head (estimated via the task-specific loss). Experiments show that LoRSU achieves better or comparable results with existing adapters (e.g., LoRA, SPU)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lines 112-128 are a repetition of lines 96-110. This unfortunate mistake harms the presentation because (i) it denotes a lack of thoroughness in proofreading the manuscript; (ii) it reduces the effective length of the paper to ~9.5 pages, taking out space from potential additional analyses or findings the reader may have benefited from. Presentation is part of the research process and carefully proofreading the manuscript is essential to present the contribution in the best possible way. \n\n2. A core part of the manuscript is the presentation of the shortcomings of CLIP's visual encoder. This is stressed in lines 44-46 and verified on tests on action recognition datasets (Table 1): TSI (Toyota Smart Home) and a synthetic one (generated via DALL-E). There are two takeaways from these results: (i) CLIP has shortcomings on rare domains/distributions and (ii) those are propagated to large multimodal models using it (Table 2). This is considered a contribution of the manuscript, as stressed in lines 99-101 (and 115-117, due to the repetition). However, manuscripts have already shown the limitations of CLIP on benchmarks wider and more structured than the one presented in Table 1. Examples are [a,b,c] focusing on various types of compositionality (e.g., 8 textual modifications in [a], 10 challenges in [c]), [d] focusing on low resource challenges (i.e., rare domains) and [e] already showed how CLIP issues propagate to VLMs using it (Fig. 6 in their paper). The claim contribution (1) is not clear w.r.t. these works as well as the contribution of Sec. 3. \n\n3. The technical contribution w.r.t. previous work is unclear. LoRSU combines two techniques: (i) selectively updates the parameters of the first linear blocks and (ii) selects which parameters to update based on the task-specific loss. The first has been presented in SPU, Zhang et al. (2024), as acknowledged in lines 92-93 and 225. 
Looking at the results presented in the appendix, SPU is often comparable to LoRSU, even outperforming it in some scenarios (e.g., Tables 7, 9, 10). For point (ii), the update is done via LoRA adapters, with the main difference being the focus on specific heads via the gradient of the task-specific loss (following what is done in SPU as well to select parameters). However, there is no ablation showing how the number of heads picked influences the final performance. All in all, there is a lack of analysis justifying the various design choices, with the advantages mostly shown via the empirical results on downstream datasets against the competitors. It would be helpful to include additional ablation studies/analyses (e.g., pruning ratio, where to apply LoRA layers, etc.) and to expand on the contribution/practical advantages w.r.t. SPU and LoRA.\n\n4. Following on the previous point, the hyperparameter choices are not justified or thoroughly analyzed. For instance, Appendix B states that SPU and LoRSU use different sparsity ratios (e.g., 15% for the first, 10% for the latter) without analyzing the impact of this choice. The same goes for the data points used: Appendix B states 800 data points to compute gradients, without further details on how they are picked. \n\n5. It is hard to parse the results as they are now. Instead of the most commonly used accuracy metric (adopted in Table 1 and Table 2), the main tables (3, 4, and 5) report target improvement and average control change. Those are harder to grasp, especially due to the lack of reference points to ground the results themselves. It would be better to report the results as done in the Appendix (i.e., with the natural accuracy choice) or use other metrics commonly used for continual learning (e.g., as done in SPU with average accuracy and forgetting). It could also be helpful to expand on why these metrics have been chosen (lines 355-364)/what they add w.r.t. those already present in the literature.\n\n6. 
While it is always interesting to see methods linked to theoretical justification, the proof in Sec. 4.1 does not expand on the principles previously defined. Specifically, Eq. (5) defines gradients as the criterion for pruning and Eq. (10) uses the same gradients to define the optimization problem, stating that we want to preserve only a subset of the heads (i.e., S). It simply follows that the optimal subset of heads is the one that leads to the largest overall gradient (i.e., the top-S). Note that this is simpler than a knapsack problem (stated in line 305) as there is no constraint on the capacity, just on the number of \"items\" to be selected. In the context of the proof, some elements are unclear (i.e., what does the intersection between $I_i$ and $I_j$ mean?) or not accurately defined (e.g., as per Eq. (5), $s_l$ is not bounded between [0,1], thus its sum could be greater than the number of layers in $I_l$).\n\n7. The introduction heavily stresses the role of updating the vision encoder (contribution 2, lines 101-102 and 117-118). In Tables 5, 14, 15, and 16, the results are counter-intuitive, as often the best (or comparable) results are achieved when tuning the language encoder (something that has already been studied in [f]). The analyses of the results in lines 480-497 also confirm the efficacy of updating the language side. This makes the message from the introduction and the experiments contradict each other: it would be better to clarify in the introduction that LoRSU is a general approach and that updating the vision encoder is not essential for achieving good results.\n\n**Minors:**\n- The claim \"unseen domains\" for CLIP (line 49) is hard to make, as CLIP has been exposed to a huge amount of data and might have been exposed to virtually all domains, though with different frequency. It would be better to replace \"unseen\" with \"rare\".\n\n- Line 102 states that the method updates \"the vision encoder [...] specifically on data where CLIP fails\". 
This is slightly inaccurate, as the method does not account for errors of the model in the most common sense but rather takes a dataset as input (where CLIP potentially does not work well) and applies adapters there: the method per se has no notion of \"data where CLIP fails\" and could be applied to any dataset given as input. This might be clarified.\n\n- Line 146: LoRA is written incorrectly (it is not Low Rank Updates but Low Rank Adapters).\n\n- Line 248: in the very last formula of the line, the subscript should be \"k\" and not \"q\" for W, as the gradient refers to the keys.\n\n- Line 249: I could not find the definition of $\\tilde{W}_o$. \n\n\n**References:**\n\n[a] Tristan Thrush et al., \"Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality\", CVPR 2022.\n\n[b] Mert Yuksekgonul et al., \"When and why vision-language models behave like bags-of-words, and what to do about it?\", ICLR 2023.\n\n[c] Cheng-Yu Hsieh et al., \"SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality\", NeurIPS 2023.\n\n[d] Yunhua Zhang et al., \"Low-Resource Vision Challenges for Foundation Models\", CVPR 2024.\n\n[e] Shengbang Tong et al., \"Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs\", CVPR 2024.\n\n[f] Xiaohua Zhai et al., \"LiT: Zero-Shot Transfer with Locked-image Text Tuning\", CVPR 2022."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The author might consider doing a significance analysis on tables that show the main comparison;\n2. The author should include LoRA-V to the relevant comparisons.\n3. The proof in section 4.1 is a bit redundant. The author might instead want to try to theoretically justify the deviate from the “local minima” (or solution space) given by the full gradient is small with the proposed method."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The motivation is clear in that the paper demonstrates a clear case in which the CLIP encoder fails and also the downstream LLM.\n2. The method seems to be highly efficient, especially when fine-tuning only the CLIP encoder without updating the LLM. It seems to give even higher performance with much less cost on computation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper shows the negative impact of CLIP encoder on downstream LLM, when the data is out-of-domain and hard for CLIP. The paper proposes a method that combines LoRA, SPU and selection of weights with large gradient to efficiently and effectively fine-tune the CLIP encoder, so that the problem can be mitigated."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks novelty in the methodology. The proposed method is a straightforward combination of LoRA, SPU, and selection of weights (attention heads) with large gradient, without any interaction between these components. Meanwhile, the evidence of its effectiveness seems to be not strong enough, as is discussed in the following. The author might provide more evidence for the effectiveness of this straightforward method.\n2. Some experimental evidence needs more justification:\n 1. In Table 3 and also other tables, the difference between methods seems to be small (especially between SPU and LoRSU). The author might consider doing a significance analysis.\n 2. In Figure 2 and table 13, it seems that just fine-tuning vision encoder is an efficient and effective strategy for LoRSU. However, the author should also include LoRA-V to have a fair comparison (also in Table 5). Also, the author should point out which version of LoRSU (L/L+/V) is used in the comparisons in Table 3 and other tables.\n3. I personally think the section 4.1 is a bit redundant in that the proof is very straightforward and can be described just intuitively: since it basically tries to prove that selecting the largest components give us the biggest sum of components. The author might instead want to try to theoretically justify the deviate from the “local minima” (or solution space) given by the full gradient is small with the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024adaptive,\ntitle={Adaptive Vision Encoders: Balancing Efficiency and Robustness in Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yyIHdaSDUU},\nnote={under review}\n}"
},
"abstract": {
"value": "Vision-language models (VLMs) demonstrate impressive capabilities in visual question answering and image captioning, acting as a crucial link between visual and language modalities. However, existing open-source VLMs rely heavily on pretrained vision encoders, such as CLIP. Despite CLIP’s robustness across diverse domains, it still exhibits significant image understanding errors. These errors propagate to the VLM responses, resulting in sub-optimal performance. In our work, we propose an efficient and robust method for updating vision encoders within VLMs. Our approach selectively and locally updates the model parameters, leading to substantial performance improvements on data where previous mistakes occurred, while maintaining overall robustness. We demonstrate the effectiveness of our method during offline and continual few-shot updates, simulating a model editing regime for VLMs. While our method also scales efficiently and effectively to adapting the language model (LLM) component of the VLM, we show that separately updating the vision encoder can be a very efficient alternative. This approach improves VLM performance with less than 10x the compute resources required for updating the LLM. Our method is also supported by theoretical justifications on the parameter selection strategy."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"large vision-language models",
"multimodal learning",
"continual learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2ef90d61839a5a0ee2c7af422b86b7fa5a73a860.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Adaptive Vision Encoders: Balancing Efficiency and Robustness in Vision-Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
yzloNYH3QN | Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers | main | Active | Large Language Model;Information Retrieval | applications to computer vision, audio, language, and other modalities | 3;5;6;6 | 4;4;4;5 | 2;3;3;3 | 3;2;3;3 | 3;3;3;3 | 5 | 4.25 | 2.75 | 2.75 | 3 | 0.471405 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. A listwise LLM-based ranker typically requires O(N) forward calls because it often assumes that the number of candidate documents, N, is too large, exceeding the LLM's context limit. This necessitates the use of a sliding window for multiple forward passes. Given the same number of documents, how does the proposed ICR achieve this with only two forward passes? If RankGPT requires O(N), I believe the proposed ICR would also require O(N) with the same LLM.\n2. Since RankGPT is the only used baseline, the TREC 19 and 20 datasets tested in the RankGPT paper should also be included in the experiments.\n3. ICR utilizes attention scores from all transformer layers. Could you provide results from an ablation study showing the outcome when using only the attention scores from the final layer?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The writing of this paper is well-crafted, allowing readers with relevant backgrounds to quickly follow and provide feedback.\n2. The proposed method is versatile and applicable, making it suitable for use with open-source LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors focus on adopting large language models (LLMs) as zero-shot rerankers. I believe this is crucial because if we also need to train the LLMs as rerankers, there would be no difference from the previous framework. The authors propose leveraging the intrinsic attention scores to aggregate into a document score, along with a standard debias score. ***The authors claim that the proposed method operates in O(1) LLM forward passes (This is a point I am skeptical about and would like the authors to clarify in their discussion).*** I will first assign a threshold score and then adjust it based on the authors' rebuttal."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The claim of O(1) LLM forward passes raises significant questions. Please refer to my Question 1 for a detailed explanation, as this will determine the overall quality of the paper.\n2. A limitation of this method is that ordinary users cannot implement it using advanced commercial large language models, especially when compared to RankGPT. However, I believe this is due to commercial factors rather than technical ones.\n3. When only comparing against a single baseline, RankGPT, the classic datasets TREC 19 and 20 are notably missing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The analysis in Figure 4 lacks details on how results would change if the document input order were adjusted. Additionally, if the document set contains similar documents, would these similar documents receive higher weights?\n\n2. In Section 3.3, how does the method address biases introduced by documents of varying lengths?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The problem is important, and the overall writting is clear. \n2. The proposed method is easy to follow and demonstrates effectiveness on two public LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel re-ranking method based on LLMs called ICR. ICR first aggregates the attention weights received by each document token from all query tokens. To mitigate intrinsic biases of LLMs, ICR calibrates the ranking scores by subtracting the attention scores obtained by using a content-free query. ICR is evaluated on both single-hop and multi_-hop re-ranking tasks using open-source LLMs, Mistral and LLaMA 3.1."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My primary concern is the insufficient comparison with other methods and a lack of depth in experimental analysis.\n\n1)Although the paper compares ICR with RankGPT, highlighting improvements, it would be strengthened by comparisons with more methods, especially other zero-shot listwise methods. \n\n[1] RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models.\n\n[2] A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models.\n\n2) The paper could benefit from additional analysis to deepen the understanding of ICR’s behavior. First, for open-source LLMs, it would be helpful to provide a clear comparison between ICR and fine-tuning (FT) methods, highlighting the performance gap and cases where fine-tuning might be preferable. Second, the paper does not analyze the sensitivity of the proposed algorithm to changes in the input document order. It would be valuable to examine how much the re-ranking results fluctuate when the order of input documents is adjusted."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In line 216, \"We use N/A as a calibration query\". I was wondering why use this as the special token, how about NA or None or blank space or <>? Could these be alternatives?\n2. In line 226-227, \"the final ranking score sd_i, which measures the change of attention weights that each document receives when the query changes from the content-free calibration query to the actual query\". I was wondering whether sd_i can be considered as distance between the actual query and calibration query? It would be better to have more high-level intuition and explanation about this.\n3. In line 233-234, \"since we place the query tokens at the end of the re-ranking prompt, ICR can share the KV cache of document tokens when computing s_{di, Q} and s_{di, Qcal}\". I was wondering why the shared KV cache is in effective when you put query tokens at end? I didn't get the point here.\n4. In line 297, \"ICR's performance advantage is more prominent on Mistral, which has weaker instruction-following capabilities...\". I was wondering what's the advantages of the proposed method when instruction following is involved? Whether the proposed method can better leverage important information in the query? I'm looking forward to more explanation here.\n5. In ablation study/discussion, I'd love to see how the overall distribution looks like in terms of calibration scores."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The motivation is \"whether autoregressive generation necessary and optimal for LLMs to perform re-ranking?\", which is quite interesting and worth discussing. \nThe proposed method is technical sound and is effective on various datasets of re-ranking tasks. \nThe paper is well-written and easy to follow.\nThe experiments and discussions are thorough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed in-context re-ranking (ICR), an efficient re-ranking method based on the attention distributions of LLMs. This method can achieve better performance compared to RankGPT while maintaining lower latency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper lacks more high-level intuition and explanations especially about why the proposed method works intuitively. \nWhat are the advantages and disadvantages of the proposed methods intuitively.\nWhy the proposed method works well although the model's instruction following ability is poor?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) UPR is a popular unsupervised reranking method that ranks documents based on the log-likelihood of generating a query from the input document. How does ICR's performance compare to that of UPR?\n\nSachan, Devendra Singh, et al. \"Improving passage retrieval with zero-shot question generation.\" arXiv preprint arXiv:2204.07496 (2022).\n\n2) Are the documents randomly shuffled before concatenation? To what extent does document order impact ranking performance?\n\n3) Why is the top 20 instead of the top 100 utilized for multi-hop evaluations?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well-written and easy to follow.\n2. Extensive experiments, including nine single-hop and three multi-hop datasets, were conducted to prove the improvements over unsupervised baseline RankGPT\n3. Ablation studies were conducted to prove the effectiveness of calibration and aggregation. \n4. Clear analyses were presented to help explain why calibration helps and what re-ranking signals the proposed method captures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an unsupervised ranking method that leverages the attention scores of query tokens over each document token. Extensive experiments, including nine single-hop and three multi-hop datasets, were conducted to prove the improvements over unsupervised baseline RankGPT. Ablation studies were conducted to prove the effectiveness of calibration and aggregation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) Missing important unsupervised baseline UPR:\n\nSachan, Devendra Singh, et al. \"Improving passage retrieval with zero-shot question generation.\" arXiv preprint arXiv:2204.07496 (2022).\n\nUPR ranks documents based on the log-likelihood of generating the ground-truth query given the document. Although it is less efficient than ICR, UPR operates in an unsupervised manner and should be compared with ICR.\n\n2) The impact of document order has not been studied. ICR ranks all documents within the context, but the paper does not specify how documents are concatenated. Are they randomly shuffled before concatenation? To what extent does document order impact ranking performance?\n\n3) For single-hop evaluations, only the weaker BM25 retriever is applied. It remains unclear whether ICR can enhance the performance of stronger retrievers."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose an efficient LLM-based re-ranking method that outperforms RankGPT while only requiring two forward passes without specialized training."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024attention,\ntitle={Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=yzloNYH3QN},\nnote={under review}\n}"
},
"abstract": {
"value": "Information retrieval (IR) systems have played a vital role in modern digital life and have cemented their continued usefulness in this new era of generative AI via retrieval-augmented generation. With strong language processing capabilities and remarkable versatility, large language models (LLMs) have become popular choices for zero-shot re-ranking in IR systems. So far, LLM-based re-ranking methods rely on strong generative capabilities, which restricts their use to either specialized or powerful proprietary models. Given these restrictions, we ask: is autoregressive generation necessary and optimal for LLMs to perform re-ranking? We hypothesize that there are abundant signals relevant to re-ranking within LLMs that might not be used to their full potential via generation. To more directly leverage such signals, we propose in-context re-ranking (ICR), a novel method that leverages the change in attention pattern caused by the search query for accurate and efficient re-ranking. To mitigate the intrinsic biases in LLMs, we propose a calibration method using a content-free query. Due to the absence of generation, ICR only requires two ($O(1)$) forward passes to re-rank $N$ documents, making it substantially more efficient than generative re-ranking methods that require at least $O(N)$ forward passes. Our novel design also enables ICR to be applied to any LLM without specialized training while guaranteeing a well-formed ranking. Extensive experiments with two popular open-weight LLMs on standard single-hop and multi-hop information retrieval benchmarks show that ICR outperforms RankGPT while cutting the latency by more than 60% in practice. Through detailed analyses, we show that ICR's performance is specially strong on tasks that require more complex re-ranking signals, such as handling contextualization and contradiction between the query and passages, as well as information integration across multiple passages. 
Our findings call for further exploration on novel ways of utilizing open-weight LLMs beyond text generation."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Model",
"Information Retrieval"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7933247a0eeca5352564b4c9939161a9fbd07cf7.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z0B7A6Dh1H | High Probability Contextual Bandits for Optimal Dosage Selection | main | Active | Linear Bandits;Dosage Selection;Contextual Bandits | applications to physical sciences (physics, chemistry, biology, etc.) | 3;5;6;8 | 4;4;3;4 | 2;3;4;3 | 2;3;3;3 | 2;3;4;4 | 5.5 | 3.75 | 3 | 2.75 | 3.25 | -0.160128 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "NA"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Could the authors please provide justifications of the suggested efficacy and toxicity function by providing real-word examples?\n2. In the regret, is it guaranteed that alpha_t is less than alpha_t^\\star?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This work introduces a new area of research: contextual bandits applied to dosage optimization while accounting for a (toxicity) constraint. The reviewer believes that this framework has broad practical relevance, making it a valuable and worthwhile topic to explore further."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work investigates linear contextual bandits, with a brief exploration of a nonlinear case at the end, in the context of optimal dosage selection under a constraint function. A UCB-type algorithm is proposed to minimize regret while ensuring that the constraint is satisfied with high probability. Theoretical results are presented, and the approach is validated through numerical experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The main concern raised by the reviewer is related to the assumptions. Specifically, the assumptions for the efficacy generation $R_t=\\alpha_t <X_t,\\theta^*> + \\alpha_t\\xi_t^r$ and $C_t=\\alpha_t <X_t,\\mu^*> + \\alpha_t\\xi_t^c$ are not intuitive to the reviewer. \n\nAdditionally, the presentation of the paper needs improvement for better readability, and the theoretical results should be expanded and explained in greater detail.\n\nThere are some minor comments:\n$\\gamma_\\alpha$ is not defined. This should be stated in Assumption 1.\nL134-138: not clear\nL159-170: Thought X and Y are introduced, they are not explained. Also, the meaning of p_k and q_k should be stated.\nL171-176: For the reviewer, it is hard to find some connection between this paragraph and the previous one.\nThe definition of K(x) should be more clear.\nThe citation format needs correction, and the language throughout the paper should be made more formal."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please see above"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is well-written and solves a relevant problem. The paper is well-written and organized and is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper considers the problem regret version of the optimal dose finding with constraints that need to be satisfied with high probability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper claims that is it the first to consider high-probability constraints on costs. I do not agree, since https://arxiv.org/pdf/2401.08016 (\"Contextual Bandits with Stage-wise Constraints\") seems to consider the setting with constraints that need to be satisfied with high probability as well. It is true that they need the knowledge of safe action, but I feel that data is benign in this application where there is likely to be a historical data set.\n\n2)The motivation of the regret formulation is not clear. What does high regret mean ? Suppose the safety constraints were easy to satisfy, can then the algorithm just give out the maximum dosage? to make regret negative (Assuming the rewards are positive)\n\n3)The techniques overall are standard in the linear function case. Maybe in future versions, the authors can specifically describe the main challenges faced in the proofs as a separate section"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "An additional question to the weakness above, what does \"stage-wise constraint\" mean in the abstract? I can not see any explanation on this in the main text."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The two-objective problem in dosage finding formulated by the paper is meaningful and practical relevant.\n2. The way that how the authors adopt the idea of UCB looks interesting. The technical results are solid.\n3. The paper is well written and relatively easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the optimal dosage finding problem with two objectives, maximize the drug’s efficacy and ensuring the toxicity especially from a high probability perspective. The authors adopt a linear contextual bandit formulation with stage-wise constraints, and design an efficient algorithm based on the idea of UCB. They establish a regret bound for this high-probability constrained approach, ensuring sublinear regret over time, meaning the model becomes more accurate as it learns."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Formulation. I am not sure whether it is common to think that the efficacy and the toxicity are linear with the dosage. From my experience in clinical trials, it is not usually the case. \n2. Technical contribution. I like the authors' way of adpoting UCB, but the techniques seem very standard, mainly based on the repetitive use of the inequality from Abbasi-Yadkori et al., (2011). Is there any technical contribution that the authors want to highlight?\n3. A very minor comment: I do not think you need to have a new paragraph only to say like \"Proof. The proof is in A.3.\" You can save a lot of space to present more interesting results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "- Back to the issue of not having an initial safe dosage... why can't we just use the minimum dosage as the initial safe dosage?\n- What is the theoretical novelty in deriving Theorem 5.1? How is it different than prior proofs of LinUCB or cost-expectation-based solutions?\n- The simulations did not have any comparison to prior solutions. Can you add such comparisons? In particular, you have criticized prior solutions as only caring about expected cost constraint -- then how bad are they when you count step-wise constraint violation? What is the tradeoff of regret and constraint violation for all methods?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The paper pushes the boundary of dose-finding methodology designs by considering the per-step toxicity constraint in a bandit-based solution. This setting has practical relevance and the solution thus can move the MAB-type methodology to be useful in practice. This aspect itself is very important not only to the ML community but probably more importantly, to the clinical trial methodology design community where safety and efficiency have been two of the critical considerations. \n+ The high probability guarantee of cost violation is a novel component of this work. Prior works have only considered the average cost constraint, which as the authors argued may not be useful in practice. The algorithmic and theoretical aspects of linear contextual bandits are also of interest to other problems.\n+ The trick to get around the problem of not having an initial safe dosage is clever.\n+ The work makes some effort to extend the design to non-linear functions, with some initial results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel approach for determining optimal drug dosages using a linear contextual bandit model with stage-wise constraints. The key contribution is an algorithm that controls the toxicity of the administered dosage with high probability per step, rather than just in expectation as done in prior literature, thus addressing safety concerns in clinical settings. This method maximizes drug efficacy while minimizing the risk of overdose by ensuring toxicity remains below a threshold. The paper establishes theoretical regret bounds and demonstrates the algorithm's effectiveness through synthetic experiments, highlighting its potential in adaptive dose-finding scenarios."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- One aspect that might be interesting to consider is that the toxicity tolerance threshold is often varying across patients. A more practical model would be to set $\\tau$ also as a function of $X_t$.\n- The argument in Section 4 relies on using $L$ and $S$ to construct the initial safe interval. This depends on the prior knowledge of accurate $L$ and $S$, which is, in some sense, reflecting the initialization of the trial. So fundamentally this is not surprising.\n- The algorithm design part in Sec. 4 can be improved by more clearly articulating the differences to LinUCB."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Dosage Selection problem modeled by Linear Contextual Bandits under high probability constraints."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024high,\ntitle={High Probability Contextual Bandits for Optimal Dosage Selection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z0B7A6Dh1H},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-Armed Bandit ($\\textit{MAB}$) formulations are commonly used to model the problem of $\\textit{Optimal Dose-Finding}$.\nHowever, in many practical applications, it is necessary to receive data about the patient’s current state and then administer a drug dosage adapted to that state. \nTo overcome this issue, we adopt a linear contextual bandit formulation with stage-wise constraints.\nAt each round, the learner selects a dosage and receives both a reward signal and a cost signal.\nThe learner’s goal is to maximize the drug's efficacy—captured as the expected cumulative reward—while ensuring that the toxicity, reflected by the cost signal, remains below a known threshold.\nSatisfying the cost signal constraint only in expectation can be dangerous, as it may lead to over-dosage complications in certain cases.\nTo address this issue, we introduce a novel model that controls the realization of the cost signal with high probability, in contrast to previous works where control was only applied to the expected cost signal.\nOur algorithm follows the $\\textit{UCB}$ approach, for which we establish a regret bound over \n$T$ rounds and run numerical experiments.\nWe further generalize our results to $\\textit{non-linear}$ functions and provide a regret bound in terms of the $\\textit{eluder dimension}$, a measure of function class complexity."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Linear Bandits",
"Dosage Selection",
"Contextual Bandits"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f40164e39b773241a92061a2a1b5cc868aed3bdc.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to physical sciences (physics, chemistry, biology, etc.)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/45771a9127d5c034caed7d07a26c0e3c651673b3.zip"
},
"title": {
"value": "High Probability Contextual Bandits for Optimal Dosage Selection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z0hUsPhwUN | Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption | main | Active | image compression;vqgan;generative compression model;multi-grained representation | applications to computer vision, audio, language, and other modalities | 5;5;6;6;6 | 4;5;5;5;5 | 3;2;3;3;3 | 3;1;3;3;2 | 2;2;3;3;3 | 5.6 | 4.8 | 2.8 | 2.4 | 2.6 | 0.612372 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Control-GIC combines classical coding principles with VQGAN to achieve controllable generative compression across various bitrates with a unified model.\n2. The framework allows for highly flexible and controllable bitrate adaption, which is a significant advancement over existing methods.\n3. Unlike other methods that require training multiple models for different bitrates, Control-GIC can adapt to various bitrates with a single model, reducing computational costs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper titled \"ONCE-FOR-ALL: CONTROLLABLE GENERATIVE IMAGE COMPRESSION WITH DYNAMIC GRANULARITY ADAPTION\" introduces Control-GIC, a framework for controllable generative image compression. It addresses the challenge of flexible rate adaption in image compression by leveraging a VQGAN framework that encodes images as variable-length codes . The framework correlates local image patch information density with granular representations, allowing for fine-grained bitrate control. It includes a granularity-informed encoder, a statistical entropy coding module, and a probabilistic conditional decoder. The experiments demonstrate that Control-GIC outperforms state-of-the-art methods in terms of flexibility, perceptual quality, and compression efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. More comparion with other GIC methods need to be provided.\n2. The novelty is limited compared to other VQGan based GIC method"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why is the mask ratio set to 50%, 40%, 10% for fine, medium, and coarse features in the training setting? Please provide further justification for this choice.\n2. In Mao et al.'s papers [2, 3], the bitrate used in VQGAN is significantly lower, ranging from <0.05 bpp to ≤0.03 bpp, while in GIC, the bpp ranges from 0.1 to 0.6 bpp. Could you explain why the bitrate range cannot be lower in Control-GIC, and if it is possible to extend the model to support lower bitrates?\n\n\n\nReferences:\n[1] Qi Mao, et al, Extreme image compression using fine-tuned vqgan models.DCC, 2024.\n[2] Naifu Xue, Qi Mao, et al, Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer.ICME.2024."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed Control-GIC introduces a unified model that allows dynamic bitrate adjustment, which effectively solves the inefficiencies faced by existing models that need multiple fixed-rate versions.\n2. The granularity-informed encoder and probabilistic conditional decoder are well-designed to achieve efficient encoding and high perceptual fidelity.\n3. Experimental results show superior performance over state-of-the-art methods, demonstrating both flexibility and effectiveness in compression."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Control-GIC, a controllable generative image compression framework aimed at addressing the limitations of existing generative image compression methods in achieving flexible bitrate adjustment. Built upon the VQGAN framework, Control-GIC incorporates multi-granularity encoding mechanisms and a probabilistic conditional decoder to achieve flexible bitrate control and high-quality image reconstruction. Both qualitative and quantitative results demonstrate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper lacks sufficient details on how the features are divided into different granularities in Section 3.1.\n2. The DIV2K comparisons in Figure 4 do not include evaluations against important baselines like VVC, M&S, and other methods (presented in Figure 3), which limits the completeness of the analysis.\n3. The paper does not compare Control-GIC with other VQ-based methods, such as GLC [1], Mao et al. [2], and UIGC [3], which would provide a better context for understanding the model's relative performance.\n4. It is suggested that the authors add more comparisons with results from other datasets featuring different image sizes to enhance the robustness, such as CLIC-dataset.\n\n\nReferences:\n[1] Jia Z, Li J, Li B, et al. Generative Latent Coding for Ultra-Low Bitrate Image Compression[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 26088-26098.\n[2] Qi Mao, et al, Extreme image compression using fine-tuned vqgan models. DCC, 2024.\n[3] Naifu Xue, Qi Mao, et al, Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer.ICME.2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "see the weakness part"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper proposes to flexibly determine a proper allocation of granularity for patches, supporting dynamic adjustment for VQ-indices and make the framework capable of fine-grained bitrate adaptation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper targets at addressing the challenge of rate adaption proplem for generative image compression and proposes a controllable generative image compression framework. The paper represents the image patches of sequential spatially variant VQ-indices to support precise variable rate control and adaption. A non-parametric statistical entropy coding is devised to encode the VQ-indices losslessly.\nA probabilistic conditional decoder is proposed to aggregate historic encoded multi-granularity representations, achieving realism improvements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In Figure 3, the proposed method shows worse performance than CDC for DISTS, and is also worse than CDC at the high bitrate range. Could you give some analysis? \n\n- In Figure 4, why do you compare only with BPG, CTC, and HiFiC, instead of aligning with the methods used in Figure 3?\n\n- In Figure 3 and Figure 4, most metrics are nearly reaching saturation when the bpp increases. Is there obvious difference for the visualization quality when the bpp increases at the high bitrate range? For example, in Figure 6, when the bpp increases, I do not see obvious enhancement between r2=40%(bpp=0.3864) and r2=32.9%(0.4171).\n\n- Visualization analysis about the influence of the granularity of image patches. Will increasing the image patch size lead to block artifacts in the reconstructed images? And can making the image patch size smaller solve some artifacts in VQ based methods, such as the unsatisfactory artifacts for small faces?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How are the masks $m_1$, $m_2$, and $m_3$ obtained? The main text does not seem to provide a clear calculation method for these.\n2. If compression is required at a specific bitrate, how should $r_1$, $r_2$, and $r_3$ be determined?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The authors’ granularity-informed encoder effectively leverages the information density distribution to distill image patches into a hierarchical structure with three levels of granularity. This approach enables compression with a controllable encoding rate, providing a novel solution for adaptable image compression.\n\n2. The paper’s codec framework is lightweight and demonstrates relatively fast encoding and decoding speeds, making it more suitable for practical applications where computational efficiency is a priority."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption introduces Control-GIC, a flexible framework for high-quality image compression that allows fine-grained bitrate control without needing multiple models. Built on a VQGAN foundation, Control-GIC encodes images as variable-length sequences, adapting bitrate based on local image information density, which allows for effective compression adjustments to meet different content complexities. The model includes a granularity-aware encoder that assigns varying levels of detail across image patches, and a probabilistic conditional decoder that reconstructs images with high perceptual quality by aggregating multi-scale features. Experimental results show that Control-GIC outperforms recent state-of-the-art methods in perceptual quality, flexibility, and efficiency, achieving this with a single unified model across a wide bitrate range."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental results show mixed outcomes compared to existing baselines like HiFiC, without demonstrating a clear advantage.\n2. The paper should consider comparing with MS-ILLM [1], which uses a similar VQ-VAE structure, to better position its contributions.\n3. Tests on the CLIC 2020 dataset are missing, which would help validate the model's robustness across diverse datasets.\n4. Section 3.2 is overly detailed and could be streamlined for clarity and conciseness.\n5. Section 3.3 appears to be a direct application of existing methods, which could benefit from additional elaboration or innovation.\n\n[1] Muckley M J, El-Nouby A, Ullrich K, et al. Improving statistical fidelity for neural image compression with implicit local likelihood models. International Conference on Machine Learning. PMLR, 2023: 25426-25443."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "There are no ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the above weakness."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. A comprehensive explanation on adapting VQ-GAN into a rate-adaptive perceptual codec.\n\n2. Highly flexible and controllable bitrate adaptation.\n\n3. Well-conducted ablation study.\n\n4. Open-sourced code! This is fantastic, and I want to thank the authors for their commitment to sharing this."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel framework for controllable generative image compression (Control-GIC). The proposed Control-GIC integrates a granularity-aware encoder to enable precise variable rate control and adaptation, a non-parametric statistical entropy coding method for lossless encoding of VQ-indices, and a probabilistic conditional decoder to reconstruct hierarchical granular features. Experimental results on the Kodak and DIV2K datasets demonstrate that Control-GIC not only delivers strong perceptual compression performance but also provides highly flexible and controllable bitrate adaptation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of comparison with the latest state-of-the-art (SOTA) perceptual codec, MS-ILLM [1]. It appears that MS-ILLM outperforms both HiFiC and Control-GIC in terms of compression performance.\n\n2. Control-GIC utilizes a mask for multi-scale encoding, but the method for generating the masks $m_1, m_2$ and $m_3$ during training and inference is not clearly explained. Please provide a more detailed description of this process.\n\n3. Misuse of capitalization: 'The Encoding and decoding' on line 432 should be corrected to 'The encoding and decoding.'\"\n\n[1] MJ Muckley, A El-Nouby, K Ullrich, H Jegou, J Verbeek. Improving Statistical Fidelity for Neural Image Compression with Implicit Local Likelihood Models. In ICML, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A unified image compression model capable of fine-grained variable bitrate adaption with VQGAN."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024onceforall,\ntitle={Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z0hUsPhwUN},\nnote={under review}\n}"
},
"abstract": {
"value": "Although recent generative image compression methods have demonstrated impressive potential in optimizing the rate-distortion-perception trade-off, they still face the critical challenge of flexible rate adaption to diverse compression necessities and scenarios. To overcome this challenge, this paper proposes a $\\textbf{Control}$lable $\\textbf{G}$enerative $\\textbf{I}$mage $\\textbf{C}$ompression framework, $\\textbf{Control-GIC}$, the first capable of fine-grained bitrate adaption across a broad spectrum while ensuring high-fidelity and generality compression. We base $\\textbf{Control-GIC}$ on a VQGAN framework representing an image as a sequence of variable-length codes ($\\textit{i.e.}$ VQ-indices), which can be losslessly compressed and exhibits a direct positive correlation with the bitrates. Drawing inspiration from the classical coding principle, we correlate the information density of local image patches with their granular representations. Hence, we can flexibly determine a proper allocation of granularity for the patches to achieve dynamic adjustment for VQ-indices, resulting in desirable compression rates. We further develop a probabilistic conditional decoder capable of retrieving historic encoded multi-granularity representations according to transmitted codes, and then reconstruct hierarchical granular features in the formalization of conditional probability, enabling more informative aggregation to improve reconstruction realism. Our experiments show that $\\textbf{Control-GIC}$ allows highly flexible and controllable bitrate adaption where the results demonstrate its superior performance over recent state-of-the-art methods."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"image compression",
"vqgan",
"generative compression model",
"multi-grained representation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f55dd9f1455013db893c3cb065517dbe2f7e1e62.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e2f7de5cd564e19d5a32c7af31bfb90630938b92.zip"
},
"title": {
"value": "Once-for-All: Controllable Generative Image Compression with Dynamic Granularity Adaption"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1Jq1PLQWs | Dueling in the Dark: An Efficient and Optimal $O(\sqrt{T})$ Mirror Descent Approach for Competing against Adversarial Preferences | main | Active | Large Language Models (LLMs);Reinforcement Learning from Human Feedback (RLHF);gradient descent-based algorithm;theoretical foundations;active no-regret learning;preference feedback;trajectory preferences;multi-way feedback;human-AI alignment;practical impact. | learning theory | 5;5;5;6;6;6 | 4;3;2;2;4;3 | 3;2;3;3;3;3 | 3;2;2;3;3;2 | 3;3;3;3;3;3 | 5.5 | 3 | 2.833333 | 2.5 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Line 458: \"in the convergence proof of ??\" lacks the reference.\n\n2. Although the bound for top m ranking is better than pairwise comparison, the query times of the two methods are different. Can the authors compare the total query complexity of the two methods to achieve the same suboptimality?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Algorithmic Innovation: The authors introduce an Online Mirror Descent (OMD)-based algorithm, named Double-Scrible, that optimally handles adversarial preferences using only weak preference feedback, unlike previous models that rely on value feedback.\n\n2. Performance Guarantees: This algorithm achieves optimal regret bounds and includes generalizations to batched feedback (handling multiple preference queries at once) and multi-way preference feedback (handling partial ranking).\n\n3. Efficiency: The approach offers computational efficiency, particularly suitable for high-dimensional and adversarial environments, making it more practical for real-world AI applications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper \"Dueling in the Dark: An Efficient and Optimal Mirror Descent Approach for Online Convex Optimization with Adversarial Preferences\" addresses the challenge of using human preference feedback in reinforcement learning, particularly with applications for AI alignment in large language models (LLMs). The main focus is on creating an efficient online gradient descent algorithm capable of handling adversarially changing preferences, providing theoretical guarantees for performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. For the formulation, there is already some work formulating the problem like a dueling bandit (Xiong et al. (2024), [2]) and even dueling bandit with human feedback [1]. I think this work needs to provide more discussion of the comparison with those works. Especially with [1], because they both consider adversarial preference feedback.\n\n2. One of the contributions that the authors highlight is the computational efficiency of their algorithm, but they don't carry out experiments or even simulations to show the computational efficiency of their algorithm. It is questioned whether their algorithm can be implemented. Hence, the authors are suggested to provide some experiments.\n\n\n[1] Di Q, He J, Gu Q. Nearly optimal algorithms for contextual dueling bandits from adversarial feedback[J]. arXiv preprint arXiv:2404.10776, 2024.\n\n[2] Wang Y, Liu Q, Jin C. Is RLHF More Difficult than Standard RL?[J]. arXiv preprint arXiv:2306.14111, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "It's a theoretical paper and don't have ethics concerns."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper assumes linear model. Is it possible to extend it to the nonlinear setting? And is there a way to circumvent the eigenvalue decomposition?\n2. Since the proposed algorithms are online, I wonder how the algorithms facilitate exploration."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper seems to be the first to address the problem of adversarial online linear optimization with preference feedback, and the analysis is very comprehensive. Some techniques such as the gradient estimation for preference feedback seem novel to me.\n2. The algorithms/assumptions and Theorems are clearly stated."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an efficient online mirror descent algorithm called \"Double-Scrible\" for optimizing Large Language Models (LLMs) using human preference feedback, where the algorithm only observes relative preferences between pairs of outputs rather than direct feedback."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Missing related work section? And adding some intuition on algorithm design would make it easier for the readers to understand.\n2. The paper assumes linear model, and the algorithms require eigenvalue decomposition, which makes the algorithms hard to scale up."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Minor comment: Undefined reference at L458."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The paper is well written. Both the problem and algorithms are rigorously explained, and the notation is intuitive. I did not check the proofs, but the claims in the paper are sound. I appreciate various practical extensions of the algorithm to batched and ranked settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Double-Scrible, a gradient descent algorithm for online linear optimization with adversarial preferences applicable to the online alignment of LLMs. Using mirror descent with a self-concordant barrier function, it achieves near-optimal regret bounds and extends to batched and partial ranking feedback."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The algorithm is not empirically evaluated. Without any experiments, it may be difficult for the readers to apply the algorithm in practice as there is no reference implementation. Furthermore, aligning LLMs is a difficult domain, and theoretically sound algorithms may not necessarily have an effect as the human preferences are non-linear and context-dependent, the number of LLM arms grows exponentially with the length of the sentence, etc. Even small-scale experiments using, e.g., GPT-2, can increase the impact of the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you provide more examples on why adversarial human preference is of importance in RLHF applications, to motivate your theoretical framework?\n\n2. Can you provide numerical evidence showing the computation efficiency of your proposed algorithm? \n\n3. Can you present the regret bound of the three settings in a way that can be fairly compared?\n\n4. In line 347, you claim your runtime requirement is O(dT), it seems does not count the computation complexity of eigen decomposition?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Novelty and Impact: This paper studies the online convex optimization problem with preference feedback, the problem is of importance and directly related to solving RLHF problems in practice. The insight of the proposed algorithm has potential in many RLHF application areas.\n\n2. Theoretical Soundness: For all three feedback settings, i.e., pair-wise preference feedback, batched feedback, and ranking feedback settings, the paper propose gradient based online optimization algorithms which has theoretical regret guarantees matching the lower bound. Overall it is a solid theoretical paper with good contribution.\n\n3. Algorithm Practicality: the proposed algorithm is computationally tractable with gradient based approach, the major computation burden in each iteration seems to come from eigen decomposition."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the online convex optimization problem from preference and ranking feedbacks, which is a simplification of the modern RLHF framework. The paper studies the adversarial online linear optimization problem where the utility of each arm is linear. At each round, the agent plays two arms and the feedback on which arm is better is generated based on the Bradley-Terry model. The goal of the agent is to minimize the utility regret. The paper proposes an online mirror descent algorithm named Double-Scrible for the pair-wise preference setting, with provable regret guarantee matching the lower bound. Generation to batched settings and ranking feedbacks are also studied with theoretically optimal algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Motivation: the motivation of study adversarial human feedback (the linear parameter \\theta_t adversarially change over time) seems weak from the presentation of paper.\n\n2. Comparison: the regret definitions are different for each feedback settings, which makes comparing the bounds across settings unfair. For example, the bound in Theorem 3 accounts for the case where the sample size and the number of human queries are B times larger than the case in Theorem 1 (although the definition of regret in batched setting is averaged). Therefore, the claim that batched comparison and ranking feedback improves algorithm performance is not completely solid, although correct.\n\n3. Computation Efficiency: one of this paper's claim is the proposed algorithm is more computationally efficient compared to confidence based algorithms (UCB or TS based) in the literature, but the evidence is not clear from the paper. Given computation tractability is a major strength of the proposed algorithm, the reviewer feels that numerical evidence should be provided, at least for a simple setup.\n\nTypos: \n\n1. equation 1 seems to be a typo\n\n2. Theorem 3, superscript of regret should be Batched-LogitDB\n\n3. line 458, missing reference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This is a reasonably well written paper that is easy to read despite its highly technical nature. The work addresses a seemingly important problem of OLO with preference feedback. By the author's admission this is the first efficient online gradient descent-based algorithm for this problem with provably optimal performance guarantees. The paper does a good job of considering generalizations of the original dueling bandits framework by incorporating batched responses and ranked preferences."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a mirror descent approach for a variety of online linear optimization (OLO) problems that involve preference feedback. The work presents matching upper- and lower-regret bounds (up to logarithmic factors) for each of the scenarios considered. First, the authors consider adversarial logistic dueling bandits and present the Double-Scrible algorithm with matching $O(\\sqrt{T})$ upper and lower regret bounds. The authors generalize this work to the batched setting and present the BaBle-Scrible algorithm we equivalent matching regret bounds. Finally, the authors show that the top-m ranking feedback setting can be reduced to the batch setting and they present MNL-Scrible with equivalent matching regret bounds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**NOTE** I do not consider myself qualified to adequately review this paper as I am not an ML theorist. I do not have sufficient knowledge of the literature in this space to judge impact / contribution. I have notified the AC to this effect early in the review process but did not receive a response. Nevertheless, below is my best attempt at a critique for this work and I will assign a low confidence score to help calibrate.\n\nRather than do one thing well this paper reaches for a lot of potential contributions that in my opinion lessen the overall impact of the work. In particular, the references to LLMs in the abstract and introduction seem a bit misplaced as it is unclear what the actual connection is to LLMs in this work (marketing?). The authors present three algorithms with regret bound analysis, but do not provide any guidance on implementation or empirical validation, making it impossible to conclude whether the algorithms are effective in practical settings.\n\nOne of my issues in reviewing this work is that I cannot place it in the context of existing work. The paper would benefit from a \"Related Work\" section to help highlight the potential impact of these contributions w.r.t. existing work. \n\nAs a broader comment, ICLR is a conference on \"learning representations\" and it is unclear what connection this work has to representation learning. At first glance it would not seem that this work is well-placed at ICLR. Upon further digging it seems that ICLR has published quite a few works on bandit algorithms and so perhaps the organizers better insight into the fitment of this work to the conference."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See Weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The author proposes a novel algorithm based on the mirror descent method and achieves a near-optimal regret guarantee.\n\n2. The author generalizes the result to several bandit settings with batched datasets and multinomial logit bandits.\n\n3. The paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work studies the linear dueling bandit problem with adversarial binary feedback. The author proposes a novel algorithm based on the Mirror Descent method and achieves a near-optimal regret guarantee. In addition, the author extends the algorithm to deal with batched datasets or multinomial bandits, maintaining a similar regret guarantee."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern is that the author may be overclaiming some contributions.\n\n1. As the author mentioned when discussing the limitations of existing algorithms, prior attempts to adapt them to RLHF either make unrealistic modeling assumptions or are computationally inefficient. The main contribution of this work is claimed to be the development of more computationally efficient algorithms. However, this work focuses only on the logistic linear bandit, which also falls under the category of unrealistic modeling assumptions and does not offer a clear advantage over previous algorithms for logistic linear bandits. Additionally, under this setting, an efficient algorithm [1] already exists, even with adversarial preferences.\n\n[1] Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback\n\n2. The author claims a contribution for providing a lower bound on the regret when learning a logistic dueling bandit. However, a lower regret bound already exists in previous work [2].\n\n[2] Stochastic Contextual Dueling Bandits Under Linear Stochastic Transitivity Models\n\n3. The author mentions that gradient descent algorithms are simple to implement and can seamlessly integrate with modern deep learning frameworks, making these methods computationally efficient. However, the proposed method only focuses on the logistic dueling bandit, and it is unclear how to implement it in modern deep learning frameworks or whether the performance guarantees still hold in such settings. Furthermore, when restricted to the logistic dueling bandit problem, the proposed method does not seem to offer an advantage over UCB or Thompson Sampling (TS) methods."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper introduces a gradient descent-based algorithm with no-regret guarantees for adversarial dueling bandits, which has implications in theoretical understanding of RLHF"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024dueling,\ntitle={Dueling in the Dark: An Efficient and Optimal \\$O({\\textbackslash}sqrt\\{T\\})\\$ Mirror Descent Approach for Competing against Adversarial Preferences},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1Jq1PLQWs},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent developments in Large Language Models (LLMs) have sparked significant attention in Reinforcement Learning from Human Feedback (RLHF), which uses reinforcement learning techniques to optimize a model's performance through human-provided feedback. A simple, widely used, and cost-effective method for gathering human feedback is through relative queries based on human preferences, often modeled using sigmoid utility models. Despite the popularity of sigmoid model-based RLHF algorithms, their theoretical foundations remain underdeveloped as existing algorithms often lack performance guarantees or are limited to small-scale problems due to computationally intractable steps. We address the challenge of developing no-regret learning algorithms for training optimal policy RLHF, and develop the first efficient gradient descent-based algorithm with near-optimal regret guarantees. More technically, we consider the adversarial online convex optimization problem with preference feedback and propose a mirror descent method to obtain a regret of $O(\\sqrt{T})$ over $T$ rounds. The main challenge we are required to solve lies in finding a suitable `gradient-approximation' of the underlying utility functions solely from a binary preference feedback. Following this we extend our results to policy optimization in the RLHF framework with trajectory preferences and design no-regret RL policies using a variant of mirror descent. We also extend our methods beyond pairwise preferences --- to multi-way (batched pairwise) feedback and ranking feedback --- and analyze the trade-off between learning rate with increasing subset size. Our contribution lays the groundwork for a practical gradient descent-based algorithm in RLHF with human preferences. Supported by robust theoretical guarantees, our approach holds promise in the current landscape of developing efficient algorithms for LLMs and addressing human-AI alignment challenges. 
Empirical evaluations validate our theoretical findings."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models (LLMs)",
"Reinforcement Learning from Human Feedback (RLHF)",
"gradient descent-based algorithm",
"theoretical foundations",
"active no-regret learning",
"preference feedback",
"trajectory preferences",
"multi-way feedback",
"human-AI alignment",
"practical impact."
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9b23f950529b9d34cc468e893b3d1bf20f8210fb.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/51c8e10039c74dd4bfcf536a746db10526c0c9c9.pdf"
},
"title": {
"value": "Dueling in the Dark: An Efficient and Optimal $O(\\sqrt{T})$ Mirror Descent Approach for Competing against Adversarial Preferences"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1mLNhWFyY | Gradient Routing: Masking Gradients to Localize Computation in Neural Networks | main | Active | representation learning;modularity;unlearning;reinforcement learning;scalable oversight | alignment, fairness, safety, privacy, and societal considerations | 3;5;5;6 | 4;5;3;4 | 3;2;2;3 | 2;2;3;3 | 3;2;4;4 | 4.75 | 4 | 2.5 | 2.5 | 3.25 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Why does \"absorption\" happen? Is the absorption effect reliable across multiple settings?\n\nQuestions on the robust unlearning experiment\n- In Fig 4(a), why does the validation loss increase with more training steps? Can we see what happens with a more realistic number of finetuning steps (say 1000)?\n- Why does RMU appear to outperform ERA on the 4-stories task? Does it just have overall higher loss?\n- In fig 4(b) it appears that post-finetuning retain set los is also increased. How much of the validation forget loss in ERA and RMU is just due to overall slightly worse models?\n\nIs ERA expected to work when the pretraining model and dataset are big (such as for modern LLMs) and the representations in the original model have good understanding of the forget set?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed method of gradient routing is novel and interesting, and has applications for multiple important problems in safety and interpretability. \n\nImpressively, the proposed method enables forgetting undesirably capabilities even without labelling full forget sets."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new method for training networks called Gradient Routing (GR) with the goal of isolating capabilities to specific subregions of the network by controlling which network subregions are updated by which data points.\n\nThe paper shows applications for gradient routing towards a variety of settings and problems:\n\n- The authors first show that by applying GR to an MNIST-autoencoder, it is possible to isolate the representation of digits 0-4 to the first half of an embedding and digits 5-9 to the second half. Additionally, the authors show that GR can be used for activation steering in language models. \n\n- The authors then propose to use GR for robust unlearning in already learned language models, via an approach called expand-route-ablate (ERA), where pretrained networks are expanded to include new subregions, some already learned capabilities are re-routed to new subregions, and then new subregions are ablated from the model, unlearning these previously learned capabilities from the network. Experiments show that ERA is successful in unlearning topics for small language models, even if only a fraction of \"to-unlearn\" samples are labelled.\n\n- The paper then shows that vanilla GR can be successfully applied to learn a 0.7B language model that lacks specific harmful capabilities such as bioweapon related capabilities.\n\n- Finally, the paper demonstrates an application of GR in reinforcement learning, to learn a policy that avoids certain target squares in a grid using a limited oversight signal to route gradients."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The vanilla GR method requires splitting data into \"retain\" and \"forget\" sets before training begins, and then routing gradients during a potentially long and expensive training process. This limits applicability (except of the ERA method) and raises the question if GR is ever necessary for language model unlearning applications. If we knew the split during pretraining, why would we not simply omit the forget set during pretraining, learning the \"pure\" model considered in the paper?\n\n- The ERA method requires more experiments and analysis/ablations to show why it works -- it is not clear why it is possible to introduce new network units, set low learning rate on old units for forget set samples, ablate the new units, and observe poor performance on forget set samples when running the old units. Is this a one-off result? Does this generalize to more tests and bigger models that may have more significant understanding of forget-set content?\n\n- GR requires isolating each capability to a subset of examples before training and establishing network subregions for each capability, which may be challenging when there are lots of examples or lots of capabilities."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Q1: How the gradient masks are selected / initialized for different tasks / data samples / features? Are there meaningful heuristics that generally work well?\n\nQ2: How to choose the network subregions that gradients should be routed through? Would certain architectures or tasks require a different approach to subregion selection? Have you experienced any achitecture-specific challenges?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper makes a new contribution to the field: GR is a new method to control gradient propagation based on data, enabling modular learning within a network.\n- GR can make neural networks more interpretable by designing which part of the network is responsible for a specific task;\n- GR provides support for targeted unlearning and scalable oversight;\n- Broad applicability of GR (and many interesting questions to spark future work);\n- GR can reduce interference and improve accuracy on specialized tasks;\n- The method helps to control data access in privacy-sensitive scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces gradient routing (GR), which allows localizing computation in neural networks by masking gradients based on data-dependent paths. GR controls gradient flow within specific network subregions, isolating capacities and enabling the model to focus on particular data features / tasks. It promotes modular network design, making it suitable for selective forgetting and scalable oversight in reinforcement learning and language models. GR is evaluated on MNIST, gridworlds, and language tasks to demonstrate that it can improve task-specific performance, support targeted unlearning, and improve model interpretability by maintaining distinct functionalities in different parts of the network."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- GR requires careful selection of mask weights, data subsets, and regions to localize. How should these be chosen? At least in its current form, the method faces difficulties scaling up to larger models.\n- No comparison to similar methods mentioned that can also achieve localization (DEMix and Interchange Intervention Training, both mentioned in the text).\n- Unclear relevance of GR to safety-critical applications mentioned in the paper. Demonstrating benefits in a high-stakes scenario would strengthen the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Clarification of Contributions\n\n 2. Choosing the Routing Strategy:\nSince gradient routing relies on predefined, data-dependent masks, an essential question is how to select an effective routing strategy. Could the method involve dynamic routing decisions based on the characteristics of each input, allowing gradients to flow through particular layers or weights? Further clarification on decision criteria for different types of data would strengthen the approach's versatility and practicality.\n\n3. Impact of Initialization and Routing Interactions:\nGradient routing may be influenced by the initialization of weights, particularly concerning the Lottery Ticket Hypothesis [1]. If certain weights are initially unproductive and specific inputs route gradients exclusively to these weights, it could hinder learning.\n\n\n[1]. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper is well written and easy to follow.\n2. The idea is simple and effective\n3. The topic is really important for helping understand neural networks and learned representations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work introduces gradient routing, a training method that enhances neural network safety by isolating specific capabilities within distinct network subregions. By applying data-dependent, weighted masks to gradients during backpropagation, gradient routing allows users to control which parameters are updated for particular data points. This approach achieves three main outcomes: (1) interpretable partitioning of representations, (2) effective unlearning by ablating targeted subregions, and (3) improved oversight of reinforcement learning by confining behaviors to distinct modules. The results indicate that gradient routing can localize capabilities even with limited data, suggesting its potential for complex, real-world applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. My main concern is about the contributions. Similar ideas has already been proposed by previous papers\n(1) Piggyback [1] and PackNet[2] tries to learn different tasks better with different subsets of the weights by leveraging a binary mask.\n(2) Parameter efficient tuning method for Large language models(LLMs) such as lora[3], can efficiently learn different downstream tasks using different adaptor modules.\n\nSo the contributions of this paper are not enough.\n\n2. Need more large scale experiments. Given the idea is not that new, only small scale experiments may not enough.\n\n\n\n[1] Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights\n[2] PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning\n[3] LoRA: Low-Rank Adaptation of Large Language Models\n[4] S-LoRA: Serving Thousands of Concurrent LoRA Adapters"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. For representations learning, the gradient routing method can be applied to more common image classification tasks?\n2. Can you provide a more detailed description in Section 3? \n3. Could you add a Certificate using the top half encoding for comparison in Section 4.1?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Based on the gradient, a training method is proposed to isolate the ability to a specific sub-region of the neural network.\n2. Gradient routing has implications for safely deploying AI systems, especially in high-risk scenarios where black-box methods are not robust enough."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a gradient-routing training method for isolating capabilities to specific subregions of neural networks. We show that gradient routing can be used to (1) learn representations that are partitioned in an interpretable way; (2) achieve robust unlearning by ablating pre-specified subregions of the network; and (3) achieve scalable supervision of reinforcement learners by localizing modules responsible for different behaviors."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In Section 3, authors did not clarify the gradient routing, thus make this paper hard to understand. For example, how gradient mask is generated.\n\n2. Authors did not clarify how gradient mask is generated. I hope you can provide the formula or pseudocode for the mask generation at line 178 of the pseudocode.\n\n3. The MNIST dataset has too little data to prove the generalizability of the method. Experiments on more widely used datasets such as ImageNet and COCO can be more convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024gradient,\ntitle={Gradient Routing: Masking Gradients to Localize Computation in Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1mLNhWFyY},\nnote={under review}\n}"
},
"abstract": {
"value": "Neural networks are trained primarily based on their inputs and outputs, without regard for their internal mechanisms. These neglected mechanisms determine properties that are critical for safety, like (i) transparency; (ii) the absence of sensitive information or harmful capabilities; and (iii) reliable generalization of goals beyond the training distribution. To address this shortcoming, we introduce gradient routing, a training method that isolates capabilities to specific subregions of a neural network. Gradient routing applies data-dependent, weighted masks to gradients during backpropagation. These masks are supplied by the user in order to configure which parameters are updated by which data points. We show that gradient routing can be used to (1) learn representations which are partitioned in an interpretable way; (2) enable robust unlearning via ablation of a pre-specified network subregion; and (3) achieve scalable oversight of a reinforcement learner by localizing modules responsible for different behaviors. Throughout, we find that gradient routing localizes capabilities even when applied to a limited, ad-hoc subset of the data. We conclude that the approach holds promise for challenging, real-world applications where quality data are scarce."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"representation learning",
"modularity",
"unlearning",
"reinforcement learning",
"scalable oversight"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/0cd4553beb2e8e39bef86fbe281eea0a0e5bda08.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Gradient Routing: Masking Gradients to Localize Computation in Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1nSpA2dAW | FLOPS: Forward Learning with OPtimal Sampling | main | Active | stochastic optimization;gradient estimation | optimization | 3;5;5;5 | 3;3;3;3 | 2;2;3;3 | 2;3;3;2 | 1;1;3;2 | 4.5 | 3 | 2.5 | 2.5 | 1.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "For enhanced clarity, please focus on the weaknesses section, which includes my raised questions.\n\nConcerning reproducibility, I reviewed the supplementary material, presumably the code. However, the absence of a README file and the presence of numerous extraneous files make it challenging to determine which files are essential. Additionally, the provided code lacks meaningful comments and contains a lot of debug information, which further complicates understanding its logic. If the authors aim to demonstrate reproducibility through the attached code, I recommend including a clean version of the code with detailed instructions in a README file to at least guide reviewers through the main logic."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The empirical accuracy results for ViT and CLIP appear to surpass those of the baselines. The authors have also conducted essential ablation studies to further validate their findings.\n\n2. While the text in Figures 2 and 3 is smaller than the standard text size, making it challenging to read, the color combinations used in these figures are visually appealing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines perturbation-based gradient computation methods tailored for forward-only learning. The authors introduce an optimal sampling strategy based on a Gaussian Allocator designed to maximize performance improvements incrementally. They evaluate this approach using pretrained transformers and demonstrate that it outperforms selected baseline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Sections 3 and 4 introduce several undefined annotations, leading to ambiguity in the exposition. For instance, Line 148 mentions a term \"a\" whose role and relation to Equation (1) are unclear—is it a distribution, a hidden representation, or something else? Furthermore, \"G(·)\" on Line 160 and \"y_j\" on Line 164 are undefined, with no clarification of the indexing or distribution from which j is sampled. Additionally, Equation (9)'s term \"K\" lacks a defined scope. The abbreviation \"LR\" is also used without definition. The paper introduces many terms that, while relevant, obscure the main contributions. I recommend a thorough review of these sections to clarify the foundational concepts and distinctly outline the problems the paper aims to solve.\n\n2. The rationale behind formulating the optimization of the query allocator as maximizing Equation (5) is not sufficiently clear. A deeper analysis of this formulation is necessary. Moreover, the paper mentions only the initialization of a Gaussian Allocator, which might suggest a broader applicability than is actually the case. Either a comparative analysis of different allocators should be included, or the focus on the Gaussian allocator should be explicitly stated in the abstract and introduction to manage expectations.\n\n3. I have several concerns regarding the experimental results presented in Table 1, which utilizes a pretrained ViT network. Firstly, the rationale for the notably small number of queries for baseline methods, such as only 2 for Mezo, is not explained. What is the expected number of queries for your OPS-LR model in comparison? In Line 344, you mention alignment with Mezo's original memory-efficient setting, but detailed statistical data on memory consumption and training time for both baseline methods and your approach are lacking. Additionally, the experiments are conducted on pretrained models like ViT and CLIP, but the performance of models trained from scratch is not shown. 
It is crucial to demonstrate whether OPS-LR offers any advantages over these baselines when applied to models not pretrained. Furthermore, the validation datasets used are generally small, with all except ImageNet containing under 150,000 images. There is also concern that using pretrained transformers on the well-known ImageNet dataset could lead to performance biases due to data leakage."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Please refer to Weakness 1. Is it possible to integrate the proposed method into a callable class or function without rewriting model architectures like Linear and Conv2d to implement the specific add_noise operation?\n\n2. Please refer to Weakness 2. Could you provide detailed memory usage for different methods during training for a more thorough comparison? I noticed that the code uses a repeat operation to expand the batch for varying numbers of queries on different data. Does this operation increase memory usage?\n\n3. The authors mention in the main text that 'All the methods in the experiments use the same query budgets, except for Mezo, which uses only 2 queries per data point in accordance with its original memory-efficient setting.' However, could you provide a more detailed comparison of runtime (e.g., clock time) compared to other ZO methods and the BP baseline?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "1. The idea of dynamically allocating different numbers of queries to each data point within a batch during training is novel, which is indeed a point that previous zeroth-order optimization (forward learning) methods have not considered.\n\n2. The proposed method is intuitive. The approach of leveraging a Gaussian Allocator (GA) combined with a likelihood ratio method introduces a creative solution to minimize gradient estimation variance. Through appropriate approximations, the computational cost is effectively reduced. Theoretical analysis is also provided.\n\n3. The experimental setup is extensive and reasonable, and the results are convincing. Both prompt tuning for large models and multimodal alignment for foundation models are promising application scenarios for zeroth-order (ZO) methods, and the proposed approach demonstrates good performance on these tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces FLOPS: Forward Learning with Optimal Sampling, which aims to improve the efficiency of gradient estimation in forward-only learning methods by optimally allocating computational resources (queries) across data points within each mini-batch. The approach is motivated by the limitations of backpropagation, particularly in settings where only forward passes are feasible or desirable, such as in black-box optimization scenarios. With a simplified proxy objective and a reparameterization technique, the authors derive a novel plug-and-play query allocator with minimal parameters. Extensive experiments show the superior performance of this method. Theoretical analysis is also provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the authors provide part of the source code, I believe the coding is not advisable. Specifically, the authors override nn.Linear to create a custom Linear class and similarly override nn.Conv2d to create a custom Conv2d class. This approach results in the proposed method being tied to a specific model architecture, making it difficult to adapt to other architectures. In fact, existing zeroth-order optimization methods, such as ZO-SGD [1], ZO-AdaMM [2], and DeepZero [3], all have core optimization logic that can be implemented by inheriting from torch.optim.Optimizer, thereby aligning with gradient-based methods like SGD. Alternatively, they can be integrated into a specific function for easier migration.\n\n2. One important reason why zeroth-order optimization is suitable for large model prompt fine-tuning is that these methods do not require backpropagation, which significantly saves memory compared to gradient-based methods like SGD. However, in the experimental results presented in the main paper, only the fine-tuning results are provided, without comparing their memory usage with backpropagation and other zeroth-order optimization methods.\n\n[1] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.\n\n[2] Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, and David Cox. ZO-AdaMM: Zeroth-order adaptive momentum method for black-box optimization. NeurIPS, 32, 2019.\n\n[3] Chen A, Zhang Y, Jia J, et al. Deepzero: Scaling up zeroth-order optimization for deep model training[J]. arXiv preprint arXiv:2310.02025, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
        "value": "1) I am curious about why other methods that utilize all queries would perform worse than this method, which utilizes limited queries for each data point. Since MEZO is tailored to another baseline, using the same hyperparameters might not be entirely fair. Have you considered tuning MEZO with more available queries per data point for comparison?\n2) What is the exact computational cost when using all queries for each data point compared to your allocation method under different budget constraints?\n\nIf reasonable answers are provided, I will consider raising scores accordingly."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) The motivation is clear: optimizing the allocation of queries to effectively reduce computational overhead.\n2) The experimental results show strong performance relative to the baselines.\n3) The study provides both experimental and theoretical results, offering a well-rounded evaluation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an efficient query allocation strategy for forward-learning algorithms in gradient computation, reducing query usage by focusing on data points that need it most. Using a simplified objective and reparameterization, the authors introduce a lightweight query allocator that minimizes gradient estimation variance with low computational cost. Both experiments and theoretical analysis are provided."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "1) I am curious about why other methods that utilize all queries would perform worse than this method, which utilizes limited queries for each data point. \n2) The comparison of exact computational cost between equally using all queries for each data point and your allocation method is unknown. However, it is one of the main motivations. \n\nMinor: \n3) In the abstract, the phrase “propose to allocate the optimal number of queries over each data” isn’t entirely accurate, as a total query budget must be pre-defined rather than learning an optimal number. Perhaps rephrasing to “allocate the optimal number of queries within a set budget” would be clearer. \n4) Table 2 is not well-formatted and appears misaligned."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. **Clarification on LR and OPS-LR**: The distinction between LR and OPS-LR is not clear. More explanation would be appreciated.\n\n2. **Clarification on experiments**: Do the experiments include cross-validation with multiple random seeds? If not, please show the experiment results with multiple seeds with dataset cross-validation. If yes, please provide more details.\n\n3. **Ablation studies**: The proposed algorithms update four parameters. What if only three or two of them are updated? Which parameters are dispensable for this process? Conducting these experiments would provide more insights into the paper's contributions."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The empirical evaluation is comprehensive, demonstrating significant performance improvements through query sampler optimization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an approach for optimizing a differentiable sampler in forward learning in foundation models, supported by theoretical proofs and extensive empirical evaluations. The work lies at the intersection of zeroth-order optimization and sampling optimization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Overall outline and structure**: The paper builds upon DeepZero and Mezo by devising an optimization allocation for forward learning. However, the idea of optimal allocation is not new in the ML context. Papers such as \"Stochastic Optimization with Importance Sampling\" or \"A General Analysis of Example-Selection for Stochastic Gradient Descent\" (and several derived works) have explored similar concepts. For me, the main difference here is the focus on forward learning (or zeroth-order optimization) rather than backpropagation. The authors are encouraged to: (1) Review and mention this existing research (strongly encouraged) and (2) Compare these methods with their proposed algorithm (encouraged). Addressing these points would strengthen the paper's contribution and contextualize it within the field.\n\n2. **Introduction and framing**:\nThe decision to begin with biologically plausible algorithms (BioPA) seems unexpected and may not effectively frame the paper's contributions. Nevertheless, the general scheme could be unfolded more clearly:\n 1. The citation of Jacot et al. for \"learning high-level representation\" appears unrelated in the BioPA context.\n 2. Consider including more relevant BioPA works such as \"Direct Random Target Projection\" [1], \"SoftHebb\" [2], and \"Counter-Current Learning\" [3]\n\n3. **Writing and proofreading**:\n - Correct typographical errors (e.g., \"Current Literature\" should be \"Current literature\")\n - Address factual inaccuracies (e.g., L48 states that the FF algorithm is only capable of training MLPs on MNIST, but results for CIFAR are also presented)\n - Provide explanations for abbreviations (e.g., SPSA, LR)\n\n4. **Related Work section**: The first subsection could be restructured. When discussing backpropagation-free learning, it's typically in the context of multi-layered neural networks. Also, including evolution theory and particle swarm optimization seem tangential. 
I suggest reorganizing this section and incorporating the suggested papers for a more focused discussion.\n\nReferences\n\n[1] \"Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks\" (Frontiers in Neuroscience, 2021)\n\n[2] \"Hebbian Deep Learning Without Feedback\" (ICLR 2023)\n\n[3] \"Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning\" (NeurIPS 2024)"
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Design an optimal query allocator for forward learning"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024flops,\ntitle={{FLOPS}: Forward Learning with {OP}timal Sampling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1nSpA2dAW},\nnote={under review}\n}"
},
"abstract": {
"value": "Given the limitations of backpropagation, perturbation-based gradient computation methods have recently gained focus for learning with only forward passes, also referred to as queries. Conventional forward learning consumes enormous queries on each data point for accurate gradient estimation through Monte Carlo sampling, which hinders the scalability of those algorithms. However, not all data points deserve equal queries for gradient estimation. In this paper, we study the problem of improving the forward learning efficiency from a novel perspective: how to reduce the gradient estimation variance with minimum cost? For this, we propose to allocate the optimal number of queries over each data in one batch during training to achieve a good balance between estimation accuracy and computational efficiency. Specifically, with a simplified proxy objective and a reparameterization technique, we derive a novel plug-and-play query allocator with minimal parameters. Theoretical results are carried out to verify its optimality. We conduct extensive experiments for fine-tuning Vision Transformers on various datasets and further deploy the allocator to two black-box applications: prompt tuning and multimodal alignment for foundation models. All findings demonstrate that our proposed allocator significantly enhances the scalability of forward-learning algorithms, paving the way for real-world applications."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"stochastic optimization",
"gradient estimation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b1e6da7d84990a149944cefbea989fc1cda3e027.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/6b044c4e6810419afe0ea4737c4403c787405fa4.zip"
},
"title": {
"value": "FLOPS: Forward Learning with OPtimal Sampling"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1ohBxWeL2 | SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation | main | Active | LLM;Inference;System;Compression;Distillation | foundation or frontier models, including LLMs | 3;5;6;6 | 4;4;3;4 | 3;2;3;2 | 2;2;2;2 | 4;3;2;2 | 5 | 3.75 | 2.5 | 2 | 2.75 | -0.471405 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "This paper studies LLM core technology. I don't find anything that needs ethics review in this paper."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
        "value": "Q1. In Table 1, there is a significant drop in performance for the 8B model on the math dataset GSM-8K. I suppose this is the harder case, meaning that the proposed method may not work well for small models on tasks demanding more logic and reasoning. An analysis of why this performance drop occurs would be interesting. \n\nQ2. In Table 3, to show the impact of distillation, AcrossKV is disabled. However, to reduce KV cache, AcrossKV should be enabled for inference serving, right?\n\nQ3. In Table 4, the performance of \"our fine-tuned model\" is significantly inferior to the base model. The result seems to speak against the usefulness of your techniques. I don't quite understand the logic here."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "S1. Inference optimization for Transformer-based LLMs is an important topic which has been extensively studied in recent years.\n\nS2. Several key components have been proposed in this paper, with their usefulness showcased in the evaluation. \n\nS3. The proposed method is orthogonal to many existing optimizations and they can be used jointly to further optimize the performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "This paper studies the optimization of inference in Transformer-based LLMs. It presents SwiftKV, a solution for reducing the KV cache size and inference latency for long contexts up to 128K. The proposed method features three parts: SingleInputKV, AcrossKV, and knowledge recovery. Experiments show the effectiveness of the proposed method and the usefulness of its components, as well as how they work jointly with other optimization techniques."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "W1. This paper borrows the observations and ideas behind SingleInputKV from prior work, as stated in the submission (such an observation has also been utilized in the InfiniGen paper published at OSDI 2024).\n\nW2. A core technique in the proposed method is cross-layer KV cache compression. The comparison/discussion with state-of-the-art KV cache compression/merging/cross-layer works is missing, e.g., PyramidKV and infini-attention. It is encouraged to discuss the difference and the novelty compared to existing KV cache compression techniques in the related work. Some surveys can be found here:\nhttps://github.com/October2001/Awesome-KV-Cache-Compression\n\nW3. Whereas the paper discusses long inputs, it lacks discussion of recent works on long contexts (see the above link), such as MInference, which optimize the prefilling of long contexts. Some of them exploit sparsity to reduce KV cache and speed up inference, e.g., ALISA. \n\nW4. The proposed method is not training-free, yet only Llama-3.1 models are evaluated. It is unclear if the performance (and its optimal parameter settings) also translates to other models. Extension to other open models, such as Mistral, would be beneficial to understanding the contributions of this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please refer to W1 and W2."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "S1. The paper proposes two techniques derived from insights from prior research, demonstrating their efficacy in reducing computational and memory costs during LLM inference.\n\nS2. The authors show that fine-tuning can alleviate the decline in benchmark scores, emphasizing the practicality of the proposed methods without notably sacrificing model performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes two methods to reduce the cost of the prefill stage during LLM inference. \nThe first method, called SingleInputKV, reuses the output hidden state vector from the i-th layer in attention layers as the input vector to generate key and value (KV) vectors of the subsequent layers. In previous methods, the j-th layer used the output hidden state vector from the (j-1)-th layer as input.\nThe second method, called AcrossKV, enables the KV vectors generated by the i-th layer to be reused by the following layers. In previous methods, each layer generates its own KV vectors by multiplying the input vector with its weight.\nThese techniques reduce computational costs by reusing the input and KV vectors of earlier layers for later layers. They also decrease the number of KV vectors that need to be cached for the decode stage. The proposed methods build on prior work [1], which showed minimal differences in the values of input vectors across layers as the number of layers increases in transformers.\nThe authors implemented these techniques in Llama-3.1-8B and Llama-3.1-70B models, showing that while the performance on the LLM benchmark remains largely unaffected, both time and memory usage in the prefill stage are reduced by almost two times.\n\n[1] Songwei Liu, Chao Zeng, Lianqiang Li, Chenqian Yan, Lean Fu, Xing Mei, and Fangmin Chen. Foldgpt: Simple and effective large language model compression scheme, 2024c. URL https://arxiv.org/abs/2407.00928."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "W1. The experiments in the paper are somewhat limited.\n- The authors evaluate the proposed techniques only on Llama-3.1 models. Testing a wider variety of models would strengthen the results. If the proposed methods could demonstrate their benefits across transformer models with different attention mechanisms (e.g., sparse attention, low-rank attention), scaling approaches (e.g., wide scaling, deep scaling, sparse scaling), and sizes (Llama-3.2-1B, Llama-3.2-3B, Llama-3.2-8B, Llama-3.2-11B, Llama-2-13B, Llama-3.2-70B, Llama-3.2-90B, and Llama-3.1-405B), it would enhance the paper’s contribution. \n- The authors need to demonstrate whether applying SwiftKV to larger models yields more significant results compared to small models. Incorporate models such as Llama-2-13B and Llama-2-7B. If applying the proposed methods to Llama-2-13B yields better results than Llama-2-7B in terms of both cost and the benchmark scores, it would strengthen the contribution of the paper.\n- There is no experiment to show the independent effect of AcrossKV without the presence of SingleInputKV, leaving the isolated impact of AcrossKV unexplored. The authors need to compare a baseline model to one with only AcrossKV applied.\n\nW2. The justification for the claimed reduction in computational cost is insufficient. The paper needs to clearly specify which operations are being skipped by SingleInputKV by providing a detailed breakdown of the computational costs for each component of a Transformer model, comparing the baseline to SwiftKV. In Figure 1, SingleInputKV still appears to need to generate the output hidden state vector of every attention layer, which is a primary computational task in Transformer models. This is because the proposed method needs to generate the query vector for each attention layer in Figure 1, and the output hidden state vector from the (i-1)th layer is required to compute the query vector for the i-th layer."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
        "value": "* Section 3.4, Knowledge Recovery: The description suggests that the distillation is done for every one of the later layers, but Figure 1 suggests that at least W_K and W_V are only trained for the initial layer of each AcrossKV block.\n* Why are the results more or less consistently better for 4-way caching compared to 2-way caching for the 70B model? That seems kind of counterintuitive.\n* footnote 5, page 7: What are the end-to-end results?\n* Section 4.3: \"a combined throughput of over 16K toks/sec over 4xH100 GPUs which corresponds to 560 TFLOPS/GPU\"\n * So that is around 4k tokens per second for each GPU compared to 30k tokens/sec/GPU for the 8B Llama model. But because the 70B model is much more complex, there are more floating point operations necessary?\n * Any notion why the pure compute performance increases despite a more \"distributed\" setting (multiple GPUs)?\n* Any notion why the full model fine-tuning performs so much worse than the partial model fine-tuning?\n* Section 5.3: \"This may be due to the lack of math and coding examples in the two datasets we picked to train the model.\"\n * Why did you choose these datasets, if at least the coding use case is serving as a motivational example?\n* Doesn't the discussion in Appendix B undermine the whole point of the paper, i.e., trying to optimize a part that accounts for less than 5% of the total compute time?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
        "value": "The ideas for the various optimizations are presented reasonably clearly and they seem novel as well, especially their combination. The evaluation on a number of models and datasets/benchmarks supports their performance claims and a reasonable ablation study is provided as well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
        "value": "SwiftKV proposes several techniques to reduce computation and memory footprint for LLM inference while maintaining a similar level of accuracy. In particular, they propose to skip the pre-fill stage of later layers by rewiring the model and relying instead on intermediate computation results from an earlier layer, leading to a reduced amount of computation. The authors also reduce the memory footprint of later layers by sharing a KV cache across multiple subsequent layers. Additionally, they use a distillation/fine-tuning process on the affected model part to reduce the difference in accuracy compared to the original model. Their evaluation shows computational improvements for both throughput and latency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
        "value": "For me, the biggest issue is that end-to-end results are missing, which makes it hard for me to put the presented inference results (throughput, latency) into context, which also makes me question how useful the presented numbers are.\n\n* apart from SingleInputKV, all the other optimizations are not properly motivated regarding the reasoning why they should work (some form of microbenchmark)\n* end-to-end results are missing, especially since some of their writing, if I am not mistaken, suggests that they target a part of the pipeline that only comprises 5% of the \"runtime\"\n* line 314: \"The accuracy drops by another 0.32, going from 2-way to 8-way\" - it is actually 1.32 according to the table\n * similarly the number for 16-way is wrong as well\n* The authors might consider removing 5.5 to have more space for presenting the other content/result in more detail.\n* not clear why the used benchmarks are representative for the use cases mentioned as motivation in the introduction\n\ndetailed copy editing comments:\n* related work:\n * \"their optimized implementations in TensorRT (NVIDIA, 2019)\" - all previously mentioned techniques were published after 2019\n* Figure 2, right side:\n * The parameters used in the legend are not explained at all in the caption. It is possible to understand after reading the subsequent text.\n * The subfigure is never actually referenced in the text, except in the appendix, as \"proof\" for some statement later and in a footnote.\n* Figure 4: Artifacts in the layering of the curves, sometimes a dot is at the top for one datapoint and then further down for other datapoints. 
But maybe that was intentional?\n* Table 3: There is a horizontal line missing after \"(a) The effect of distillation\".\n* line 447: \"which suggest that MLP layers player a more prominent role\" - missing verb, probably it should be \"play\" instead of \"player\"\n* Figure 5 is not readable\n* minor issues:\n * typos:\n * line 119: \"Tensor-Parallelism(Shoeybi et al., 2020)\" - missing space\n * lines 130 to 138: additional brackets around the year for the citations\n * line 157/158: \"(Holmes et al., 2024; Agrawal et al., 2024))\" - additional bracket at the end \n * line 360: \"toks/sec\" - probably \"token/sec\"\n * line 859: \"superme\" - perhaps \"supreme\"?\n * line 923: \"hyper-paramter\" - missing e\n * line 923: \"but did not invest deeper\" - probably \"investigated\"\n * references:\n * Clark et al. 2018: cited differently than the other arXiv papers\n * Cobbe et al. 2021: misses place, where the paper was published\n * Dao et al. 2024:\n * year states 2024, but conference abbreviation suggests 2022\n * conference abbreviation is nowadays NeurIPS\n * Ding et al. 2021: misses place, where the paper was published\n * Elhage et al. 2021: url not clickable\n * GretelAI 2024: url not clickable\n * Hendrycks et al. 2021: cited differently than the other ICLR papers\n * Hinton et al. 2015: cited differently than the other arXiv papers\n * Kuzim et al. 2024:\n * year states 2024, but conference abbreviation suggests 2022\n * conference abbreviation is nowadays NeurIPS\n * Lewis et al. 2020: conference abbreviation is nowadays NeurIPS\n * Liu et al. 2024a:\n * cited differently than the other arXiv papers\n * cited twice (2024b)\n * Liu et al. 2024d:\n * year states 2024, but conference abbreviation suggests 2023\n * conference abbreviation is nowadays NeurIPS\n * Meng et al. 
2024:\n * year states 2024, but conference abbreviation suggests 2022\n * conference abbreviation is nowadays NeurIPS\n * Pourreza and Rafiei 2024:\n * year states 2024, but conference abbreviation suggests 2023\n * conference abbreviation is nowadays NeurIPS\n * Sakaguchi et al. 2019: cited differently than the other arXiv papers\n * Wei et al. 2023: cited differently than the other arXiv papers"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The code link appears to be invalid. Could you make the code open-source to enhance reproducibility?\n2. SwiftKV focuses primarily on optimization during the prefill stage. How should we interpret the decrease in TPOT shown in the performance results?\n3. Could you provide results comparing the performance of SwiftKV with more competitive baselines, such as Minicache, as mentioned in your paper? Could you clarify the connections and differences between your method and existing work, including its strengths and weaknesses? Could you demonstrate whether your method can be integrated with other approaches? Additionally, could you outline the potential application scenarios for your method?\n4. As I understand, most datasets used in your paper consist of multiple-choice questions, leading to longer prefill times and shorter decoding times. I’m interested in seeing SwiftKV's performance on more diverse datasets."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The writing is good. \n2. Experimental results indicate that the proposed method effectively reduces latency while preserving knowledge."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes SwiftKV, a method that reduces LLM inference latency while preserving knowledge. SwiftKV combines Early Exit, KV cache compression, and Knowledge Distillation techniques, demonstrating latency improvements in performance evaluation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited Efficient Experiments: VLLM serves as the only baseline in the performance results, which limits the demonstration of this work’s necessity and effectiveness. The authors’ method is a lossy optimization approach, and they should compare it with more serving systems to demonstrate respective performance improvements and knowledge retention. Although other methods may not conflict with the authors' approach, they may not be easily integrated (e.g. the strategy of Early Exit can hardly apply to Speculative Decoding[1], or combined with certain sparse attention methods like PowerInfer[2], quantization method like GPTQ[3] may result in significant performance degradation.). If the authors cannot demonstrate the effectiveness of their method compared to others, or show that it can integrate with other methods for added benefits, the significance of this work is greatly diminished.\n\n2. Lack of Key Assumptions: Some critical assumptions are missing, such as noting that latency-sensitive servers often adopt disaggregated systems to handle the prefill and decode stages separately. This omission could impact the reported TTFT and TPOT performance results, because in the disaggregated systems, TPOT will hardly be influenced due to improvements in the prefill stage.\n\n[1] Cai, Tianle, et al. \"Medusa: Simple llm inference acceleration framework with multiple decoding heads.\" arXiv preprint arXiv:2401.10774 (2024).\n\n[2] Song, Yixin, Zeyu Mi, Haotong Xie, and Haibo Chen. \"Powerinfer: Fast large language model serving with a consumer-grade gpu.\" arXiv preprint arXiv:2312.12456 (2023).\n\n[3] Frantar, Elias, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. \"Gptq: Accurate post-training quantization for generative pre-trained transformers.\" arXiv preprint arXiv:2210.17323 (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper presents a model transformation and distillation approach that reduces prefill computation by 50%, memory usage by 62.5%, and delivers 2x higher inference throughput, all with minimal impact on model quality."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024swiftkv,\ntitle={Swift{KV}: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1ohBxWeL2},\nnote={under review}\n}"
},
"abstract": {
"value": "LLM inference for popular enterprise use cases, such as summarization, RAG, and code-generation, typically observes orders of magnitude longer prompt lengths than generation lengths. This characteristic leads to high cost of prefill and increased response latency. \nIn this paper, we present SwiftKV, a novel model transformation and distillation procedure specifically designed to reduce the time and cost of processing prompt tokens while preserving high quality of generated tokens. SwiftKV combines three key mechanisms: i) SingleInputKV, which prefills later layers' KV cache using a much earlier layer's output, allowing prompt tokens to skip much of the model computation, ii) AcrossKV, which merges the KV caches of neighboring layers to reduce the memory footprint and support larger batch size for higher throughput, and iii) a knowledge-preserving distillation procedure that can adapt existing LLMs for SwiftKV with minimal accuracy impact and low compute and data requirement. For Llama-3.1-8B and 70B, SwiftKV reduces the compute requirement of prefill by 50% and the memory requirement of the KV cache by 62.5% while incurring minimum quality degradation across a wide range of tasks. In the end-to-end inference serving using an optimized vLLM implementation, SwiftKV realizes up to 2x higher aggregate throughput and 60% lower time per output token. It can achieve a staggering 560 TFlops/GPU of normalized inference throughput, which translates to 16K tokens/s for Llama-3.1-70B in 16-bit precision on 4x H100 GPUs. Our training, inference, and model implementations are open-sourced at https://anonymized.link."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM",
"Inference",
"System",
"Compression",
"Distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7e9778695578197e549111e4c0dd19004cc881c1.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1pydjd4XQ | YESNO-PRO: A HIGH-PERFORMANCE POINTWISE RERANKING ALGORITHM BRIDGING ENCODERDECODER AND DECODER-ONLY LLMS | main | Active | zero-shot text reranking;Large Language Models | applications to computer vision, audio, language, and other modalities | 1;3;3;3 | 4;4;5;4 | 1;3;2;2 | 1;2;2;1 | 2;3;2;1 | 2.5 | 4.25 | 2 | 1.5 | 2 | 0.333333 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Can your method be applied to first-stage ranking? \n2. How is this method improve (other than the score merging) improves the previous LLM-based text ranking methods in your related work section?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper studies an important and practical problem\n2. The proposed method, although standard, is reasonable"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new approach called \"yesno-pro\" for zero-shot text reranking. The authors claim this new method can improve prompt design and support both encoder-decoder and decoder-only models. Experiments on TREC19/20 and BEIR datasets show this method can achieve better ranking results compared to other baseline methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper has very limited contributions. The prompt design is standard and the way to prompt LLMs is also the same as the previous approaches. The re-ranking idea (in equations 4-6) is just a score merging, which also requires an existing pre-ranking stage. \n\nThe paper presentation is terrible, lots of typos, grammatical and formatting issues, Just list a few below:\n\n1. Line 044 ”sliding window” and ”all pair”, the quotations are wrong\n2. Line 057 “... passage,such as ”yes/no””, no space between comma\n3. Line 059, “However, this approach suffer from these drawbacks”, \"suffer \" -> “suffers” \n4. Figure 1 caption is completely wrong\n5. Table 1 is terribly formatted\n\nOverall, I think the paper is clearly below the acceptance threshold."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weaknesses."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Many experiments are conducted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new LLM-for-ranking model called YesNo-Pro. The authors write a new prompt and design a score computation that includes scores from first-stage retrieval process. However, the designed prompt is not much different from previous prompts and does not provide new insights. The idea of using the scores from first-stage retrieval is also quite common. Some statements in the paper are confusing or even wrong."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The designed pointwise-ranking prompt does not significantly differ from existing prompts and offers no additional insights. There are also papers([1,2]) discussing the prompt designs for zero-shot LLM-based rankers. The experiments in Section 4.3.2 also infer that the influences of different prompts on ranking performances are low.\n\n2. The idea of using scores from retrieval stages is also not novel. Many previous works([4,5]) in hybrid retrieval have discussed it.\n\n3. Some statements in this paper are confusing or even wrong.\n - The authors state that ``Pointwise approaches are not supported by decoder-only LLMs``. This is wrong, many approaches([1,3]) apply decoder-only LLMs for pointwise ranking.\n - The authors state that ``At times, the outputs of LLMs do not conform to predefined relevance labels, and current approaches fail to effectively mitigate this\nissue, lacking corresponding solutions``. This is wrong, we can still calculate the probabilities by resorting to the logits.\n - The example in Section 2.1 in line 101 is very confusing. Why LLM based on relevance-based prompts cannot yield correct results? Authors need to explain why LLMs cannot understand ``relevance`` in detail instead of simply stating it.\n\n4. The authors didn't show how much improvements of YesNo-Pro model come from the ranking scores in the first-stage retrieval models. To my knowledge, most previous LLM-for-ranking models don't use scores in first-stage retrieval, which makes the comparison unfair. \n\n5. The typos in the paper strongly affect the reading. For example, the description of Figure 1 is totally irrelevant to the contents in it.\n\n[1] Sun, S., Zhuang, S., Wang, S., & Zuccon, G. (2024). An Investigation of Prompt Variations for Zero-shot LLM-based Rankers. SIGIR 2024.\n[2] Zhuang, H., Qin, Z., Hui, K., Wu, J., Yan, L., Wang, X., & Bendersky, M. (2023). 
Beyond yes and no: Improving zero-shot llm rankers via scoring fine-grained relevance labels.\n[3] Ma, X., Wang, L., Yang, N., Wei, F., & Lin, J. (2024, July). Fine-tuning llama for multi-stage text retrieval. \n[4] Bruch, S., Gai, S., & Ingber, A. (2023). An analysis of fusion functions for hybrid retrieval. ACM Transactions on Information Systems, 42(1), 1-35.\n[5] Kuzi, S., Zhang, M., Li, C., Bendersky, M., & Najork, M. (2020). Leveraging semantic and lexical matching to improve the recall of document retrieval systems: A hybrid approach."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "None"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper proposes a new pointwise prompting technique that outperforms previous methods.\n\n- The proposed method has been implemented for both encoder-decoder and decoder-only language models (LMs) and has been accelerated using vLLM."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces an LLM prompting method for pointwise text reranking. It compares different prompt templates and designs a fusion function to combine reranker scores with first-stage retriever scores. The method is applied to both encoder-decoder and decoder-only LMs. Evaluation on TREC-DL and BEIR shows improvement over other pointwise prompting approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method involves only minor modifications to the prompt wording, which lacks novelty. And there is existing work on automatic prompt optimization for LLM rerankers (https://arxiv.org/pdf/2406.14449).\n\n- This paper claims that “pointwise approaches only support encoder-decoder LLMs”, which is incorrect, as RankGPT (https://arxiv.org/abs/2304.09542) already applies pointwise prompting using the GPT API. And the use of vLLM is a common practice for LLM deployment and can also be applied to other baselines.\n\n- This paper lacks comparison to state-of-the-art prompting methods such as setwise ranking (https://arxiv.org/abs/2310.09497), graph-based ranking (PRP-Graph), or tournament-based ranking (https://arxiv.org/abs/2406.11678).\n\n- There are some formatting issues: the caption of Figure 1 appears to be an uncleaned template, and Table 1 exceeds the right margin of the page."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weakness section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- A valuable proposal to change the strategy to prompt design for retrieval/ranking since often we don't want just passages/documents that answer a given query, but provide information that can be used to answer the query.\n- Proposal to rescale the scores before combining the scores from the two stages.\n- Good set of ablations as described in the summary section."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes an improvement to how LLM rankers can be used for document reranking. In this direction they propose:\n- A new prompt designed to make the LLM focus on whether the content in the passage can be used to answer the query, instead of focusing on whether the passage/document answers the query.\n- The paper also proposes a way to combine the scores from the retrieval stage and claim that this is better than using the score for the ranking stage alone. It scales the scores so that they are in comparable range before combining them.\n- Ablations on different prompt strategies are provided to support effectiveness of the new prompt design.\n- Ablations on weight of scores from the two retrieval/ranking stages.\n- Ablations on different models, and ranking strategies (pointwise, listwise)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The abstract mentions \"Traditional pointwise approaches\nprompt the LLM to output relevance labels such as yes/no or fine-grained labels, but they have several drawbacks.\", but then the paper proceeds to use the same strategy of using the logits corresponding to the yes/no labels which is the common way. There is no proposal to improve upon this.\n- It is unclear from the experiments whether combining the scores from the two stages is really beneficial. From the ablation in table 2, we see that the first stage ranker is better for NDCG@10, while the second stage ranker is better for NDCG@{1,5}. The paper mentions that \\alpha can be tuned across datasets, but the ablation is for different NDCG metrics on the same dataset.\n- Paper mentions \"Current pointwise approaches are applicable to encoder-decoder LLMs but do not support decoder- only LLMs.\". However, later on, the paper again mentions \"For decoder-only LLMs, to overcome the limitation that pointwise approaches only support encoder- decoder LLMs, we optimized the vllm framework(Kwon et al., 2023), a widely used LLM server, to enable the LLMs to ouput not only generated tokens but also the logits corresponding to each token.\". So this is not a limitation of the decoder only model, but a limitation of the serving stack used by the authors. 
Infact, after stating that \"For encoder-decoder LLMs, following traditional methods, we can caculate ranking scores\", they use the same traditional methods for decoder only LLMs as well with no novelty in the modeling.\n- \"For decoder-only LLMs, to overcome the limitation that pointwise approaches only support encoder- decoder LLMs, we optimized the vllm framework\" --> This should be updated to \"we modified the vllm framework\" instead since there is no optimization involved and the only change here is to return the logits as part of the API response.\n- line 206: \"If the output of the LLM contains neither ”yes” nor ”no,”\" --> how can the output of the LLM sometimes contain yes/no tokens and sometimes miss them. Are you returning only top k token logits from the server? If yes, Won't it be better to just return the logits corresponding to the yes and no tokens?\n- \"S_{i} = s_i (r_{max} − r_{min}) + r_{min} + α ∗ r_{i}\" --> Instead of rescaling the output of stage 2 in the same range as the output of stage1, it might be better to scale the output of stage1 in the range [0,1] as well making the formula much cleaner.\n- In ablations, why are pointwise and listwise baselines included, but no pairwise even after mentioning about it in the paper?\n- For the model comparisons, it would be better to mention the size of each model used in terms of number of parameters.\n- line 409: \"effectively reduces the occurrence of tokens outside predefined labels\" --> It might be better to quantify this so that we can segregate the gains from the two proposed changes in the prompt, which are 1) shift from passage answering the query, and 2) including \"directly\" in the prompt.\n- Even though having \"directly\" in the prompt helps in the extraction of answer from the model's response, CoT (Chain of Thoughts -- https://arxiv.org/pdf/2201.11903) emphasizes that letting the model reason about its response before generating the answer is much more performant and is widely used 
knowledge. A commonly used strategy is to ask the model to reason about the answer it is about to generate, but then end its response in a very specific format \"<reason>.. Hence, the final answer is <yes/no>\".\n- line 163: relevant-based --> relevance-base\n- \"addresses the challenge that relevant-based promps' inability\" --> This part can be reworded for clarity."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "YESNO-PRO: A HIGH-PERFORMANCE POINTWISE RERANKING ALGORITHM BRIDGING ENCODERDECODER AND DECODER-ONLY LLMS"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024yesnopro,\ntitle={{YESNO}-{PRO}: A {HIGH}-{PERFORMANCE} {POINTWISE} {RERANKING} {ALGORITHM} {BRIDGING} {ENCODERDECODER} {AND} {DECODER}-{ONLY} {LLMS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1pydjd4XQ},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent research has shown significant progress in the field of zero-shot text reranking for large language models (LLMs). Traditional pointwise approaches prompt the LLM to output relevance labels such as \"yes/no\" or fine-grained labels, but they have several drawbacks. Firstly, these prompts struggle to capture complex correlations between queries and passages and lack robustness for outputs not covered by predefined labels. Secondly, ranking scores rely solely on the likelihood of relevance labels, leading to potential noise and bias. Lastly, existing pointwise approaches are not supported by decoder-only LLMs, as ranking requires LLMs to output prediction probabilities. In response to these challenges, a novel pointwise approach called yesno-pro has been designed, which redefines both prompt design and score computation mechanisms to better align with the intrinsic nature of text reranking. Additionally, a comprehensive reranking framework based on LLM services has been proposed to support concurrent ranking calls and quickly adapt to any open-source decoder-only large models. Experimental results have demonstrated that this method outperforms existing pointwise and some pairwise/listwise methods on TREC19/20 and BEIR datasets, achieving the state-of-the-art performance. Due to its concurrency features, this work is applicable to practical applications with high real-time requirements."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"zero-shot text reranking",
"Large Language Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d64d37023bb60f48b0b58681a41c88af453b5c37.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "YESNO-PRO: A HIGH-PERFORMANCE POINTWISE RERANKING ALGORITHM BRIDGING ENCODERDECODER AND DECODER-ONLY LLMS"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1td6fBKpG | Conjuring Semantic Similarity | main | Active | Semantic Similarity;Interpretability;Diffusion Models | interpretability and explainable AI | 3;5;5;6 | 3;3;3;3 | 2;2;3;3 | 2;2;2;2 | 3;3;3;3 | 4.75 | 3 | 2.5 | 2 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "**Questions:**\nIs it possible to detect poor representations automatically in the underlying text encoder using this similarity measure?\n\n\n**Other:**\nI had a hard time understanding Figure 1, consider redesigning it or breaking it down to two figures"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**S1:** The idea of learning semantic textual similarity through the images such expressions evoke is creative, original, and intriguing.\n\n**S2:** Related Work provides a detailed and relevant account of existing work in the subject."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel similarity measurement–textual similarity based on the imagery the texts evoke. They propose learning this similarity by computing the Jensen-Shannon divergence between diffusion processes conditioned on the two compared prompts, using Monte-Carlo sampling. The method is then compared to existing similarity measurements on the STS benchmarks and is ablated."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**W1:** The premise of the paper–obtaining a similarity measurement based on the imagery texts evoke–is unique and interesting, but I don’t understand what use-case is it tailored to address. If the use-case is no different than measuring similarities in text-only environments, I don’t see why it is preferable over methods that are inherently text-only, are easier to scale, and are equipped to represent abstract notions, which are difficult to visualize. I find that conclusion is supported in section 4.2 and table 1, too. \n\nThe paper will be improved if such motivation will be clarified, with an experiment to demonstrate this use-case. \n\n**W2:** Isn’t the method hindered by whatever the text encoder does not represent well, or what the diffusion process did not accurately learn? Seeing as these are inherent components, this seems like a significant drawback, which effectively nullifies possible advantages of this method. You touch on this matter in lines 312–317 and section 5. I would appreciate a clarification here."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) on line 238, \"denoising with either y1 and y2\" is a little confusing in that \"either\" seems to imply an exclusive OR while \"and\", Algo 1 and the definition d_ours(y1,y2) show the need for both? I could be mistaken, but I think the later is the case, in which case you should change \"either\" to be \"both\"\n\n2) The line starting at 257 was a little unclear to me. Specifically \"we note that they do not usually correspond exactly\" ?\n\n3) nit: 313 \"is\" -> \"are\""
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The work proposes a novel method for assessing semantic similarity between two text sequences by comparing the distance between image distributions generated over time by diffusion models conditioned on them. \n\nThe authors show the method does well on human annotation semantic similarity baselines.\n\nThe method offers a new possible avenue for the evaluation of text-conditioned generative models which allows some interpretability of representation similarities for text conditioned image generation models. While it's not the most performant method, it seems like one that could be expanded/built upon both to improve its performance, but also possibly as a way of guiding/improving diffusion model training."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors propose to use the distance between image distributions generated by a diffusion model conditioned on two text sequences as a novel measure of semantic similarity. They specifically propose using the Jensen-Shannon divergence between the reverse-time diffusion stochastic differential equations (SDEs) induced by two text sequences which can be directly computed via Monte-Carlo sampling. \n\nEmpirically, while not performing as well as embedding models trained specifically for semantic comparison tasks ( CLIP, SimCSE-BERT, etc ), the authors' approach outperforms zero-shot encoder models and aligns well with human-annotated scores from the Semantic Textual Similarity and Sentences involving Compositional Knowledge benchmarks.\n\nThe authors provide some findings from ablations over the choice of prior distributions (uniform vs dirac), number of monte carlo steps and the choice of diffusion model as well."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the authors cited weaknesses with the approach, the authors could do a better job motivating possible applications opened up by their method. \n\nThe paper's qualitative experiment is a little shallow. Expanding it and doing error analysis ( where does the method perform well/poorly ) on results would help add more clarity into the method.\n\nThe interpretability angle of the paper is also a little lacking and under explained. Outside of the few examples given (one in the paper and two in the appendix ), is there any way to automate interpretability results or use them for error analysis in a useful way.\n\nThe paper could have done an ablation comparing their symmetric JSD approach to just using KL-Divergence to show the boost obtained.\n\nThe last line of the 2nd paragraph ( about pixel values not depending on distant knowledge or cultural background ) seems debatable/unnecessary since a text passage (about democracy, celebration, a feast, etc ) could be portrayed visually in different ways depending very much on knowledge/cultural background while still referring to the same semantic concept"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tIs there a specific set of STS that you found the proposed method works?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.\tThis paper proposes an interesting perspective linking text and diffusion models. The idea of contextual and concept-level similarity is natural and convincing.\n2.\tThis paper is well-written and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper discusses the measurement of semantic similarity at the concept level. The authors propose an interesting method to measure semantic similarity via the distribution of the image with guidance from paired different textual expressions. On various STS tasks, the authors show comparable performance between the proposed methods and various textual or multi-modal semantic similarity measurements."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe empirical experiments are too weak. On the one hand, the performance is not as good as the previous methods. The baselines listed are slightly outdated. It would be interesting if OpenAI ada embeddings, Llama 3.1, and Gemma can be compared. Surpassing BERT-CLS is not very convincing. On the other hand, STS may not be the best arena for the novel method the authors proposed, since there are a lot of examples related to paraphrasing, instead of conceptual relevance. It would be great if the authors study a specific slice that is more relevant to the idea, e.g., examples with a not-complex scene and the difference mainly comes from the concepts. Knowledge graphs (e.g., COMET, Atomic) or WordNet depth distance may provide better comparative performance or qualitative examples.\n2.\tThis paper can be linked with more recent advances in this area in NLP, e.g., C-STS (https://arxiv.org/abs/2305.15093) discusses the conditional semantic textual similarity in addition to STS; Instructor (https://github.com/xlang-ai/instructor-embedding) and PIR (https://arxiv.org/abs/2405.02714) discuss the changes in semantic embeddings under different instructions. The related work and experiment suite can help further improve the draft in discussing the semantic similarity.\n3.\tIt would be good if further content is added to the paper especially since there are 7.5 pages in the current draft. For example, the authors can include experiments on potential performance improvement over various downstream tasks: Z-LaVI (https://arxiv.org/abs/2210.12261), for example, explores how visual imagination helps improve model performance on tasks such as word sense disambiguation in a zero-shot manner. Similar tasks can be included in the draft."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper is well-written and easy to read.\n* The idea of a visually grounded similarity metric between sentences is very interesting and could be a useful addition to the community’s toolbox for measuring different aspects of semantic similarity.\n* The authors provide a theoretical understanding and justification for the introduced distance metric."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel method for calculating the similarity between two pieces of text that is visually grounded. For each sentence, the authors use the sentence to guide a text-conditioned diffusion model to remove the noise and generate a new image given some initial noisy image.\nEach sentence guides the diffusion model to generate intermediate denoised images with a distinct distribution. To compare two sentences, the authors define the distance between the two sentences as the distance between the corresponding distributions of the intermediate denoised images."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concern with this paper is the lack of a clear contribution that is well-motivated and also well-supported by the presented empirical evidence. In the last paragraph of section 1, the authors claim three separate contributions. Here are my concerns about each claimed contribution:\n\nFirst contribution (Lines 82-83): “we propose an approach for evaluating semantic similarity between text expressions that is grounded in the space of visual images.”\n\nAlthough I agree that a novel notion of similarity metric is interesting, the paper lacks a solid argument to support why it is needed or useful. Some of the unanswered questions are:\n* Why do we need a new similarity metric? In paragraph 2, the authors provide some arguments about why comparing images is easier for humans than comparing text due to language and knowledge differences. But, that argument does not hold for models since the similarity between texts is often calculated by a single model. So, there is no difference in knowledge or judgment between models. Moreover, even if this limitation is actually valid for text similarity measures based on dense vectors, it also applies to the proposed similarity metric. After all, just like there is model that is creating the dense vector, there is a model that is doing the denoising and image generation. So, there is still a model involved with all the specific and often unknown limitations and biases that come with any model.\n* As the authors mentioned in Line 312, these generative models rely on some encoder to encode the text. So, all the limitations of the encoder model (i.e., cosine similarity between dense vectors) also apply to the proposed similarity metric. So, why should we add the extra level of complexity?\n* Even beyond the motivation, the experimental results do not suggest that the new similarity metric is consistently better than the vector-based methods.\nThe most interesting comparison is the performance of the proposed similarity metric compared to the performance of the text encoder that is used in the stable diffusion generative model, which is clip-vit-14. Comparing these two, it seems like in just three out of the seven datasets, the new similarity metric performs better, which is less than half the times.\n* In general, to show the superior performance of the proposed similarity metric, more datasets that cover more diverse domains are needed.\n\nSecond contribution (Lines 84-85): “Our method has a unique advantage over traditional language-based methods that, in addition to providing a numerical score, it also provides a visual expression, or ‘explanation’, for comparison, enabling better interpretability of the learnt representations”:\n\nI completely agree with this advantage of the proposed method, and I think it might be the exact case where their method shines. But, other than three figures that show qualitatively what this interpretation might look like, there is no other discussion or experiment on this.\nTo claim this as the main contribution, the authors should provide extensive experiments and discussion on the types of explanation that their method provides in different use cases, discuss how these explanations can be useful for the community, and how their method compares to existing interpretability methods.\n\n\nThird contribution (Lines 86-87): “our method is the first to enable quantifying the alignment of semantic representations learnt by diffusion models compared to that of humans, which can open up new avenues for the evaluation text-conditioned diffusion models”\n\nAgain, I completely agree that this can be a strength of the proposed method.\nBut, similar to my previous point, to claim this as the main contribution, the authors should evaluate several diffusion models, discuss helpful insights that their metric provides, compare their evaluation results with previous evaluation metrics, and explain their differences, advantages, and weaknesses.\n\n---\n\nI also have an issue with the lack of a baseline to measure the merits of the proposed similarity metric. This paper proposes two different concepts. One is the notion of text similarity through images, and the second is the proposed similarity metric to accomplish this. Assuming that the authors provide adequate motivation and support for the importance of the notion of text similarity through evoked images, they should also prove that the complexity of their similarity metric is justified. For example, what if I just generate the final fully denoised images using each sentence and then measure the similarity of the generated images using something like CLIP or DINO embeddings?\n\nFinally, I think the limitations that the authors mention in Section 5 deserve more than a one-sentence acknowledgment. For example, the authors mention that their method is computationally expensive. It would be helpful for the community to know the exact computational resources used for the experiments. \nThe authors also mention the limitations that are caused by the use of text encoders such as CLIP by diffusion models. It is important to explore and analyze these limitations and also show the merits of the proposed method compared to just using the outputs of the text encoder with cosine similarity in the first place."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We represent textual expressions based on the distribution of images they conjure, using which we define a notion of \"visually-grounded\" semantic similarity between text."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024conjuring,\ntitle={Conjuring Semantic Similarity},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1td6fBKpG},\nnote={under review}\n}"
},
"abstract": {
"value": "The semantic similarity between sample expressions measures the distance between their latent 'meaning'. Such meanings are themselves typically represented by textual expressions, often insufficient to differentiate concepts at fine granularity. We propose a novel approach whereby the semantic similarity among textual expressions is based {\\em not} on other expressions they can be rephrased as, but rather based on the imagery they evoke. While this is not possible with humans, generative models allow us to easily visualize and compare generated images, or their distribution, evoked by a textual prompt. Therefore, we characterize the semantic similarity between two textual expressions simply as the distance between image distributions they induce, or 'conjure.' We show that by choosing the Jensen-Shannon divergence between the reverse-time diffusion stochastic differential equations (SDEs) induced by each textual expression, this can be directly computed via Monte-Carlo sampling. Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models while offering better interpretability of their learnt representations."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Semantic Similarity",
"Interpretability",
"Diffusion Models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2432ce34d11b4c3802285aa295b009be9a7b6aeb.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Conjuring Semantic Similarity"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z1yI8uoVU3 | Measuring Effects of Steered Representation in Large Language Models | main | Active | in-context learning;activation steering;large language models | foundation or frontier models, including LLMs | 3;3;3;3 | 3;4;4;4 | 2;2;2;2 | 2;1;2;2 | 2;2;2;2 | 3 | 3.75 | 2 | 1.75 | 2 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. What does “increase the concept of output” exactly mean in line 073? Providing more background and context of activation steering in the introduction could help.\n\n2. In Figure 1 bottom panel, the order of “steered output” and “additional prompt” is confusing. The prompt is not input after the output.\n\n3. In Figure 2, what are “in-context knowledge” and “parametric knowledge”? While I understand what they mean, what is the purpose of drawing them in the illustration of the evaluation prompt?\n\n4. Also in Figure 2, why are the activations on the same layer as the steered representations not affected by the steering, indicated by the darker color?\n\n5. What is the reason behind using [S] and [A] tokens introduced in line 131? If they are special tokens for some LLMs, or from other papers on activation steering, please explain.\n\n6. Typo: line 207, in the set for $\\mathrm{rel.fail}$, the second term $S_{y\\rightarrow y}$ should be $S_{y\\rightarrow *}$."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The experimental design and results are well-presented, highlighting key observations effectively.\n\n- The paper proposes a robust evaluation framework for investigating the activation steering effect across various dimensions, encompassing both steering-related and unrelated dimensions.\n\n- The research explores the impact of steering position within a sequence, providing valuable insights into the technique and its implications."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an evaluation framework for activation steering in language models, a technique that modifies the models’ hidden representations by adding task-specific vectors towards a certain direction. Activation steering is used to control the model generation with desired properties by manipulating the hidden states. The paper studies the effect of this particular way of influencing model generation on various axes such as whether output format is preserved, the success rate of steering, the side-effect rate on irrelevant tasks, and the position of steering vector injection in the prompt sequence. Experimental results show varying behaviors across different dataset and models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The novelty of the paper may be limited. The paper conducts an analysis on LM representation steering and presents the observations. Although the comprehensive analysis results can be practically useful to guide the usage of representation steering, the contribution might be slim.\n\n- The paper presentation could become clearer by providing more background information of how activation steering is typically used (e.g. in the beginning sections). The paper studies a particular type of hidden representation modification, and it is better to contextualize the technique along with its variations.\n\n- There are many vague expressions that are not clearly defined and underspecified experimental choices, which affects the overall research clarity. For example, in the end of Introduction, “increase the concept of output” is confusing. For some other examples, see questions below."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "How do these metrics generalize to larger models?\nClarify to the reviewers what metrics of those proposed produce consistent trends across all models and datasets (when using the same normalization term across all settings)."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The authors introduce metrics, such as Format-Preserving Rate (FPR), Steering Success Rate (SSR), and Side-Effect Rate (SER); these can be useful for formalizing some of the properties desired in steerability. Given steerability is such a desirable property, having formalized metrics to understand this is meaningful.\n\nThe authors find that format preservation improves as steering is applied to higher layers. In contrast, steering to layers below the 0.4 percentile leads to significant format disruption. However, the authors find that \"The steering outcome can succeed or fail, regardless of whether the format is preserved\", so achieving high performance in one metric does not ensure positive directionality in the other. Furthermore, it appears the output form and the analysis of steering success depend on the normalization criteria -- \"That is, the analysis of steering success can vary depending on the criteria used for normalization.\""
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper explores the impact of activation steering on the performance of large language models (LLMs). Activation steering involves modifying hidden representations during the forward pass to guide the output in a desired direction. The authors propose a counterfactual-based steering evaluation framework to compare the output of base and steered generations. The paper evaluates LLMs, including Llama3-8B, Llama2-7B, and Exaone-8B, and diverse datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Controllability tends to differ across models of different sizes, but this paper fails to include models above 8B or to show that findings generalize across different model sizes. This is important to understand the degree of generalization of the findings and metrics.\n\nEven within two models from the same lab, of similar sizes (Llama3-8B, Llama2-7B) – it would be helpful to have further clarification from the authors on whether the key findings generalize across both, as it appears the trends differ across datasets. A clear understanding of what metrics actually hold in generalization across datasets and models during rebuttals is important.\n\nAs is, the major weakness of this paper is that it appears the metrics remain sensitive to model, dataset and the normalization criteria used. An understanding of how sensitive steering success is to the choice of normalization would also be helpful. For example, the statement \"The normalization term can be chosen for the condition-specific measures.\" suggests that one needs to tailor the normalization -- which would require significant hyperparameter tuning and limit the utility of the measures as generalizable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- L268-269: Citations for Llama3-8B-inst and Exaone-8B seem broken.\n- Typically, I thought steering methods are applied to the last token of each generation step. Is steering at a single token a standard approach?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This paper presents a novel evaluation framework for representation steering, studied independently from task-specific performance, which has not been done before.\n- The proposed framework captures critical aspects of representation steering, such as format preservation and the effect of unrelated tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Recently, many representation steering methods have been proposed to enhance alignment and task-specific performance. However, existing evaluations primarily rely on performance metrics, lacking a comprehensive framework for assessing their broader effects. To address this, this paper introduces a counterfactual-based steering evaluation framework that compares base and steered outputs, evaluating both steering success and potential side effects. The proposed framework uncovers model- and task-specific tendencies, including format preservation, query success rates, steering position effects, and impacts on irrelevant tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The experiments are limited to a few tasks and models. Adding more tasks would greatly strengthen the paper. Additionally, although many representation steering methods have been proposed, this paper evaluates only one specific type. While the experiments appear valid, the limited experimental settings raise questions about the generalizability of the findings produced by the proposed framework.\n- The motivation for studying steering methods in the context of in-context learning is somewhat unclear. Since steering methods are commonly applied outside few-shot settings, it would be helpful to clarify any specific reasons for focusing on in-context abilities.\n- Some evaluation aspects lack clarity. For example, providing examples or deeper discussion on what constitutes \"side effects\" would enhance the interpretability of the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- I suggest the authors test their steering method on open-ended settings; if it is robust I believe that they should be able to arbitrarily steer the model to any given set of possible labels rather than simply binary pairs.\n- I am skeptical that the counterfactual framework proposed is truly measuring the effects of the steering explicitly as there doesn't appear to be any specific controls put in place and therefore measuring there may be a correlation/causation issue here."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The problem itself is relevant, particularly from the perspective of interpretability and faithfulness research. Being able to create direct links between how we prompt models and how they act can have particularly useful outcomes especially as these models continuously grow in scope."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work looks at the problem of activation steering, which is notion of modifying hidden activations in order to guide the model towards a desired output. The work specifically looks at the direct effects of steering such activations on the consequence generation, which is the part of the generation that follows the steering. The authors develop a framework for this evaluation through the use of counterfactuals, showing how and when steering can have specific effects on the output."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The work, despite the problem it investigates, does not feel like it is complete. The authors present some interesting results, but the actual contribution here is not very clear and it appears to be quite minimal since the primary difference with prior works is simply using some additional parameters to adjust intermediate hidden activations. Though the method is explained, there's a lack of clarity as to what problem it's explicitly attempting to solve and there isn't enough of an exploration of different settings. Furthermore, evaluation is done solely through the lens of self-defined metrics, therefore there may be some direct biasing of the method towards these.\n\nFurthermore, sections 6.3 and 6.4 have been explored quite a bit in the past, such as through needle-in-the-haystack style tasks or simply other long-context problems. The particular link with steered representations here is not presented in a convincing enough way and therefore comes off as nothing more than simply the result of similar observations as those made in prior works with long-context reasoning."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "This paper introduces a framework to evaluate how steering hidden representations in Large Language Models impacts their outputs by comparing pre- and post-steering, offering a precise assessment of subsequent generations."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024measuring,\ntitle={Measuring Effects of Steered Representation in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z1yI8uoVU3},\nnote={under review}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) show advanced performance and adaptability across various tasks. As the model size becomes more extensive, precise control by editing the forward process of LLMs is a challenging problem. Recent research has focused on steering hidden representations during forward propagation to guide model outputs in desired directions, yielding precise control over specific responses. Although steering shows a broader impact on diverse tasks, the influence of steered representations remains unclear. For instance, steering towards a refusal direction might lead the model to refuse even benign requests in subsequent generations. This work tackles the problem of evaluating activation steering. We introduce a counterfactual-based steering evaluation framework that compares the output of base and steered generations. Within the framework, we propose a steering effect matrix that eases the selection of generations base and steered output types. We experimentally evaluate the effects of steered representation for consequence generation with Llama3-8B, Llama2-7B, and Exaone-8B across diverse datasets. We conclude that steered representation changes the original output severely in longer contexts."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"in-context learning",
"activation steering",
"large language models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a035de681ce42c827947bc886aa3137262d77825.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9b5d4fa846ddf7b2c78dbc026d83557a1af2fe77.pdf"
},
"title": {
"value": "Measuring Effects of Steered Representation in Large Language Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z21DkDDdgq | Integral Performance Approximation for Continuous-Time Reinforcement Learning Control | main | Active | Continuous-Time Reinforcement Learning (CT-RL);Optimal Control;Integral Performance Approximation (IPA);Adaptive/Approximate Dynamic Programming (ADP);Flight Control;Hypersonic Vehicles (HSVs) | reinforcement learning | 5;5;5 | 3;4;4 | 2;3;3 | 2;2;3 | 3;3;3 | 5 | 3.666667 | 2.666667 | 2.333333 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Does the state-dependent part of the cost need to be quadratic? Specifically, can the $x^T Q x$ term be generalized to some positive semi-definite function Q(x). I can understand the challenges generalizing so for the $u^T R u$, but $x^T Q x$ must be easy if linearizations are being used in the integral approximation anyway.\n\n2. Does the linearization in (13) need to be necessarily around x=0? What would be difficult about constructing a function $K_i(x)=-\\frac{\\partial}{\\partial x}\\mu_i (x)$? across a range of values of x, similar to gain scheduling methods?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written, the contributions are clear and of significant importance for solving continuous-time optimal control problems. The simulation and ablation studies are extensive and make a compelling case for the developed controller."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces integral performance approximation (IPA) for continuous-time model-based reinforcement learning control of control-affine nonlinear systems."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My main concerns are regarding the general mathematical rigor in the theoretical results. Specifically, there are a lot of linearized approximations throughout the paper, which by itself is not necessarily problematic. However, then one would expect the unknown approximation error to be accounted for in the analysis with some appropriate bounds. Instead, there are a lot of $\\approx$ relations in the paper (e.g., (14), (15), (30) etc.) without accounting for the errors. Specifically, in Eq. (13), K_i is defined as $$-\\frac{\\partial}{\\partial x}\\mu_i (x)|_{x=0},$$ and then it is said \n$$\\mu_i \\approx -K_i x.$$\nThis linearization is foundational to the entire paper. At that point one can also linearize the dynamic model itself, i.e., \n$$f(x)+g(x)u \\approx Ax+Bu$$ and then apply the entire development accordingly. This makes me wonder if the development is appropriate for the nonlinear system in Eq. (1) like it is claimed.\nInstead of writing $\\mu_i \\approx -K_i x,$ it would be more appropriate to write $\\mu_i = -K_i x + O(||x||^2)$. The linearization error would then grow quadratically with the states, but for the region $||x||\\leq r$, there exists some constant $c_1$ such that can be bounded by $ O(||x||^2) \\leq c_1 r^2$. Similar analysis needs to be performed for all of the other approximations in the paper to achieve a local stability result with robustness to small perturbations. The way the paper stands now, the analysis is valid only in an infinitesimal neighborhood of the origin.\n\nBesides, the literature review surrounding CT-RL and ADP methods is sparse. In Appendix M, the following vague comment is made about ADP methods:\n\n \"As a result of ADP’s theoretical frameworks in adaptive and optimal control, Lyapunov arguments are available to prove qualitative properties including weight convergence and closed-loop stability results. 
However, the results require restrictive theoretical assumptions which are difficult to satisfy for even simple academic examples, and as a result these methods exhibit empirical issues.\" \n\nHowever, it is not specified what theoretical assumptions are restrictive and difficult to satisfy. Besides the literature and ablation study on ADP is mostly focused on the old result by Vamvoudakis and Lewis (2010). Besides, a lot of further development has happened after this classical work. For example, see the following works and references citing them/therein:\n\nKamalapurkar, R., Walters, P. and Dixon, W.E., 2016. Model-based reinforcement learning for approximate optimal regulation. Automatica, 64, pp.94-104.\n\nVamvoudakis, K.G., 2017. Q-learning for continuous-time linear systems: A model-free infinite horizon optimal control approach. Systems & Control Letters, 100, pp.14-20.\n \nIn Appendix F, the authors mention \"Many SOTA ADP CT-RL algorithms require the persistence of excitation (PE) condition in proofs\nof algorithm properties\". There are many newer results which relax the PE condition with initial/interval/finite-time excitation conditions. See the following references for example:\n\nJha, S.K., Roy, S.B. and Bhasin, S., 2019. Initial excitation-based iterative algorithm for approximate optimal control of completely unknown LTI systems. IEEE Transactions on Automatic Control, 64(12), pp.5230-5237.\n\nYang, Y., Pan, Y., Xu, C.Z. and Wunsch, D.C., 2022. Hamiltonian-driven adaptive dynamic programming with efficient experience replay. IEEE Transactions on Neural Networks and Learning Systems.\n\nThe authors can discuss these newer results that relax the PE condition and analyze how they relate to or could potentially improve the proposed IPA method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "All the questions are listed in the weakness part."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper introduces a CT-RL method, IPA CT-RL, which leverages an affine nonlinear dynamic model and quadratic cost structure for data-efficient, robust control. It provides theoretical guarantees for convergence, optimality, and stability, validated through extensive evaluations. Finally, it demonstrates IPA CT-RL's successful application to HSV control."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces Integral Performance Approximation (IPA), a new method for continuous-time reinforcement learning (CT-RL) control. IPA utilizes an affine nonlinear dynamic model that partially captures the environment's dynamics, alongside state-action trajectory data, to enable highly data-efficient and robust control. The approach incorporates structures from the Kleinman algorithm to ensure theoretical guarantees for learning convergence, solution optimality, and closed-loop stability. The effectiveness of IPA is demonstrated across three CT-RL environments, including hypersonic vehicle (HSV) control, which presents additional challenges due to unstable and non-minimum phase dynamics."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "However, there are still some aspects of this paper that require clarification:\n\n1. The paper mentions several SOTA methods that are not restricted to control-affine systems, such as Yildiz et al. (2021). However, the authors did not include these methods in their simulations for comparison. Could the authors explain the reasoning behind this choice?\n\n2. The authors briefly mention the discretization of continuous-time environments in Yildiz et al. (2021), suggesting that this process could lead to significant numerical issues for real-world systems. However, their proposed method also relies on discrete data points when performing integration, which could introduce discretization errors. A previous study [1] has analyzed this issue in detail. I believe the authors should at least discuss the impact of discretization error in their approach.\n\n3. The authors suggest that HSV represents a SOTA environment, but it still appears to be relatively low-dimensional. It’s unclear why their approach cannot scale to higher dimensions, especially since the method in Yildiz et al. (2021) seems capable of handling more complex systems. The authors should discuss the limitations that might prevent their approach from scaling up.\n\n4. I am unclear about the theoretical advantage of the IPA method. Is this benefit primarily due to a linearization structure or another feature? I recommend that the authors provide a clear explanation of this through a example. Additionally, it’s unclear if IPA applies effectively to all control-affine systems or only to those with high linearity. The systems simulated by the authors do not exhibit a high degree of nonlinearity, so clarification here would be helpful.\n\nIf the authors can address these four points clearly in the updated version of the paper, I would be inclined to raise my score.\n\n[1] Cao W, Pan W. Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control[C]. 
The Twelfth International Conference on Learning Representations."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(1) It is suggested to introduce the value function $V$ (formula (7)) before Section 2.1. In addition, please use different notations to represent the value function (e.g. $V$) and the critic network (e.g. $\\hat V$). In this paper, the authors sometimes replace the value function with the critic network in their theoretical analysis.\n\n(2) In (5), the notation $V$ in the function $H$ should be $V^*$.\n\n(3) In Fig. 1, the HJB equation is unrelated to the CT temporal difference equation. \n\n(4) In (13), why do you linearize the controller $\\mu (x)$ at $x=0$? For states which are far away from the origin, the linearized function could not approximate $\\mu (x)$ accurately.\n\n(5) The reviewers are confused with the content in Appendix C, as formulas (16) and (17) can be obtained directly based on the Bellman equation (8), the critic network (9), and linearized controller (13).\n\n(6) The proof of Theorem 3.1 is hard to follow. It seems the authors overlook that the system considered in this paper is affine nonlinear.\n\n(7) In appendix F, the authors should explain clearly that how the reference command input and the error influence the control policy, which could be an important trick in improving the learning performance of the proposed method.\n\n(8) Since the system is known, the authors are encouraged to further compare their method with some classic control methods which do not consider optimality. In addition, could the authors explain why the SOTA baselines are significantly sample-inefficient?\n\n(9) Is it possible to discuss the learning robustness of the proposed method from a theoretical perspective?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "(1) Extensive simulations are conducted.\n\n(2) The proposed method is robust to model uncertainty empirically."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a continuous-time reinforcement learning control method. The critic network design is novel. However, the theoretical analysis is questionable, which is difficult to follow. Simulation results on three optimal control tasks show that the proposed method outperforms SOTA methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1) The proposed method is model-based, which only considers affine nonlinear systems.\n\n(2) There are some theoretical issues in the paper.\n\n(3) The advantage of the proposed method in improving learning robustness is not analyzed theoretically."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024integral,\ntitle={Integral Performance Approximation for Continuous-Time Reinforcement Learning Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z21DkDDdgq},\nnote={under review}\n}"
},
"abstract": {
"value": "We introduce integral performance approximation (IPA), a new continuous-time reinforcement learning (CT-RL) control method. It leverages an affine nonlinear dynamic model, which partially captures the dynamics of the physical environment, alongside state-action trajectory data to enable optimal control with great data efficiency and robust control performance. Utilizing Kleinman algorithm structures allows IPA to provide theoretical guarantees of learning convergence, solution optimality, and closed-loop stability. Furthermore, we demonstrate the effectiveness of IPA on three CT-RL environments including hypersonic vehicle (HSV) control, which has additional challenges caused by unstable and nonminimum phase dynamics. As a result, we demonstrate that the IPA method leads to new, SOTA control design and performance in CT-RL."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Continuous-Time Reinforcement Learning (CT-RL)",
"Optimal Control",
"Integral Performance Approximation (IPA)",
"Adaptive/Approximate Dynamic Programming (ADP)",
"Flight Control",
"Hypersonic Vehicles (HSVs)"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ad609e6a57ed1663d70ccd57dbc3af1a1eab5cf4.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/c0c046773e5444c9ad875ed066e553eea332f012.zip"
},
"title": {
"value": "Integral Performance Approximation for Continuous-Time Reinforcement Learning Control"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z2QdVmhtAP | Efficient Multi Subject Visual Reconstruction from fMRI Using Aligned Representations | main | Active | fMRI;Computational Neuroscience;Neuroimaging;Diffusion;CLIP;alignment;neuroAI | applications to neuroscience & cognitive science | 3;3;3 | 4;5;4 | 3;1;1 | 3;1;2 | 3;2;2 | 3 | 4.333333 | 1.666667 | 2 | 2.333333 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "+ According to MindEye2's settings, the shared images are used for testing rather than training. If the authors used these shared images to train the alignment, how was the test set constituted?\n+ Figure 5(b) states that good results can already be reconstructed at 0 epochs (i.e., no cross-subject training); is this a typo.?\n+ Table 3 has shown that using more training data leads to better model performance, while Figure 7(a) shows that the proposed data selection algorithm is able to obtain better results using less data. This brings up the trade-off question, is it better to train with more data? Or is it better to use the proposed selection algorithm? The authors do not clarify this question in this paper.\n+ Insufficient validation on the effectiveness of data selection algorithms. The authors only considered fewer evaluation metrics, used only 1 session of data for the experiment, and did not report the error of the experiment with different random number seeds. The authors should take richer experiments to prove the effectiveness of the method."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "+ In this paper, the rationalization of the shared visual representation space proposed by MindEye2 is slightly explained from a neuroscience perspective.\n+ This paper explores the interpretability of the proposed method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper achieves cross-subject brain visual decoding by training subject-specific adapters on subject-shared visual stimuli. To reduce the reliance on data, the authors propose a greedy selection algorithm to pick the more important data for cross-subject transfer. Experimental results show that the proposed method achieves results slightly, compared to normal fine-tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "+ This proposed alignment strategy relies on visual stimuli shared by multiple subjects, however this assumption is often difficult to realize in real scenarios, i.e., the images in the fMRI-image pairs used to train the model are hardly shared across subjects. This severely limits the usability of the method. In almost all papers that use NSD for visual decoding [1-4], the visual stimuli of different subjects do not overlap in the training set, which is more accepted setting.\n+ The innovations in this paper are limited. Compared to MindEye2 [2], the only difference is simply the addition of a training phase for MindEye2's ridge regression supervised with MSE loss, and the results achieved are less than impressive.\n\n**Reference**\n\n[1] Paul S. Scotti et al. Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors. NeurIPS 2023.\n\n[2] Paul S. Scotti et al. MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data. ICML 2024.\n\n[3] Weihao Xia et al. Dream: Visual decoding from reversing human visual system. WACV 2024.\n\n[4] Shizun Wang et al. MindBridge: A Cross-Subject Brain Decoding Framework. CVPR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "What is an adapter? It is not defined anywhere.\n\n\nThe non-linear adapter is not described. What does it mean? \n\nHow is the network trained? Does it minimize the reconstruction loss? \n\n\nWill the method first require an extended fMRI scan (> few hours) to train the initial subject?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The experimental results are superior compared to other methods. However, see weaknesses."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles the problem of reconstructing visual images from fMRI signals. Based on prior work that has shown that fMRI signals can be embedded in a common space, where similar behavior and image semantics are represented along separate dimensions after a singular value decomposition. \n\nIn this paper, the authors claim that instead of training on a large number of subjects, one can train on just a single subject to construct a representation space, where other subjects are automatically aligned. While the introduction motivates the problem and the background literature is adequately represented, there are no technical details about the method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Details of the method are completely missing from the paper. Thus it was difficult to determine what was the contribution of the paper.\n\nSeveral concepts are mentioned and introduced, but no technical details are provided. \n\n\nIt seems the adapter network is an encoder-decoder architecture. However, details are missing. \n\nThe greedy image selection algorithm is not described anywhere. \n\nThe authors mention, \"Recent works have achieved impressive results by mapping fMRI data to latent diffusion model (LDM) spaces (Takagi & Nishimoto, 2023; Scotti et al., 2023; Lu et al., 2023; Xia et al., 2024), while simultaneously integrating multiple modalities. Despite this progress, these methods have not been thoroughly tested for their generalization performance across a larger population of subjects.\" However, in this paper, it doesn't seem that they have overcome this challenge. \n\n\nThe method is tested on a limited set of subjects. In such runs, the authors show a superior performance. However, it is not clear if it will generalize to new data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The primary concern remains the limited number of subjects and datasets used in this work, despite the originality of the proposed idea. The reviewers have several questions related to specific weaknesses:\n\n1). Training Times and Computational Requirements: The proposed method reportedly achieves similar performance at epoch 1 compared to traditional methods at epoch 160. Could the authors provide specific training time consumption and computational requirements for both approaches, particularly for the first epoch, which seems critical for comparison?\n\n2). Greedy Heuristic Algorithm for Image Selection: The authors mention that the greedy algorithm for image selection achieves a $(1 - 1/e)$ approximation ratio; however, this is neither proven nor cited in the paper. Reviewers request a formal proof of this approximation ratio or a reference citation. Additionally, the authors should report the time consumption for the greedy heuristic search, as heuristic search algorithms are often time-intensive.\n\n3). Scalability of Adapter Alignment (AA): How would the AA method handle scaling in high-data or high-subject scenarios, where alignment and computational demands would likely increase? Reviewers recommend that the authors provide more details on the scalability of AA.\n\n4). 40-Dimension Threshold in Table 5: In Table 5, the method demonstrates notable performance when using 40 singular values, particularly in high-level metrics. Could the authors clarify whether this 40-dimensional threshold aligns with existing findings on the dimensionality of visual representations in the brain? Additionally, how does this threshold vary across different subjects?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "First, the authors introduce a novel approach for aligning subject-specific fMRI signals to a common visual representation space through Adapter Alignment (AA). This method efficiently manages multi-subject fMRI reconstruction by pre-training on a reference subject and using lightweight adapters to align new subjects, eliminating the need for end-to-end training for each individual.\n\nSecond, the authors provide compelling evidence for the existence of a shared visual representation space. They show that brain signals naturally align within this common space during training, even without explicit alignment mechanisms. This discovery is significant as it sheds light on how visual information is consistently represented across different human brains.\n\nMoreover, the authors present a novel data selection strategy using a greedy algorithm to identify representative images. If effective, this strategy could substantially reduce data collection demands, which is particularly valuable in neuroscience research where fMRI acquisition is costly and resource-intensive. Impressively, this approach achieves a 40% reduction in required training data while maintaining performance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this work, the authors address the challenge of reconstructing visual images from fMRI data, particularly with limited data and computational resources. They introduce a shared representation space to align brain patterns from different people, allowing a single model to work across multiple individuals. The key innovations of this paper include Adapter Alignment (AA) for aligning fMRI data across subjects and a greedy algorithm for selecting optimal images, reducing training time and data by 40% while maintaining quality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The reviewers have several concerns regarding this work:\n\n1). Limited Subjects and Datasets: The authors aim to reconstruct visual images from fMRI using the proposed method; however, they only utilized data from a few subjects (e.g., a total of 4) within the NSD dataset. Additionally, only a single dataset is involved in this work. This limitation in subjects and datasets can impair the generalizability of the proposed framework and restrict its broader applicability. The reviewers suggest incorporating additional task-based fMRI data, such as from the HCP dataset, to reconstruct diverse cognitive activities like language and emotional responses.\n\n2). Potential Overfitting: The Adapter Alignment (AA) method may be prone to overfitting due to the limited training subjects and images. The limited shared images may not provide a comprehensive representation across diverse datasets, and the authors do not discuss strategies to mitigate overfitting in training. \n\n3). Details on AA. In this work, several critical aspects of AA, such as the selection of a reference subject, bin size for image selection, and potential configurations for non-linear adapters, are not clearly addressed. A more in-depth discussion on these points could help enhance and demonstrate the advancement of AA.\n\n4). Details on Greedy Heuristic Search: The authors employ a greedy heuristic algorithm for selecting image subsets; however, the methodology lacks sufficient detail. For instance, the authors state, “a greedy heuristic such as given below achieves an (1 − 1/e) approximation ratio.” Reviewers would like clarification on whether this approximation ratio was proven by the authors or referenced from existing literature."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We present a novel, subject-agnostic training method for efficient fMRI-based visual reconstruction that aligns brain signals in a common representation space, enabling faster, data-efficient training and improved generalization across subjects."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024efficient,\ntitle={Efficient Multi Subject Visual Reconstruction from f{MRI} Using Aligned Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z2QdVmhtAP},\nnote={under review}\n}"
},
"abstract": {
"value": "Reconstructing visual images from fMRI data presents a challenging task, particularly when dealing with limited data and compute availability. This work introduces a novel approach to fMRI-based visual image reconstruction using a subject-agnostic common representation space. We show that subjects' brain signals naturally align in this common space during training, without the need for explicit alignment. This is leveraged to demonstrate that aligning subject-specific adapters to a reference subject is significantly more efficient than traditional end-to-end training methods. Our approach excels in low-data scenarios, where training the adapter with limited data achieves faster and better performance. We also introduce a novel method to select the most representative subset of images for a new subject, allowing for fine-tuning with 40\\% less data while maintaining performance. These advancements make fMRI data collection more efficient and practical, reducing the burden on subjects and improving the generalization of fMRI reconstruction models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"fMRI",
"Computational Neuroscience",
"Neuroimaging",
"Diffusion",
"CLIP",
"alignment",
"neuroAI"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7969b1e52cf370c5c87b48d7e9495aa5a12d5e94.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to neuroscience & cognitive science"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Efficient Multi Subject Visual Reconstruction from fMRI Using Aligned Representations"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z2VBHpRT14 | SpaceSet: A Large-scale Realistic Space-based Image Dataset for Space Situational Awareness | main | Active | space situational awareness;object detection and tracking;space image dataset;high resolution image | datasets and benchmarks | 5;5;6;10 | 3;4;2;1 | 3;2;3;4 | 2;2;3;4 | 3;2;3;4 | 6.5 | 2.5 | 3 | 2.75 | 3 | -0.867722 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Please provide multi-run results with means and variances on the object detection and object tracking benchmarks to ensure that the variance across multiple runs is small.\n2. The authors could include baseline results of specialized small object detection methods, such as approaches [1] that have shown promising performance on the VisDrone [2] benchmark.\n\n[1] QueryDet: Cascaded Sparse Query for Accelerating High-Resolution Small Object Detection.\n\n[2] VisDrone-DET2019: The Vision Meets Drone Object Detection in Image Challenge Results."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a new large-scale dataset for space situational awareness. Compared to previously available public datasets in this field, the proposed dataset is the first realistic image dataset at the photon level.\n2. This paper constructs object detection and tracking benchmarks on the proposed dataset and implements several baseline methods on it."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a large-scale realistic space-based image dataset for space situational awareness. The dataset contains 20k images with 673 objects. The paper describes the detailed pipeline of data curation and data annotation. Additionally, the authors construct an object detection and tracking benchmark based on the proposed dataset and provide several baseline results."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Considering the relatively small number of target objects (673) in the dataset, it is crucial for the authors to perform multiple experimental runs for object tracking tasks to obtain statistical mean values and variances. The authors should ensure that the variation in evaluation metrics across different runs (e.g. 3 or 5 runs) remains below a specified threshold (e.g., 0.5) to establish the reliability and stability of the results.\n2. Given that object scale is a critical factor influencing detection performance, the authors should provide comprehensive statistical analysis of object scales in the dataset, such as the distribution of object sizes (e.g., small, medium, and large objects). This information would help readers better understand the dataset characteristics and evaluation results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "I mentioned all comments, including reasons and suggestions, in the above sections. I recommend that the authors address all the concerns and improve the completeness of the paper. If the rebuttal period resolves the above-mentioned concerns, I will gladly raise my score. Also, there are many vague sentences and grammatical errors in the paper, and I recommend that the authors revise it."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "First of all, the motivation of the paper seems meaningful and pragmatic from the perspective of addressing limitations in existing datasets. \n\nSpaceSet contains higher-resolution scenes and more RSOs than existing datasets, and also includes multi-camera images that account for various physical factors including noise, object properties, camera models, and locations.\n\nTo validate its effectiveness as a benchmark for object detection and tracking, the authors selected the most widely used models and conducted meaningful experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To improve Space Situational Awareness (SSA), the authors introduce SpaceSet, a comprehensive, large-scale dataset of high-resolution space-based images, designed to address the limitations of existing simulated datasets. This dataset consists of images generated with accurate orbital dynamics and a physical camera model with various noise distributions, capturing observations from altitudes between 19 km and 63,000 km. In the experimental section, the authors provide object detection and tracking benchmarks. The benchmarks indicate that the YOLOv8 series achieves better performance in both computational overhead and accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "As a benchmark paper, it lacks many details and analyses of the dataset itself. Beyond data generation, information such as the properties of RSOs (e.g., size and orientation), the positional distribution of frequently appearing locations in the images, and the criterion for image splits is needed. Furthermore, there is a lack of detail regarding the benchmark experiments, such as whether all models used input images of the same size, or why existing SOTA object detection and tracking methods do not perform as well as expected on SpaceSet-100.\n\nAnother concern is the completeness of the paper. Although completeness is an essential aspect of a paper, this paper contains awkward sentences, typos (e.g., L.140, 210, obit), and grammatical errors (e.g., L.216 box are -> is). Regardless of whether these issues hinder understanding of the content, the lack of attention to such things is problematic.\n\nLastly, the content in A.5, intended to validate the proposed dataset’s effectiveness on real-world data, is not clearly conveyed. It would be beneficial to emphasize in the main paper why the metrics provided in Table 8 and Table 9 are meaningful."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to my comments in \"Weaknesses\"."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The main contribution of this paper is a large-scale, realistic image dataset designed to enhance space situational awareness (SSA) for tracking and monitoring resident space objects (RSOs). The dataset has a few merits compared to existing ones:\n\n+ The proposed dataset incorporates realistic orbital dynamics and a camera model with photon-level noise, enhancing its applicability to real-world SSA tasks.\n\n+ The image resolution in the proposed dataset is high (4418 × 4418 pixels). It covers multiple orbital altitudes (LEO, MEO, GEO) and observation distances (19km to 63,000km)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces SpaceSet, a large-scale, realistic image dataset designed to enhance space situational awareness (SSA) for tracking and monitoring resident space objects (RSOs). Unlike previous datasets, SpaceSet incorporates accurate orbital dynamics and a physical camera model with photon-level noise distributions to produce realistic space-based images. Simulated from multiple orbital perspectives (LEO, MEO, GEO), the dataset covers distances from 19 km to 63,000 km and provides high-resolution images (4418 × 4418 pixels) suitable for advanced SSA methods. It includes three subsets—SpaceSet-100, SpaceSet-5000, and SpaceSet-full—addressing various image processing needs, along with a benchmark evaluation for detection and tracking algorithms."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My major concern is the limited contribution. A pure dataset contribution may not align with the topic of the ICLR conference (Learning Representations). This paper benchmarks many existing detection and tracking algorithms on the proposed subset (SpaceSet-100). However, no new algorithms regarding learning representations are provided. \n\nAnother concern is the dataset setup. I understand the proposed one is already closer to real settings than previous ones, but it is still a synthetic dataset. I wonder what the gap is between the proposed synthetic and realistic settings. For example, the dataset uses a fixed camera setup, four overlapping cameras, and a fixed rotation angle. Is this always the real application scenario? Will any missions require a different number of cameras with other relative pose settings (e.g., unstructured)?\n\nIs there any available real dataset that could be used to evaluate the quality of the synthetic data or the synthetic-to-real generalization ability of an algorithm? \n\nSome illustrations might be clearer. For example, from Fig.1, I cannot figure out why (a) shows the realistic exposure with noise distribution, how exposure is reflected, what the noise distribution is, and why they are realistic. (b) shows a picture from an existing dataset, SPARK, but there isn't an image of the proposed dataset for comparison. (c) is almost black and I do not know where to put my focus."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Given my limited expertise in the field of space observations, I find myself ill-equipped to provide a comprehensive evaluation of this paper. The specific challenges and nuances within this domain are not within my area of specialization. Therefore, I recommend that you consult with a diverse group of reviewers who possess a deeper understanding of space-related research to ensure a thorough and informed assessment of the paper's content and significance."
},
"rating": {
"value": 10
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "Given my limited expertise in the field of space observations, I find myself ill-equipped to provide a comprehensive evaluation of this paper. The specific challenges and nuances within this domain are not within my area of specialization. Therefore, I recommend that you consult with a diverse group of reviewers who possess a deeper understanding of space-related research to ensure a thorough and informed assessment of the paper's content and significance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Given my limited expertise in the field of space observations, I find myself ill-equipped to provide a comprehensive evaluation of this paper. The specific challenges and nuances within this domain are not within my area of specialization. Therefore, I recommend that you consult with a diverse group of reviewers who possess a deeper understanding of space-related research to ensure a thorough and informed assessment of the paper's content and significance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Given my limited expertise in the field of space observations, I find myself ill-equipped to provide a comprehensive evaluation of this paper. The specific challenges and nuances within this domain are not within my area of specialization. Therefore, I recommend that you consult with a diverse group of reviewers who possess a deeper understanding of space-related research to ensure a thorough and informed assessment of the paper's content and significance."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "SpaceSet, a large-scale realistic space-based image dataset for space situational awareness and benchmark with SOTA object detection and tracking algorithms."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024spaceset,\ntitle={SpaceSet: A Large-scale Realistic Space-based Image Dataset for Space Situational Awareness},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z2VBHpRT14},\nnote={under review}\n}"
},
"abstract": {
"value": "Space situational awareness (SSA) plays an imperative role in maintaining safe space operations, especially given the increasingly congested space traffic around Earth. Space-based SSA offers a flexible and lightweight solution compared to traditional ground-based SSA. With advanced machine learning approaches, space-based SSA can extract features from high-resolution images in space to detect and track resident space objects (RSOs). However, existing spacecraft image datasets, such as SPARK, fall short of providing realistic camera observations, rendering the derived algorithms unsuitable for real SSA systems. In this research, we introduce SpaceSet, a large-scale realistic space-based image dataset for SSA. We consider accurate space orbit dynamics and a physical camera model with various noise distributions, generating images at the photon level. To extend the available observation window, four overlapping cameras are simulated with a fixed rotation angle. SpaceSet includes images of RSOs observed from $19 km$ to $63,000 km$, captured by a tracker operating in LEO, MEO, and GEO orbits over a period of $5,000$ seconds. Each image has a resolution of $4418 \\times 4418$ pixels, providing detailed features for developing advanced SSA approaches. We split the dataset into three subsets: SpaceSet-100, SpaceSet-5000, and SpaceSet-full, catering to various image processing applications. The SpaceSet-full corpus includes a comprehensive data-loader with $781.5GB$ of images and $25.9MB$ of ground truth labels. We also benchmark detection and tracking algorithms on the SpaceSet-100 dataset using a specified splitting method to accelerate the training process."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"space situational awareness",
"object detection and tracking",
"space image dataset",
"high resolution image"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/e9856486add02455b82a940b747008693f4fb7d1.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/7f0c1c3c40e7c03ca32851c5c000b02052cb8cb0.zip"
},
"title": {
"value": "SpaceSet: A Large-scale Realistic Space-based Image Dataset for Space Situational Awareness"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z2WCyBO923 | Four eyes see more than two: Dataset Distillation with Mixture-of-Experts | main | Active | dataset distillation;mixture-of-experts | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;5;5;5 | 4;5;4;4 | 2;2;2;3 | 2;2;2;2 | 3;2;3;3 | 5 | 4.25 | 2.25 | 2 | 2.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Could the authors elaborate on how the specific values for mixup parameters (e.g., Beta distribution parameter) were chosen and whether these values impact performance significantly?\nHas the method been tested on tasks beyond classification, such as object detection, to verify its generalizability?\nIs there a limit to the number of experts that can be effectively used before diminishing returns set in?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Innovative Framework \nThe use of multiple expert models in dataset distillation is a novel approach that addresses the prevalent issue of cross-architecture performance degradation.\nComprehensive Experimental Validation \nThe paper provides a thorough set of experiments across different architectures and datasets, demonstrating the effectiveness of the proposed method.\nClear Methodology\nThe methodology is well-documented and includes ablation studies to justify the inclusion of distance correlation minimization and mixup-based fusion.\nImproved Performance\nThe multi-expert framework consistently shows better cross-architecture performance than single-expert baselines, especially in low-data settings."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a mixture-of-experts (MoE) approach to dataset distillation (DD) aimed at mitigating cross-architecture performance degradation. Traditional DD methods struggle when the distilled dataset is applied to architectures different from those used in the distillation process. This work tackles this limitation by involving multiple expert models, each responsible for distilling a distinct subset of the data, to enhance diversity within the distilled dataset. A distance correlation minimization strategy encourages experts to learn distinct representations, and a mixup-based fusion strategy further improves the generalizability of the distilled dataset. Experimental results show significant cross-architecture performance improvements, especially in low-data regimes."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Theoretical Justification\nWhile the experimental results are compelling, the paper could benefit from a more in-depth theoretical analysis of why the multi-expert approach performs better in cross-architecture scenarios.\nPotential Overfitting on Small Datasets\nThe method may require further validation on very large datasets or real-world applications to confirm its scalability.\nLimited Discussion on Limitations\nThe paper does not sufficiently address potential downsides or scenarios where the multi-expert approach may not yield improvements."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. It is recommended to provide more theoretical insights or empirical analysis on how distance correlation minimization specifically enhances generalizability across different architectures.\n\n2. How does the computational cost of training multiple experts compare to a single-expert setup, and are there any optimizations that could make this approach more efficient?\n\n3. Are there any alternative diversity-promoting strategies, such as ensemble regularization techniques, and if so, how do they compare to distance correlation minimization?\n\n4. In scenarios where the storage budget is not equal across experts, how does this affect performance? Please explore the impact of imbalanced storage distributions.\n\n5. Please provide empirical evidence showing how distance correlation relates to feature diversity across architectures, or conduct ablation studies comparing distance correlation to other diversity metrics.\n\n6. It is suggested to provide concrete metrics on training time and memory usage for the proposed method compared to single-expert baselines, especially as the number of experts or dataset size increases.\n\n7. It would be better to compare performance with different ratios of storage allocation between experts (e.g., a 70-30 split vs. a 50-50 split) on a particular dataset.\n\n8. It is suggested to include experiments with transformer architectures (e.g., ViT) or larger CNNs (e.g., EfficientNet) to demonstrate the method's generalizability across a wider range of modern architectures."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "### Strengths\n1. The paper tackles a challenging and relevant problem in dataset distillation, specifically addressing cross-architecture performance degradation. The application of mixture-of-experts, along with distance correlation minimization, is an innovative approach to promoting diversity in distilled data.\n \n2. The authors conduct extensive experiments across various datasets (e.g., CIFAR-10, CIFAR-100) and network architectures (e.g., ConvNet-3, ResNet18, VGG-11, AlexNet). The results consistently demonstrate the advantages of the multi-expert framework in improving the performance and generalizability of the distilled datasets, especially under low-data conditions.\n \n3. The mixup fusion strategy is thoughtfully designed and implemented to leverage complementary information across experts, enhancing the generalizability of the synthetic dataset.\n\n4. The paper is generally well-organized, with mathematical formulations and a clear explanation of the methodology."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel framework for dataset distillation using a mixture-of-experts (MoE) approach aimed at improving cross-architecture generalizability. The method, called Four Eyes See More Than Two, assigns different parts of the dataset distillation task to multiple expert models, each trained to distill a distinct subset of the data. To further promote diversity, the authors employ a distance correlation minimization strategy, encouraging each expert to capture unique data representations. Finally, the mixup-based fusion technique integrates the synthetic data from different experts, creating a more comprehensive distilled dataset for training across diverse architectures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. While distance correlation minimization is used as a diversity-promoting technique, there is limited theoretical justification as to why this particular metric would enhance generalizability across architectures. A stronger theoretical or empirical explanation connecting distance correlation with cross-architecture transferability would be beneficial.\n\n2. Although the MoE approach is shown to improve generalization, it requires training and maintaining multiple experts. The paper lacks a discussion on the computational trade-offs involved and how they may impact scalability, particularly when larger datasets or more complex architectures are involved.\n\n3. While the proposed method performs well, it would be insightful to compare the performance against more traditional regularization techniques or other diversity-promoting methods to strengthen the uniqueness of the approach.\n\n4. The paper assumes an equal storage budget per expert, which might not always be practical. An analysis of how unequal storage distribution among experts impacts performance could add depth to the evaluation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How much computation is required to train a ConvNet-3 on the full CIFAR-100 dataset compared to the distillation and training process? Could the authors provide specific numbers to illustrate this comparison?\n\n- Would using different architectures to generate distilled subsets yield better results than using the same architecture across subsets?\n\n- What are the computational and time costs for distillation on larger datasets like ImageNet, and how do these compare to training directly on the full dataset?\n\n- How does the proposed multi-expert approach scale with larger datasets and more complex architectures? Is there a significant increase in overhead?\n\n- Could the authors provide a breakdown of resource usage and time for each stage of the distillation process? This would help evaluate its efficiency relative to traditional training."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper addresses a relevant challenge in dataset distillation, proposing a \"mixture-of-experts\" approach to improve cross-architecture performance and reduce model-specific overfitting.\n\n- The method combines known techniques such as distance correlation minimization and Mixup fusion to enhance data diversity, demonstrating promising results in low-data regimes.\n\n- Experimental results on multiple datasets indicate some potential benefits of using multiple experts to improve generalization across different architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a dataset distillation method using a Mixture-of-Experts (MoE) framework to improve cross-architecture performance. By dividing the dataset distillation process across multiple expert models, each focusing on different data subsets, the approach aims to enhance data diversity and reduce architecture-specific overfitting."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Limited Practicality of Dataset Distillation Methods: While dataset distillation aims to reduce training costs, current methods—including this one—often require **significantly more computation to create distilled datasets than training directly on the full dataset.** This undermines the primary goal of efficiency, which the paper does not address by comparing training time and resource use with traditional methods.\n\n- Lack of Novelty: The approach lacks substantive innovation, **primarily combining existing techniques** (MoE, distance correlation, Mixup) without substantial theoretical or methodological advances. The MoE structure used here is a **simple model ensemble** rather than a true sparsely activated mixture of experts with adaptive routing, limiting its contribution to the field.\n\n- Outdated Baselines and Datasets: The experiments primarily use small datasets (e.g., CIFAR-10/100) and outdated models (e.g., ConvNet-3, VGG11), which do not adequately demonstrate the method's effectiveness or scalability on modern, large-scale tasks. In 2024, using such datasets does not reflect practical applicability, especially when larger datasets can be handled efficiently on current hardware."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "n/a"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weakness. 1-3 are my main concerns."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method seems to be compatible with most of the previous dataset distillation methods since the images from each expert can be obtained in any distillation method.\n2. Authors claim that they achieve better cross-architecture performance for dataset distillation.\n3. Authors give a detailed description of their implementation. (It would be better if the code were released.)"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a MOE framework for dataset distillation, which applies multiple experts to distill synthetic images from multiple views. During distillation, a distance correlation metric is utilized to improve the diversity of the images from multiple experts. During the evaluation phase, authors merge the images from multiple experts by applying mixup."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The first problem is whether the MOE framework really brings benefits. In Table 2, the authors give the ablation study for using distance correlation and the mixup-based fusion. The setting in which Mixup-based fusion is utilized while distance correlation is not utilized is not reported, which is important to prove the effectiveness of the multiple experts. \n2. The ablation study can be further improved by the following experiments: using one expert for distillation with IPC=20, performing fusion from 2 to 1, resulting in an IPC=10, and comparing this result with the IPC=10 fused from two experts.\n3. Authors mainly show performance improvements in cross-architecture experiments. How does this method work for models of the same architecture? Besides, why does such a MOE framework improve cross-architecture results but not work very well for the same architecture? It seems that the methodology of this work has no direct relation to cross-architecture experiments.\n4. Some typos. Line 508.5, quotes are wrong.\n5. CIFAR100 experiments are missing (not a very big problem).\n6. It's really difficult for people to capture the idea of \"diversity\" from the images in Figure 2. Why not provide higher-resolution images, such as from ImageNet?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024four,\ntitle={Four eyes see more than two: Dataset Distillation with Mixture-of-Experts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z2WCyBO923},\nnote={under review}\n}"
},
"abstract": {
"value": "The ever-growing size of datasets in deep learning presents a significant challenge in terms of training efficiency and computational cost. Dataset distillation (DD) has emerged as a promising approach to address this challenge by generating compact synthetic datasets that retain the essential information of the original data. However, existing DD methods often suffer from performance degradation when transferring distilled datasets across different network architectures (i.e. the model utilizing distilled dataset for further training is different from the one used in dataset distillation). To overcome this limitation, we propose a novel mixture-of-experts framework for dataset distillation. Our goal focuses on promoting diversity within the distilled dataset by distributing the distillation tasks to multiple expert models. Each expert specializes in distilling a distinct subset of the dataset, encouraging them to capture different aspects of the original data distribution. To further enhance diversity, we introduce a distance correlation minimization strategy to encourage the experts to learn distinct representations. Moreover, during the testing stage (where the distilled dataset is used for training a new model), the mixup-based fusion strategy is applied to better leverage the complementary information captured by each expert. Through extensive experiments, we demonstrate that our framework effectively mitigates the issue of cross-architecture performance degradation in dataset distillation, particularly in low-data regimes, leading to more efficient and versatile deep learning models while being trained upon the distilled dataset."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"dataset distillation",
"mixture-of-experts"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/3d2f3ab8230d39f4a73bc8e6e3741438d64f7ade.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Four eyes see more than two: Dataset Distillation with Mixture-of-Experts"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z2z9suDRjw | GOAL: A Generalist Combinatorial Optimization Agent Learning | main | Active | neural combinatorial optimization;generalist models;transfer learning;fine tuning | foundation or frontier models, including LLMs | 5;5;6;8 | 4;4;4;4 | 3;2;3;4 | 2;3;3;4 | 3;3;3;3 | 6 | 4 | 3 | 3 | 3 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "(1)Please address the issues raised in the weaknesses section.\n\n(2)The paper assumes that solving strategies for various combinatorial optimization problems share common knowledge. How can it be proven that such knowledge exists, and how can the overlap of this knowledge across different combinatorial optimization problems be demonstrated?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "(1)The paper is novel , as it designs a multi-task learning approach to solve various combinatorial optimization problems through an end-to-end model. The authors developed a mixed-attention block to effectively achieve this objective.\n\n(2)The paper is well-organized, concisely written, and has good readability.\n\n(3)This paper demonstrates substantial work, conducting experiments on various combinatorial optimization problems and showcasing the effectiveness of the proposed method in terms of solution quality and speed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a generalist model designed to address various combinatorial optimization problems. Unlike traditional machine learning approaches, which require a specialized and separately trained model for each problem, this method utilizes a shared backbone network with lightweight, problem-specific adapters for input and output processing. The backbone incorporates mixed-attention blocks that accommodate different combinations of node, edge, and instance-level features, while a multi-type Transformer architecture handles heterogeneous node and edge types. Experiments show that this method performs nearly as well as specialized models in a multi-task setting across diverse problems and demonstrates strong transfer learning capabilities, adapting effectively to new problems through fine-tuning."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "(1)The description of dimension transformations and the learning process of the model is not very illustrative. It is recommended to add figures and text to enhance the explanation.\n\n(2)In Table 1, only one problem size is tested, and it is relatively small. It is recommended to include experiments with larger problem sizes.\n\n(3)The paper lacks a theoretical analysis of the method's effectiveness, and it is recommended to include this section.\n\n(4)There are few effective baselines in Table 1. For ATSP, CVRPTW, OP, KP, MVC, and JSSP, there are only one or two baselines, and UMSP lacks an NCO baseline, which weakens the persuasiveness of the results. It is recommended to add more baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The following questions should not be considered as \"weaknesses\", but I believe are worth discussing:\n* What is the parameter size of GOAL and how does it compare to other peer methods?\n* How to implement the post-processing step (such as MCTS, 2-opt search) that has been proven to be prominent for neural network CO solvers?\n* For problems where there could be different ways of defining nodes and edges, how will different node and edge definitions affect the solver's performance?\n* To what extent does the current version of GOAL scale up to larger-sized problems?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "* A generalized combinatorial optimization solver is favored by the research community.\n* The proposed GOAL transformer architecture seems interesting and promising, especially given the fact that it outperforms other neural networks when trained for a specific problem.\n* The generalized training and fine-tuning results seem sound and promising. The greedy version of GOAL is comparable to other greedy peer methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a new transformer-like architecture, named GOAL, for general types of combinatorial optimization training. GOAL is an auto-regressive model with a shared architecture across different problem types and specialized input and output layers for each problem. This architecture learns a generalizable rule that solves different COs and can generalize to new problems by fine-tuning. Experimental results show that GOAL can match or outperform problem-specific state-of-the-art methods (in their greedy versions)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Some important details are missing in this paper:\n* What are the implementation details of the \"codebook\"? Please specify.\n* Definitions of BQ-MDP and tail-recursive are needed in the main text to make this paper self-contained.\n\nMisc:\n* The first paragraph is too long\n* There are multiple misuses of \\citep and \\citet in this paper. For example, in L112, please use \\citet for Khalil et al., 2017. In L116, please use \\citep for Kool et al. (2019). Please proofread and fix all the misuses\n* If the oracle solver is not an optimal solver, the results should not be claimed as \"optimal gap\" in Table 1"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "* How exactly does the proposed multi-type architecture improve performance? (see Weakness section)\n* Considering that the multi-task architecture performs slightly worse than the single-task models, have the authors considered using some multi-task learning techniques (GradNorm, etc...) to potentially close this gap? \n* For the fine-tuning experiments, how different would the performance be if one freezes the backbone and only trains the adaptors?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The problem of multi-task pretraining on multiple CO problems is novel and interesting.\n* The main experiments are fairly comprehensive and consider a wide range of problems.\n* The reported results are promising and suggest that pretraining can significantly improve convergence speed when fine-tuning for a new problem."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the problem of training a joint multi-task model on multiple combinatorial optimization problems (COPs). \nTo this end, the GOAL architecture is proposed. This (graph) transformer architecture learns a transformer backbone that is shared across COPs. It further maintains COP-specific adaptors that are applied before and after the backbone.\nThe experiments show that the model trained on multiple tasks is competitive, yielding only slightly worse results than single-task models of the same architecture. It is further shown that fine-tuning a pretrained multi-task model on new tasks yields significantly faster convergence than training a single-task model from scratch."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am skeptical about the architecture design for multi-type problems, which is why I currently rate this work as a borderline reject. If I understand correctly, two changes are made when working on a multi-type problem:\n1. Multiple adaptors are learned, one for each node type (and edge type), respectively.\n2. The same backbone model is applied separately to each type and each node type now applies the attention module twice: once to self-attend to other nodes of the same type and a second time to cross-attend to nodes of other types.\n\nThe first modification seems useful to map the features of each type to different positions in the latent space. However, it is not clear to me why the second change is helpful. Is this superior to applying the single-type backbone to embeddings produced by heterogeneous adaptors? If so, why? The ablation study seems to combine both changes at once, so the individual contribution of either modification is not demonstrated.\n\nI also think the illustration of the multi-type mechanism in Figure 1 is a bit misleading as it suggests that the output of the self-attention is passed as input to the cross-attention. However, the source code applies both operations in parallel and sums the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- The mixed attention works similarly to MatNet's. So why can MatNet not be run on ATSP500 or larger, while GOAL can? Another question: why does GOAL outperform MatNet, and which parts of GOAL contribute to the improvement?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The writing is easy to follow, the motivation makes sense, and the experiments are sufficient. \n\nThe paper presents an interesting attempt at a multi-task neural CO solver with a common encoder and task-specific adaptors."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces GOAL, a generalist machine learning model for solving various combinatorial optimization problems. It uses a single backbone model with lightweight problem-specific adapters and shows strong performance in solving multiple optimization tasks. The model also exhibits efficient transfer learning capabilities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Contributions may be overclaimed. Authors claim that it is ``the first model that solves such a variety of CO problems.'' As far as I know, [1] proposes a multi-task solver where encoders for different tasks share a common part, just like GOAL. So GOAL might not be the first one. Many other works, e.g. [2], also have a similar idea. So, the idea of a multi-task solver may not be that new.\n\n[1] Efficient training of multi-task combinatorial neural solver with multi-armed bandits. arXiv preprint arXiv:2305.06361, 2023.\n\n[2] Multi-Task Learning for Routing Problem with Cross-Problem Zero-Shot Generalization\n\n- The running efficiency is not competitive with other methods, as shown by Tables 1 and 2. On ATSP and CVRP, it may take several times the running time of other methods (AM, MatNet, MDAM, etc.)\n\n- There is still a significant gap between GOAL and the optimal solutions, as shown in Tables 1 and 2. Though it is understandable that some heuristics can be simultaneously fast and effective, the gap between GOAL and the best solver may make GOAL useless for solving real-world problems. Such a gap may also suggest that the time of ``general solvers'' has not yet arrived, and more effort should be devoted to problem-specific models.\n\n- Though the method empirically works, there is still a lack of the necessary theoretical explanation of why the method has the ability to generalize to larger-scale instances while other baselines do not."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a generalist model capable of efficiently solving multiple COPs and which can be fine-tuned to solve new COPs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024goal,\ntitle={{GOAL}: A Generalist Combinatorial Optimization Agent Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z2z9suDRjw},\nnote={under review}\n}"
},
"abstract": {
"value": "Machine Learning-based heuristics have recently shown impressive performance in solving a variety of hard combinatorial optimization problems (COPs). However they generally rely on a separate neural model, specialized and trained for each single problem. Any variation of a problem requires adjustment of its model and re-training from scratch. In this paper, we propose GOAL (for Generalist combinatorial Optimization Agent Learning), a generalist model capable of efficiently solving multiple COPs and which can be fine-tuned to solve new COPs. GOAL consists of a single backbone plus light-weight problem-specific adapters for input and output processing. The backbone is based on a new form of mixed-attention blocks which allows to handle problems defined on graphs with arbitrary combinations of node, edge and instance-level features. Additionally, problems which involve heterogeneous types of nodes or edges are handled through a novel multi-type transformer architecture, where the attention blocks are duplicated to attend the meaningful combinations of types while relying on the same shared parameters. We train GOAL on a set of routing, scheduling and classic graph problems and show that it is only slightly inferior to the specialized baselines while being the first multi-task model that solves a wide range of COPs. Finally we showcase the strong transfer learning capacity of GOAL by fine-tuning it on several new problems. Our code is available at https://anonymous.4open.science/r/GOAL-10/ ."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"neural combinatorial optimization",
"generalist models",
"transfer learning",
"fine tuning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fb633078528296ef590df287b7bdfa518ea22231.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "GOAL: A Generalist Combinatorial Optimization Agent Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z3DMFpaP6m | On the Entropy of Language Models in Getting Semantic from Tokens | main | Active | LLM evaluation | foundation or frontier models, including LLMs | 1;3;5 | 3;3;2 | 1;1;3 | 1;2;2 | 1;1;2 | 3 | 2.666667 | 1.666667 | 1.666667 | 1.333333 | -0.866025 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. What is the motivation for the estimation function in Section 3.2? Since the experiments primarily use open-weight models, we have access to the hidden states. These are discrete distributions and we can compute/approximate KL divergence more precisely than learning a separate estimator, f. And a related question is why we should measure the IE at each layer of the Transformer instead of only the final layer(s).\n\n2. I’m also confused by the batch size choice. Is the batch size chosen to be the full size of the data, or is it 300K? Or am I missing something about number of samples vs. batch size?\n\n3. For ICL, the comma is always at even positions for every example in the dataset. Then wouldn’t the macro-level MI (first term) already be small or near 0; the 2nd term will also be 0, and so the IE for that token would be basically 0. Is this what is actually happening, or is the macro MI nonzero?\n\n4. And maybe I’m misunderstanding the micro variables: when calculating the micro variables, are padding tokens used? For example, in L220, for $h_{l}^{mi_1}$, is there a padding/zero token before the word ‘language’ or not?\n\n5. I assume by $l$ and *block*, this is referring to layers of the transformer. The word \"layer\" was never used, so please correct me if this is the wrong understanding.\n\nMinor edits suggestions:\n\nIn Definition 1: Define $h_{l}^{mi\\_t}$ earlier -- it isn't defined until the next page.\n\nL227: “Notably, We” -> “Notably, we”\n\nL364: “Moreover, We” -> “Moreover, we”\n\nL242: “how confidence” -> “the confidence”\n\nL753: “ICl” -> “ICL”\n\nFig 3: “divise” -> “division” or “divide”\n\nThe title, and in general, the use of the word \"semantic\" (adjective) and \"semantics\" (noun) needs to be more careful. It should probably be \"semantics\" in the title."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper proposes a formalism of “macro” and “micro” variables to describe the notion of IE in LLMs. This formalism corresponds with their supervenience hypothesis, which in turn is inspired by the term “emergence” from philosophy/information theory.\n\n2. In the experiments, the calculated IE value appears to correlate positively with model size and task accuracy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a method for determining “information emergence” (IE) that is roughly focused on entropy reduction upon observing a new token. This is introduced as an idea where a model which better understands the semantics of tokens should have higher entropy reduction than smaller models. This can be estimated by calculating the difference between mutual information between the token hidden state distributions between layers for the full sequence and for individual tokens. Using the framing of IE, the authors describe its implementation for LLMs and follow-up findings on GPT-2, GEMMA, and OpenLlama, like the correlation between IE, model size, and accuracy."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The experimental methodology is not explained in a way I can follow. I do not understand the connection between Section 3.3 and Section 4.3. Semantic faithfulness is not defined in this paper or included as a citation, nor is semantic sensitivity. Big-Bench was not mentioned as a dataset until Section 4, and seems to no longer be affected by the token position problem?\n\n2. The findings in the introduction are not strongly supported by the experiments with sufficient evidence. Of the 3 main interesting findings: \n\na.) Finding 1: Sec 5.1, is somewhat supported, that IE increases token-by-token in natural texts, but the ICL setting is too contrived because the example itself is only a single token, and the other token (comma) is not information-heavy. A equally plausible explanation is that IE does not increase on punctuation (which is semantically less meaningful).; \n\nb), Finding 2: Sec 5.2 Is it surprising that the variance increases when IE increases? To me, this suggests that IE is invariant to scaling, and not that it corresponds to hallucination. IE actually goes down as the number of shots increases, and so I expect the SD is going down too. Even looking closely at Fig 4, it is not clear to me that SD is increasing as shots increases (e.g. 4(d)) has low SDs throughout, 4(a) has SD only increase at the 5th/6th/7th token, but it looks relatively stable as a fraction of E(t) for 4(c)\n\nc) Finding 3: Sec 5.3 This result is interesting as a means for detecting LLM-generated and human-generated text, and perhaps can be the main focus of the 3 findings. However the results are still a bit limited because the methodology is unclear. Questions like how many samples (questions) were taken from OpenHermes? How were these answers formatted into 8 tokens (what if they were too long)? How were the LLMs prompted to answer the questions?\n\n\n3. 
I’m confused why there needs to be a distinction between “emergence” as defined in this paper and “emergent abilities” as used for LLMs, and neither this paper nor the cited papers appear to explain this. So, what precisely is the definition of emergence and LLM emergence, and why are they different? \n\nSome prominent citations (like Wei et al., 2022) are not used correctly, while 3 of the 4 main citations consistently used to support claims about emergence (including how it is defined) do not make strong claims about emergence in models (Liu et al., 2024 – about prompts for power emergency plans; Yu & Dong 2022 – about emergence of complex language learning in L2 (human) students; and Srivastava et al., 2022 – the dataset paper for BIG-Bench. *Note: this is based on reading the abstract as some of these papers are not open-access*). Many of the other citations in the paper, however, look reasonable.\n\n4. As acknowledged by the paper, the limitation of the analysis to specific sequences of equal length and similar token positions is a major obstacle to using IE at all. Still, it should be possible to create figures like Fig. 4 which go from 0... max_seq_length; why can that not be done in this work?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- L077: Consider revising these bibtex entries: GEMMA (Team et al., 2024). This clearly should be \"(GEMMA Team, 2024)\".\n - L082: Please elaborate what is \"ICL-style texts\" here.\n - L151: With embeddings as inputs and outputs, a Transformer is deterministic, thus not even stochastic. Please explain.\n - L160: It is not straightforward to sample sequences from BERT et al., so technically these are not language models (defined as distribution over sequences).\n - L208: so $h_l^{\\rm ma}$ is the last token?\n - Figure 2: \"Increasement\" => \"increase\".\n - \"Texts generated from LLMs vs humans\": Since you are eliciting *new* responses from humans, these are unseen data for LLMs. The phenomena you observed may not be true if you test LLM with human-generated text in the training set of LLMs. \n - \"LLM-generated text exhibited greater IE than human text\": Clearly, LLM-generated text exhibits lower perplexity than newly elicited human text. This experiment does not show that IE is superior than existing measures."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "N/A"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a new metric for measuring the capability of modeling semantics of LLMs. The metric concerns the reduction of entropy if conditioned on a longer sequence than a single token. The authors consider the process of generation in LLMs a Markov chains, and measures the mutual information of token embeddings between different layers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The argument of the paper is hard to understand. The authors claimed to mathematically model the entropy of the tokens, but the log-probability of each token and the perplexity metric exactly contains the entropy of the tokens. I do not understand the argument behind the authors' proposal.\n - The authors did not properly introduce their notion of \"semantics\". In prior research, semantics can be understood under denotational, operational, or distributional contexts. It should be discussed under which setting the author is situating their research.\n - The token embedding of layer $(i+1)$ given layer $i$ is a deterministic process, but the authors cast this as a Markov stochastic process. Please elaborate on why this is stochastic."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Could you please provide some straightforward baselines and compare them with your metric?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The concept of Information Emergence (IE) is novel and provides a fresh perspective on evaluating the semantic understanding of LLMs. 2. The paper is methodologically rigorous, with a clear mathematical formulation of IE and a practical estimation algorithm based on mutual information. \n3. The proposed IE metric has broad applications, including detecting hallucinations and distinguishing between human and LLM-generated texts."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To quantify the semantic understanding capability of LLMs, this paper introduces a novel metric called Information Emergence, IE for short . IE is defined as the difference in entropy reduction between individual tokens and entire sequences representations within transformer models. Authors propose a mathematical formalism and a practical estimation algorithm to compute IE, which is validated through comprehensive experiments across various scenarios. The paper demonstrates that IE correlates with specific hallucination behaviors and can distinguish between human-written and model-generated texts."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The motivation is not clear, which is my main concern. I'm not convinced by the introduction on why we need a metric to quantify the behavior of finer-grained tokens, and why other methods fails. \n2. Need more discussion on related works and baseline comparison. And I hope to see more baseline comparisons (even designing a straightforward metric or adapting some other approaches for this problem).\n3. While the experiments are comprehensive, the paper could benefit from a broader range of model sizes and types. E.g., at least a model >=7B.\n4. The method need for a large number of samples to ensure the accuracy of joint and marginal distributions."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We designed a new metric to quantify the entropy reduction between semantic level and token level to represent the capability of capturing semantic from tokens"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024on,\ntitle={On the Entropy of Language Models in Getting Semantic from Tokens},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z3DMFpaP6m},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) are widely recognized for their exceptional capacity to capture semantic meaning. Yet, there remains no established metric to quantify this capability. In this work, we introduce a quantitative metric, Information Emergence (IE), designed to measure LLMs’ ability to extract semantics from input tokens. We formalize “semantics” as the meaningful information abstracted from a sequence of tokens and, leveraging information theory, quantify this through comparing the reduction in entropy observed for a sequence of tokens (macro-level) and individual tokens (micro-level). To achieve this, we design a light-weight estimator to compute the mutual information at both micro and macro levels for each transformer layer, which is agnostic to different tasks and language model architectures. We apply IE in both synthetic in-context learning (ICL) scenarios and natural sentence contexts. Experiments show a high-level informativeness of our metric reflected in semantic faithfulness, sensitivity, and connection with emergence. In addition, we highlight some interesting findings: 1) IE explains why ICL offers clearer semantics and benefits compared to natural text through changes\nin entropy. 2) We could associate certain hallucination phenomenon with increased variance in IE. 3) IE can effectively differentiate between human-written and LLM generated text, proving especially useful for extremely large and closed-source language models. Our codes are available at: https://anonymous.4open.science/r/Emergence/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM evaluation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/ae9797e276cb33805841888fdb5538909cd42a35.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "On the Entropy of Language Models in Getting Semantic from Tokens"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z3KmG5JIN4 | CodeCloak: A Method for Mitigating Code Leakage by LLM Code Assistants | main | Active | privacy;DRL;LLM;code assistant;generative models | alignment, fairness, safety, privacy, and societal considerations | 3;5;5 | 3;3;3 | 2;2;2 | 2;2;2 | 2;2;1 | 4.333333 | 3 | 2 | 2 | 1.666667 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See the weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposal of a deep reinforcement learning agent to address code leakage in LLM-based code assistants is both timely and innovative, aligning well with current industry needs.\n2. The focus on mitigating real-world risks associated with proprietary code in commercial settings is highly relevant and adds considerable value to the research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents \"CodeCloak,\" a novel method leveraging deep reinforcement learning to minimize the exposure of proprietary code in the use of LLM-based code assistants, such as StarCoder and Code Llama. By manipulating code prompts before submission, the method aims to secure proprietary code from potential leaks while maintaining the utility of the code assistant's responses. The authors demonstrate CodeCloak's effectiveness across multiple LLM models and code repositories, achieving a significant reduction in code leakage with minimal loss of suggestion relevance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major Comments:\n1. The metrics used for evaluating the preservation of code intent are not adequately justified. The reliance solely on edit distance may not effectively capture the semantic preservation of the code, which is crucial for assessing leakage risks.\n2. Table 1 is unclear as the best results are not consistently highlighted, and some indices show inferior performance compared to a random baseline, which is confusing.\n3. The concept of new leakage risks introduced by the authors lacks substantial support from prior studies. The paper fails to clarify how these risks are quantified and mitigated, particularly the risk of intent leakage, which is critical for understanding the effectiveness of CodeCloak.\n4. The criteria for code leakage are vague. The paper should clarify how it measures whether the intent behind the code has been leaked, considering that an adversary's ability to reconstruct the intent could still pose significant risks.\n\nMinor Comments:\nThere are several typographical errors that need correction to enhance clarity and professionalism. For instance, the term \"deep learning learning\" should be corrected to \"deep reinforcement learning.\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "There are no ethical concerns with this work."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "My primary concern lies in the technical contribution of this work, particularly in the modeling approach. To clarify this aspect, I would appreciate if the authors could address the following questions:\n\n1. Could the authors elaborate on the specific innovations introduced in the DRL modeling process for CodeCloak beyond existing reinforcement learning techniques? For instance, how does the architecture adapt to unique challenges in privacy preservation for code prompts?\n\n2. How does CodeCloak’s reward function balance code similarity with privacy protection, and was any tuning process developed to optimize this balance? Detailed insights into parameter selection and adjustments would be helpful (i.e., if more than a linear combination, what do you plan to change)\n\n3. Given the dependency on CodeBLEU and GPT-based metrics, did you consider alternative metrics tailored to privacy risk or sensitivity? If so, why were these not implemented, and how might future versions of CodeCloak address this gap?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Major Strengths\n\n1. Timely response to an important issue: The paper addresses the growing concern of code leakage through LLM-based code assistants, a pertinent issue given the increasing adoption of such tools in development. CodeCloak’s focus on prompt manipulation as a privacy-preserving mechanism is both timely and relevant.\n\n2. Direct and practical solution: The authors define the problem of code leakage through LLM prompts clearly and systematically approach it with DRL-based prompt manipulation. The solution, CodeCloak, is a streamlined method that operates locally without altering the assistant model, making it more practical for real-world applications.\n\nMinor Strengths\n\n1. Clear figure presentation: The figures are effectively used, especially the workflow and action selection heatmap, which illustrate the core operations of CodeCloak and the logic behind its prompt manipulations. These visuals enhance the clarity of the methodology.\n\n2. The design of developer simulator: The developer coding simulator is a thoughtful addition, simulating real-world coding behaviors like pauses, cursor movement, and typo corrections. This setup provides a controlled yet realistic environment for testing CodeCloak, making the results more relatable and grounded in practical usage.\n\n3. Multi-dimensional evaluation strategy: By employing CodeBLEU[1], GPT-based similarity measures, and user studies, the authors take a holistic approach to evaluation. This variety provides a solid foundation for assessing CodeCloak’s ability to mitigate leakage without compromising functionality, making the claims more convincing.\n\n[1] Ren S, Guo D, Lu S, et al. Codebleu: a method for automatic evaluation of code synthesis[J]. arXiv preprint arXiv:2009.10297, 2020."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents CodeCloak, a reinforcement learning-based method for mitigating code leakage in LLM-based code assistants. The approach manipulates prompts by applying various transformations to preserve the confidentiality of the developer's proprietary code while maintaining the quality of the code suggestions. Using a DRL agent, CodeCloak selectively manipulates prompts to minimize sensitive information exposure, tested across models like StarCoder and Code Llama. The evaluations focus on a coding simulator and metrics like CodeBLEU and two GPT-based similarity measures to validate the system’s effectiveness in balancing privacy and usability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Major Weaknesses\n\n1. Limited technical contribution: While CodeCloak addresses a relevant problem, its technical novelty is limited, as it primarily combines established RL methods (specifically recurrent PPO [1]) with prompt manipulation techniques. This reliance on recurrent PPO introduces challenges, such as difficulty in reward modeling and a complex training pipeline. Additionally, selecting a specific RL algorithm may restrict the generality of the approach. Expanding the RL architecture or incorporating more advanced prompt adaptation methods could enhance the contribution and broaden its applicability.\n\n2. Over-reliance on similarity for effectiveness assessment: Relying solely on CodeBLEU for code similarity may not capture the full effectiveness of the method. CodeBLEU assesses syntactic and structural similarity, which could miss nuanced privacy risks. While PrivacyGPT and SugSimGPT provide insights into leakage and suggestion quality, PrivacyGPT may not fully evaluate the sensitivity of the leaked information, and SugSimGPT focuses on relevance rather than privacy-preserving quality. Adding targeted metrics for prompt leakage or sensitivity would provide a more complete evaluation of CodeCloak’s impact.\n\n3. Inconsistent formula presentation: The mathematical expressions, especially around the reward function, could benefit from clearer notation and consistent formatting. Improving this aspect would make the technical approach more rigorous and easier to follow. 
For example:\n\n- The cumulative reward formula $G_t = \\sum_{k=0}^{\\infty} \\gamma^k R_{t+k+1}$ lacks definitions for variables like $G_t$ and $R_{t+k+1}$.\n\n- The agent’s action distribution formula is shown in the action heatmap, but it doesn’t fully explain the state-action mapping, which is essential for understanding decision-making.\n\n- In Table 2, parameters $\\lambda_1$ and $\\lambda_2$ are used to balance relevance and leakage but lack explanation on how each impacts the model’s outputs.\n\nThese can make the paper hard to follow and create confusion. Clearer notation and additional context would improve readability and technical rigor.\n\nMinor Weaknesses\n\n1. Repeated sentences and paragraphs: The paper contains multiple repetitions, particularly around the reward process, DRL modeling, and LLM choices (e.g., StarCoder[2] vs. Code Llama[3]). Streamlining these explanations could improve clarity and reduce redundancy.\n\n2. Possible issues in updating the DRL module: The paper mentions using PPO for the DRL agent but does not provide sufficient detail on how the module adapts to diverse prompt types. Expanding on how the DRL module handles varied developer prompts and how it recalibrates under different conditions would make the approach more robust and credible.\n\n[1] Pleines M, Pallasch M, Zimmer F, et al. Generalization, mayhems and limits in recurrent proximal policy optimization[J]. arXiv preprint arXiv:2205.11104, 2022.\n\n[2] Li R, Allal L B, Zi Y, et al. Starcoder: may the source be with you![J]. arXiv preprint arXiv:2305.06161, 2023.\n\n[3] Roziere B, Gehring J, Gloeckle F, et al. Code llama: Open foundation models for code[J]. arXiv preprint arXiv:2308.12950, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Why use RL in this work? What are the advantages of using RL to avoid data leakage?\n- How to define the actions in code completion, any missing actions in the defined sets? \n- What choose StarEncoder for prompt embedding? Any experiments to compare its performance with other encoders like CodeBERT?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The security problem is important and meaningful in the era of LLMs.\n- Easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed CodeCloak, a deep reinforcement learning-based technique to manipulate code prompts to mitigate the risk of code exposure. In particular, CodeCloak consists of states, actions and rewards, which are common in RL. For states, a code encoder embeds the prompt, and the position information is also included in the constructed embeddings. For actions, it defines several code manipulation actions including delete, change. For rewards, the reward function is based on the CodeBLEU metric. Some experiments are conducted to evaluate the effectiveness of the proposed technique."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The technique contributions are weak. The used RL is standard, which includes actions and rewards. \n- The evaluation is weak and misses state-of-the-art baselines to confirm the effectiveness of the proposed techniques.\n- The motivation for using RL should be strengthened.\n- Code and data are not open-sourced."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A Method for Mitigating Code Leakage by LLM Code Assistants"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024codecloak,\ntitle={CodeCloak: A Method for Mitigating Code Leakage by {LLM} Code Assistants},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z3KmG5JIN4},\nnote={under review}\n}"
},
"abstract": {
"value": "LLM-based code assistants are becoming increasingly popular among developers.\nThese tools help developers improve their coding efficiency and reduce errors by providing real-time suggestions based on the developer’s codebase. \nWhile beneficial, the use of these tools can inadvertently expose the developer’s proprietary code to the code assistant service provider during the development process. \nIn this work, we propose a method to mitigate the risk of code leakage when using LLM-based code assistants. CodeCloak is a novel deep reinforcement learning agent that manipulates the prompts before sending them to the code assistant service.\nCodeCloak aims to achieve the following two contradictory goals: (i) minimizing code leakage, while (ii) preserving relevant and useful suggestions for the developer. \nOur evaluation, employing StarCoder and Code Llama, LLM-based code assistants models, demonstrates CodeCloak’s effectiveness on a diverse set of code repositories of varying sizes, as well as its transferability across different models.\nWe also designed a method for reconstructing the developer’s original codebase from code segments sent to the code assistant service (i.e., prompts) during the development process, to thoroughly analyze code leakage risks and evaluate the effectiveness of CodeCloak under practical development scenarios."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"privacy",
"DRL",
"LLM",
"code assistant",
"generative models"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f0e392ce1cc357bead0ca592cabb728130753454.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "CodeCloak: A Method for Mitigating Code Leakage by LLM Code Assistants"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z3vplLsIve | Learn to Synthesize Compact Datasets by Matching Effects | main | Active | Deep Learning;Dataset Distillation | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 1;3;5;5 | 5;4;4;4 | 2;1;2;3 | 3;2;2;2 | 2;4;3;2 | 3.5 | 4.25 | 2 | 2.25 | 2.75 | -0.870388 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Can you provide more insights into the choice of hyperparameters for the experiments, particularly the selection of time steps?\nGiven the efficiency focus of the proposed method, could you elaborate on any specific memory or computational optimizations employed?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Originality\nThe concept of effect alignment in dataset distillation is innovative, focusing on endpoint effects rather than intermediate training states.\nTheoretical Foundation\nThe method is grounded in theory, with error approximation guarantees that lend robustness to the approach.\nExperimental Results\nThe method demonstrates strong performance in several datasets, showing robustness in handling biases."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a new method for data distillation called \"effect alignment,\" which aims to create compact datasets by matching the endpoint effects of training, instead of aligning intermediate training states. The proposed method estimates the impact of replacing real data with synthetic data, aiming to generate synthetic datasets that yield similar final model performance. Through extensive experimentation, the authors demonstrate that their method is efficient and achieves competitive accuracy, especially in bias-sensitive settings."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Scope of Experiments\nThe paper only evaluates the method on classification tasks, which might limit its applicability to other machine learning tasks such as regression.\nDataset Diversity\nThe experiments are conducted on a limited number of datasets, which raises questions about the method's generalizability to other data distributions.\nComputational Complexity\nWhile more efficient than some alternatives, the methodʼs computational cost could still be a concern for large-scale datasets or real-time applications.\n\npls add comparisons with more recent methods"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "see weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. This method is novel, which is a combination of MTT [1] and DD [2].\n2. Good writing, easy to follow.\n\n\n[1] Dataset Distillation by Matching Training Trajectories, cvpr 2022.\n\n[2] Dataset Distillation, 2018."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The method proposed In this paper is a variant of MTT (dataset distillation by matching training trajectory). Specifically, after optimizing the surrogate network on synthetic data for a few iterations, the synthetic data are optimized to let the networks' predictions be similar to the ones trained on the original dataset, unlike MTT, which chooses to minimize the difference between parameters directly."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Instead of directly matching training trajectories, this method is proposed to match the 'effect', which is measured by the differences between probability distribution predicted by models trained on synthetic data and real data. This means that naturally, this method will have worse performance than matching training trajectories (MTT) [1]. Because MTT directly minimizes the differences between parameters of models trained on synthetic data and real data, where models' predictions will be the same ideally (which is the optimization goal of the method proposed in this paper). Coinciding with this, TESLA [2] (following work of MTT), which also uses soft labels, always performs better than this method.\n\n2. I notice this method outperforms TESLA in large IPC cases, is it because this method uses the difficulty alignment trick (control matching range) proposed by DATM [3]? The author should report the hyper-parameters to improve clarity.\n\n3. What are the benefits of replacing matching parameters with 'effects'? Being more efficient? Have better generalizability? The paper only reports one comparison, I think more comprehensive comparisons can improve the quality of this paper.\n\n\n[1]. Dataset Distillation by Matching Training Trajectories, CVPR 2022.\n\n[2]. Scaling up dataset distillation to imagenet-1k with constant memory. ICML 2023.\n\n[3]. Towards lossless dataset distillation via difficulty-aligned trajectory matching. ICLR 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see the weaknesses."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The idea of effect alignment for dataset distillation is reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors point out that current dataset distillation mainly focuses on aligning the representation of synthetic data and real data through methods such as trajectory and gradient matching. However, these methods are limited by the strict alignment between the synthetic data and the real data. To overcome these limitations, the authors propose a new effect alignment, which only pursues the consistency of the final training results, to make the synthetic data set achieve similar training performance to the real data set."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It seems that this paper is not finished. There is only one incomplete table in the experiment. Although the proposed method performs better than the other methods on digital datasets with IPC=1/10, it performs worse on the other datasets. So, the experiments cannot demonstrate the superiority of the proposed method.\n2. An ablation study on the hyper-parameters is required.\n3. The summarised contributions are not matched to the method.\n4. For Eq.(6), reducing the steps of network optimization $T$ can help to close the gap of approximation error. However, a smaller $T$ usually means a sub-optimal performance of a network."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "1. The paper is well-written. I am able to fully follow the method and the experiments.\n2. The proposed formulation is novel in the literature of dataset distillation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new optimization objective for dataset distillation termed matching effects. The idea is to minimize the errors in real data between models trained by synthetic data and real data respectively. Since the raw objective is hard to computation, the authors propose an efficient approximation regarding matching the distance of gradients. Some special cases in the experiments show some advantages."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the proposed formulation is novel in the literature on dataset distillation, I cannot get the motivation of the proposed objective against the original formulation of BPTT. Specifically, BPTT wants to minimize the error on real data for models trained by synthetic data, i.e., $\\mathcal{L}(P,\\theta^*_{A})$, while the matching effect is to minimize $|\\mathcal{L}(P,\\theta^*_{D})-\\mathcal{L}(P,\\theta^*_{D-G+A})|=|\\mathcal{L}(P,\\theta^*_{D})-\\mathcal{L}(P,\\theta^*_{A})|$, given that we would like to replace all the real data with synthetic data and thus we can assume $D=G$. \n 1. I do not get why the authors believe the latter can be better than the former.\n 2. In the practical cases of dataset distillation, since the synthetic dataset is small, the error on the real data of models trained by the synthetic dataset is usually larger than that trained by the real dataset, in most cases $|\\mathcal{L}(P,\\theta^*_{D})-\\mathcal{L}(P,\\theta^*_{A})|=\\mathcal{L}(P,\\theta^*_{A})-\\mathcal{L}(P,\\theta^*_{D})$, which is equivalent to the original formulation because the term $\\mathcal{L}(P,\\theta^*_{D})$ is not relevant to optimization. From this point of view, the proposed method can conduct some rectification for the opposite case. But I am not sure if this is how the method works in fact and how we could benefit from this rectification. In summary, a more comparative analysis is necessary.\n 3. It seems that the above analysis is also applicable to the proposed approximation, which can be viewed as a variant of the previous gradient matching scheme.\n2. From Eq. 6, it seems that the approximated error is quite large because it is dominated by the farthest distance the neural network parameters move away from their initial state during training when any subset is used as the training set. The authors can provide some analysis on whether the bound is tight. If it is indeed tight, I am not sure whether it is useful in practice. 
The authors are encouraged to provide some toy experiments to illustrate this approximation.\n3. Accordingly, in the experiments, I recommend the authors provide more ablation studies to compare the proposed method with the original formulation, i.e., BPTT and gradient matching while maintaining other factors the same. Given the current results in Tab. 1, which are not evidently strong, we cannot state that the proposed method is better. More analysis is encouraged to figure out in what cases the method is superior and in what cases it is not."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Deep Learning,AIGC"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024learn,\ntitle={Learn to Synthesize Compact Datasets by Matching Effects},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z3vplLsIve},\nnote={under review}\n}"
},
"abstract": {
"value": "The emerging field of data distillation aims to compress large datasets by aligning synthetic and real data representations to create a highly informative dataset. The optimization objectives of data distillation focus on aligning representations by using process alignment methods such as trajectory and gradient matching. However, this approach is limited by the strict alignment of intermediate quantities between synthetic and real data and the mismatch between their optimization trajectories. To address these limitations, a new data distillation method called effect alignment is proposed, which aims to only push for the consistency of endpoint training results. The approach uses classification tasks to estimate the impact of replacing real training samples with synthetic data, which helps to learn a synthetic dataset that can replace the real dataset and achieve effect alignment. The method is efficient and does not require costly mechanisms, and satisfactory results have been achieved through experiments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Deep Learning",
"Dataset Distillation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/7e3b1a9d3ed0644508bab7d561b3a08d5725f600.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Learn to Synthesize Compact Datasets by Matching Effects"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z4Ho599uOL | STARJOB: DATASET FOR LLM-DRIVEN JOB SHOP SCHEDULING | main | Active | JSSP;Large Language Models;supervised dataset;Starjob;artificial intelligence;sampling method;LLM | datasets and benchmarks | 3;3;3;3 | 3;4;5;2 | 2;2;2;3 | 2;2;1;2 | 3;2;2;3 | 3 | 3.5 | 2.25 | 1.75 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Does the problem have the standard optimal answer, for example using an external solver? If so why should we use LLMs for it. The only difference is the consumed time, is it in this case?\n\n- Does the model could be applied to a problem that has a much larger scale? \n\n- How does this paper contribute to a geneneral ICLR audience, or any specical groups of ICLR community?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "**Novel Application of LLMs**: This is the first work applying LLMs to JSSP, pushing the boundaries of LLM applications beyond traditional language processing tasks. The concept of fine-tuning an LLM on a scheduling problem is innovative.\n\n**Dataset Contribution**: The introduction of the Starjob dataset, consisting of 120,000 natural language descriptions of JSSP problems and their solutions, is a valuable resource for future research. It bridges the gap between optimization tasks and natural language models.\n\n**Performance Evaluation**: The paper provides thorough comparative analyses, demonstrating that the LLM-based approach significantly outperforms traditional PDRs and improves upon neural methods in certain benchmarks. The reported improvements in average makespan gap are notable: 11.28% on the DMU benchmark and 3.29% on the Taillard benchmark.\n\n**Interpretability of Data**: The transformation of matrix-based JSSP data into human-readable natural language format for LLM training is a clever approach that enhances the model’s interpretability and generalization ability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Starjob, a dataset designed to fine-tune large language models (LLMs) for solving the Job Shop Scheduling Problem (JSSP). JSSP is a complex optimization task requiring efficient allocation of jobs to machines while minimizing the makespan (total processing time). The authors demonstrate the potential of LLMs in scheduling, specifically by fine-tuning the LLaMA 8B model on their dataset using LoRA (Low-Rank Adaptation). Their LLM-based scheduling approach is benchmarked against priority dispatching rules (PDRs) and a neural method (L2D), showing superior performance in reducing the makespan. The paper presents a novel application of LLMs for end-to-end scheduling in JSSP, with the potential to exceed traditional and neural approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Computational Complexity**: The fine-tuning and inference stages are computationally intensive, requiring significant GPU resources (30GB) and long training times (70 hours for one epoch). This limits the accessibility and scalability of the approach, particularly for larger JSSP instances.\n\n**Generalization Concerns**: The model is only for JSSP. I do not know what the general audience could learn anything from this paper. It seems someone could also train new models for many other indivisual problems. What is the specicial part of JSSP to a general ICLR audience?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. It would be interesting to see the comparison between not fine-tuned Llama and the Llama fine-tuned with the dataset proposed in the paper. How much is the improvement by using the proposed dataset?\n2. I would like to see the training curves.\n3. Sampling is not optimal for handling hallucinations (i.e., infeasible solutions). Do you have better ways?\n4. How are 120K training data samples distributed across problem sizes?\n5. I would like to see the generalisation of the method, e.g., training on small sizes and testing directly on large sizes."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. It is interesting to see an LLM finetuned with JSSP data represented in natural language actually has better performance than a neural-based solver (L2D).\n2. The paper is organised in a clear structure that is easy to follow. The intuition and the method is easy to understand even for readers outside the field of combinatorial optimization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a dataset for training LLM to solve traditional job shop problems (JSSP) containing 120K data samples. LLaMA 3.1 8B is chosen as the supervised-finetuned LLM with RSLoRA and 4-bit quantisation techniques for saving memory. The whole idea of the paper is straightforward. The JSSP instance is represented by natural language. Since the LLM is prone to generate infeasible solutions, the paper uses sampling to get feasible solutions."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The evaluation of the proposed method needs to be stronger. The baselines are relatively simple: mainly are dispatching rules and a neural-based method surpassed by many existing methods\n2. LLM is prone to suffering from hallucinations. Therefore, not feasible solutions can be guaranteed at all time. This is the main drawback of using LLM for solving CoP problems.\n3. The size of the evaluation is too small, e.g., in L2D, the largest size is 100 x 20.\n4. No running time cost comparison is given for the proposed method and baselines."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1 \"At inferce max seq length = 20000 is used and sampling strategy (do sample = True) with the default hyper-parameters and with num return sequences = 10.\" The hyperparameter definitions were not elaborated. What is their meaning? \n\n2 a. How are proposed techniques used to tackle larger JSSPs? b. How do the trained LLM apply to JSSPs described by texts differing from the format in the paper?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "This paper investigates the potential of Large Language Models (LLMs) for addressing JSSP. The trial is good to exhibit the difficulty to finetune an LLM for a reasoning task, JSSP that seems quite hard by LLMs but can be done efficiently by heuristics and L2D."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the potential of Large Language Models (LLMs) for addressing JSSP. To generate labels to train LLM, authors employed Google’s OR-Tools in 300s to collect feasible solutions. Lora adapted LLM with the collected solutions and JSSP problem descriptions. The LLM achieved performance better than PDRs and L2D in TAI, DMU Dataset but suffers long inference time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Motivation of applying LLM to a simple task is not reasonable. If a supervised dataset is accessible, it’s more reasonable to train a neural network for the task, rather than forcibly converting the problem in natural language and finally parse solutions back. It takes too much extra time in inference and training. LLM is good at aligning task descriptions so that different LLM foundation models were developed for downstream tasks. The trained LLM only applicable to JSSP does not make sense to me. That is, training an extremely heavy model for a single task does not deserve the effort put in. Section 6.1 parsing procedure didn’t harmonize LLM, which introduced much heuristics.\n\nRelated work missed too much recent work of DRL techniques to tackle JSSP. L2D is definitely not a SOTA model currently. Comparison to more recent work is suggested. LLMs for optimization work are missing. The solved problems are small and not practical. Inference time is not reported in the paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would be curious to understand how the method compares to non-neural approaches, both, in terms of achieved accuracy as well as computational cost of LLM based JSSP, other neural JSSP, and non neural approaches.\n\nDo you have a feeling on how much of the fine-tuning is for learning the representation of the problem, vs really improving the problem solving capabilities. Have you experimented with other approaches, e.g. prompt engineering?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Proper evaluation of LLMs is an open problem, specifically when it comes to reasoning. There are multiple angles: a) and most of the traditional approaches to assess NLP models are not good evaluation metrics/criteria to assess proper reasoning b) models often have been trained on published benchmarking datasets, and there is a lack of problem diversity.\n\nThis paper introduces a new problem domain into LLM evaluation that requires proper reasoning/optimization, and has objective and quantifiable target outcomes that are clearly separated from style of the generated output."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "StarJob assess how well LLMs can perform the task of Job Shop Scheduling. The authors generate dataset by converting an existing benchmark (Tai and DMU) to an LLM readable format, fine-tune a LLama 8B model on this dataset and demonstrate, that the LLM can perform the JSSP task reasonably well compared to other neural approaches after fine-tuning (at least for a subset of the benchmarking dataset)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Limited novelty. The papers main contributions in the generation of a JSSP dataset for LLM evaluation including fine-tuning a 8B LLama model. This overall feels like a narrow contribution. For an evaluation paper highlighting capabilities of LLMs, I would have expected to see a more comprehensive evaluation of JSSP and related problems. For a method paper, I would have expected to see more novelty rather than just fine-tuning a single LLM (or better SLM as the model used is fairly small)\n\nI would like to at least some of the following additions:\n1. Assessment over a larger range of models to contrast their capabilities, e.g. other SLMs, proper LLMs such as GPT models (in this case only via prompt engineering, not fine-tuning), etc. \n2. I have reservations how this approach would scale to larger JSSP problem sizes. Evaluation is only performed over subset of the available benchmark datasets. I would like to see at least some analysis and discussion on scaling behaviour with JSSP problem complexity. The authors list this under limitations.\n3. A wider range of reasoning tasks in the job scheduling domain."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the very first supervised dataset specifically designed to train LLMs for JSSP. Surprisingly, our findings demonstrate that LLM-based scheduling can achieve performance comparable to other neural approaches."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024starjob,\ntitle={{STARJOB}: {DATASET} {FOR} {LLM}-{DRIVEN} {JOB} {SHOP} {SCHEDULING}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z4Ho599uOL},\nnote={under review}\n}"
},
"abstract": {
"value": "The Job Shop Scheduling Problem (JSSP) presents a significant challenge in opti-\nmizing production processes. This problem requires efficient allocation of jobs to\na limited number of machines while minimizing total processing time (makespan).\nAlthough recent advancements in artificial intelligence have produced promising\nsolutions, such as reinforcement learning and graph neural networks, this paper\ninvestigates the potential of Large Language Models (LLMs) for addressing JSSP.\nWe introduce the first supervised 120k dataset called Starjob specifically designed\nto train LLMs for JSSP and we subsequently fintune the LLaMA 8B model on\nthis dataset using Lora. We compare the average makespan gap of our end-to-\nend LLM-based scheduling method with that of the most widely used priority\ndispatching rules (PDRs) and neural methods such as L2D. Surprisingly, our find-\nings indicate that LLM-based scheduling not only surpasses traditional PDRs but\nalso achieves on average 11.28% on DMU and 3.29% gap improvement on the\nTailard benchmarks compared to the state-of-the-art L2D method."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"JSSP",
"Large Language Models",
"supervised dataset",
"Starjob",
"artificial intelligence",
"sampling method",
"LLM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/9fb63dd43bbfdf0850aab7d5263b62c2cd0363d2.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/eeae09e989a0d3fd987f6d95d460116f08ece9b8.pdf"
},
"title": {
"value": "STARJOB: DATASET FOR LLM-DRIVEN JOB SHOP SCHEDULING"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z4bfNsrum4 | Decoding Generalization from Memorization in Deep Neural Networks | main | Active | Generalization;Memorization | other topics in machine learning (i.e., none of the above) | 1;3;3;6;6 | 4;3;4;4;4 | 1;1;2;3;3 | 1;2;2;3;2 | 1;2;3;2;3 | 3.8 | 3.8 | 2 | 2 | 2.2 | 0.206284 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "See weakness."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "Very comprehensive experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper investigates the generalization and memorization phenomena in training overparameterized models with the presence of label noise. The authors propose MASC, which matches the sample representation angle with each class's representation primary subspace to determine the sample's possible true class. And they state the method can decouple generalization from overfitted network learned from noisy labels."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of literature: First, the paper lacks related works that don't provide pictures of previous work, the discussion of previous work only appears in the first paragraph of the introduction and most papers are focused on experiments. While there are many theoretical papers trying to understand the problem[1][2].\n2. Novelty: The entire Sec.3 tries to convey the idea: that the top component of the corrupted model still contains the class information to some degree. That is not a surprise since [1][2] both indicate that the model first learns the primary eigenspace that contains the correct label information and only learns and memorizes the label noise slowly in the later stage. The proposed method, which tries to cut off the representation space and only keep the main component using PCA, is not that novel because it is also the top eigenspace. Although learning the label noise reduces the energy of the clean label subspace that was learned at the beginning of training, it is not a surprise that the top eigenspace still contains meaningful information.\n3. Novelty: Sec.4 tries to convey the idea: that the MASC built on a corrupted model using the correct label is a good classifier to reflect the true class distribution. However, the MASC is just a clustering algorithm, where we use the PCA to get a class vector and then use angle (cosine similarity) to cluster the example. It even somehow works with the original picture. Since the model already established some representation while learning the clean space, it is expected that it can perform well.\n\n4. Writing: The paper is hard to read and follow due to its dense language, complex sentence structures, and grammar errors. For example, ln 76-96 has multiple dense complex sentences and creates a hinge for readers to understand the main contribution. Ln 304-318 are also hard to follow. 
The Methodology part only verbally describes the algorithm as well as the terminology part, it took me much longer time to understand the algorithm. For grammar errors, ln 305 we wanted to -> want, etc. Also, the table elements are not well separated in Table 1, Fig 1,2,3 exceeds the page margin.\n\n[1] Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks.\n[2] When and how epochwise double descent happens."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "None."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I would like the authors to respond to points I listed under weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Understanding the relationship between generalization and memorization is an important and worthy area of study. \n- Good empirical breadth, with the analysis being applied across 5 standard datasets and three different architectures.\n- The authors seem reasonably well versed in at least some of the vast literature on the topic of memorization and generalization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to investigate the sometimes confusing relationship between neural network generalization and memorization. To this end, the authors propose a method of analysis they call Minimum Angle Subspace Classifier (MASC) which is a kind of combination between nearest neighbour classifier and a dimensional reduction method: it's a sort-of nearest subspace classifier with distance defined via the angle. Using MASC as their probe they find that the internal representations of the neural networks are able to generalize significantly better than the neural networks themselves."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I believe there is a methodological issue with the MASC based analysis. The original neural networks are trained to either 500 epochs or 99% - 100% percent accuracy on the training set. This means that the models are likely to be overfit on the training data, especially when training with corrupted data. This may well be the intention of the authors, but it has implications for their conclusions. In their analysis, the authors take a representation drawn from different layers in the architecture and build an alternate classifier on that representation. If the classifier is sufficiently regularized (which their MASC appears to be in at least some cases) and the representation still possesses sufficient variability across the input examples (i.e. the input data points are not collapsed on one-another), then it's not surprising that the MASC classifier is able to exceed the unregularized and overfit neural network classifier. \n\n- There is insufficient analysis of the properties of the MASC as an experimental probe. The degree to which the MASC is regularized is a very important property for the interpretation of the results. This aspect of the MASC is only sparingly discussed mainly in the appendix and it's unclear how this property globally impacts the conclusions of the paper. \n\n- There is a largely undiscussed lack of consistency across empirical results. The authors present a healthy breadth of experiments but across the main findings presented in Figs, 1, 2 and 3, the authors mainly focus on the MLP experiments and to some extent the CNN results. They largely neglect to incorporate the AlexNet results into their narrative. This is understandable, because these are often inconsistent with the pattern of results across the other datasets and architectures, but it leaves the reader questioning the generality of the stated findings.\n\n- A very important baseline or control is missing. 
The MASC needs to be applied to a randomly initialized model (for each architecture). My interpretation of section 4 is that the authors are claiming that the generalization observed using their MASC (on the true uncorrupted training data) compared to the original neural network prediction is revealing hidden but learned generalization ability of the internal representation of the neural network. But this isn't necessarily so. The MASC classifier on a random projection of the input could potentially do just as well. Indeed it seems that as the level of corruption increases, the dimensionality of the embedding is the most important determiner of MASC performance. This would likely also be predicted for a random embedding with a sufficiently regularized classifier. \n\n- The paper occasionally slides into nearly nonsensical rhetoric. For example: in line 76 the authors write: \"We ask, why it is that in models trained with shuffled labels do we have poor generalization accompanying perfect / high training accuracy.\" The obvious answer is: \"because there is little or no common structure between the training set and the test set on which generalization is evaluated. Most of the paper is more lucid than this would imply, but statements like this weakens the overall strength of the message of the paper.\n\nClarity: The paper is readable though its clarity would benefit significantly from more formal, mathematical definitions and descriptions of the different empirical probes that are used. It took me a while to decipher what the training set was for the \"MASC Accuracy on Corrupted Training\".\n\nOverall, the paper offers little novel insight into the relationship between memorization and classification. The results are largely consistent with previous findings (cited in the paper) and I do not see how this paper contributes significantly to that literature. 
Specifically, the idea that hidden layer representations can possess information about the nature of the data and/or task that is not conveyed to later layers in neural networks trained with corrupted data is not a significant contribution beyond the finding of, for example, Arpit et al (2017) and Zhang et al (2017)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I can't think of prior work having done this exact experiment, but it is very much in line with all that I know from the whole memorization vs generalization literature; and in that sense I come out of having read the paper with the feeling of not having learned anything. What is the one thing the authors think we can learn from this paper that's not in existing papers?\n\nComing back to scale, the effect is pretty minimal on AlexNet+TinyImagenet. Since this is the most \"large-scale representative\" of the tasks, it definitely begs the question of what actually happens at scale. Without training on all of ImageNet with a modern model, I wonder if something could be learned by repeating the experiment with e.g. 2x and 0.5x the number of parameters.\n\nSection 5 constructs subspaces based on noisy labels for models without noisy labels (so presumably \"no memorization\"), and shows that this can be used with MACS to correctly retrieve the noised labels in some layer in most cases. I really fail to see how this \"this supports the idea that memorization can not only coexist with generalization, but that in some cases memorization can be accompanied by superior generalization.\" I suspect this just shows that the subspace is expressive enough to separate somewhat arbitrary points, which I suspect can be explained by the occasionally high number of principal components. Conversely this may say nothing about generalization, since the underlying space is presumably a generalizing one, it just says that the overarching space of the subspace is also expressive enough to separate arbitrary points. We know this is the case from a long history of probing overparameterized deep neural networks. 
I would like the authors to expand on Section 5 and explain why they think it shows what they claim it shows.\n\nSome suggested improvements:\n- Table 1: use space between groups of 3 digits for large numbers\n- Adam is a method with a paper that should be cited\n- Figure 1: increase label size, and generally reconsider the zoom level of the plot, details are hard to see. Another tip is to hide the shared axes' labels to create more space for the figure (IIRC using pyplot just using `sharey=True` should accomplish this)\n- Figure 2 should also have lines for the \"Minimum Angle Subspace Classifier (MASC) Accuracy on Corrupted Training\" lines of Figure 1, otherwise I don't see how to compare the two properly (and support the claim that \"accuracies on the true training labels, as well as the test set are dramatically better here than with the experiments where subspaces were determined for the corrupted training data\")\n- The abstract is very long and I recommend being much much more concise. Again, what is the one thing the authors think we can learn from this paper that's not in existing papers?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper investigates important questions in a novel way. It is well written and the claims are mostly backed by evidence."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "In this paper, the authors probe the representations of DNNs trained with noised labels.\n\nTo test the hypothesis that intermediate representations \"generalize\" even if the output layer doesn't, the authors identify class-conditional subspaces in the hidden representations of each layer via PCA. They then show that these subspaces can be used to classify test points by simply choosing the subspace onto whose projection the angular distance to the point is minimal. They then show that such a classifier often has better test performance on models trained with noised labels, confirming the hypothesis. \n\nThese subspaces are themselves constructed out of noised labels, but the authors also test the case where they are not, also finding consistent better test performance.\n\nThe authors present these findings as additional evidence of the complex interplays between generalization and memorization."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper's main weakness is a fairly important lack of situating itself with respect to prior work. A number of papers have probed intermediate layers and latent representations of DNNs [e.g. 2,3], in an attempt to understand this very memorization-vs-generalization debate. It does feel to me that many of the claims made in this paper are obvious in light of this literature. While the specific empirical investigation done here is novel to me, the conclusions drawn by the authors are not. \nIt is hard to resist thinking that most readers familiar with the literature could have easily predicted the outcomes of these experiments. While validating known hypotheses is fundamental to science, it does feel like the contribution here is limited to that.\n\nThe paper's other main weakness is it's lack of scale (and proper analysis of scale), and the oddly poor performance of some models. I know this is an easy criticism to throw, but the largest model shown in the paper has 40M parameters and something like 18% test accuracy on an unperturbed training set. In contrast, a ResNet-152 from 2015 (He et al.) has a similary number of parameters and ~80% accuracy on the full ImageNet dataset, and an 11M parameter DenseNet (Abai et al) gets 60% accuracy on TinyImageNet.\n\n> As a result, memorization is generally considered antithetical to generalization \n\nI see this written a lot, but this is a demonstrably false narrative. I think it's been the pretty clear narrative even since the 2016/17 works of Zhang et al & Arpit et al (in their _Main Contributions_ section, they write: _\"DNNs learn simple patterns first, before memorizing, [..] in other words, DNN optimization is content-aware, taking advantage of patterns shared by multiple training examples\"_) that generalization and memorization are **not at odds**; i.e. deep models do both but we don't understand how much of which and to what degree they contribute to test performance. 
It's also been fairly clear since ~2017, including the work of Arpit et al, that DNNs learn in some kind of hierarchical order (or frequency-based, DNNs learn lower frequencies first [1]), even in the presence of label noise. The latter already suggests that intermediate representations should be amenable to have _general_ information extracted from them, even if the last classifying later \"overfits\".\n\n[1] Neural Networks Learn Statistics of Increasing Complexity, Nora Belrose, Quintin Pope, Lucia Quirke, Alex Troy Mallen, Xiaoli Fern, ICML 2024 \n[2] Understanding intermediate layers using linear classifier probes, Guillaume Alain, Yoshua Bengio, 2016 \n[3] On the geometry of generalization and memorization in deep neural networks, Cory Stephenson, suchismita padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung, ICLR 2021"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Given that no mild-scale experiment were presented, how does the phenomenon generalize to large scale neural networks? With ResNet50 and careful training, we can get high accuracy on CIFAR100 and ImageNet."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "The idea that overfitting happens in the later layer of the network, though not new, is interesting in itself. The authors present a concrete technique how to extract the generalization performance from those activations."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "First, the paper presents experiments where the model attains high training accuracy but low test accuracy, the classical overfitting story. Then, they show that by using the intermediate activation of the neural network, they can get reasonable generalization performance beyond the naive test accuracy. The higher-level claim is that the generalization ability is not lost during the overfitting process, that ability can be \"decoded\" via techniques like MASC."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "First of all, the technique discussed in the paper is not a practical one -- they are not intended for people to use in practice to replace more careful network architecture design, hyperparameter tuning, data curation, etc.\n\nSo the paper has to be viewed from a scientific perspective.\n- Novelty: the idea of using unsupervised technique on activation to get good generalization performance is not new. It roots in the rich literature of representation learning, which are not adequately discussed in the paper.\n- Scientific rigor: The phenomenon isn't as robust as the experiment suggests, there are several places in Figure 1 and 2 where the MASC accuracy is not higher than test accuracy. Especially on ImageNet.\n- Experiment setting: It's sometime a bad criticism to say that the authors didn't run large-scale experiments. But in this case, they propose a phenomenon that's intended to hold a cross a wide range of scales, but the largest neural network they implementated is AlexNet. Even under academic budget, the author would need to show results on ResNet (from 2015). Let alone more recent models like MobileNet and ViT to be convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See \"Weakness\" above."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "This paper studies the interesting problem of representation building when the neural network memorizes corrupted training labels. Especially when learning under completely random noises, it was unclear if the neural networks simply ignore the underlying visual patterns and build an arbitrary lookup table for those random labels, or do they still learn useful visual representations. The experiments to study this question is clearly formulated. The conclusions are supported with experiments on multiple datasets and model architectures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provide a deeper dive into the representation learning under generalization (learning with clean labels) or memorization (fitting to corrupted labels) for image classification models. The results show that even when heavily memorizing randomly corrupted labels, the intermediate layer representations could still provide non-trivial classification capabilities when probed with a simple classifier without using any extra label information. Furthermore, if the correct labels are provided post-hoc to build such probe, non-trivial accuracy can be obtained even when the original model was trained with completely random labels. This shows that even when fitting to completely random labels, the models are still learning useful visual representations for the input image, instead of arbitrarily wiring the network to build a lookup table for memorizing the random labels. This paper further show that a similar method can be used to build a \"probe\" with high accuracy for random labels when the original model was trained with correct labels."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The presentation of the paper could be improved, especially the experiment results and figures. It is a bit difficult to digest or find the most relevant information from their figures. For example, Figure 1, 2, 3 each occupies a full page, and each contains 30 sub-panels. Even if the authors would like to include all the results in the main paper, I would still recommend choosing a subset of panels that can clearly support the conclusions and highlight them (e.g. making them bigger and putting them in a separate figure), or even consider alternative visualization to present a summary of the information contained in each of the 30 panels.\n\n2. While the message in this paper is clear, it seems a bit specialized to the specifically chosen Minimum Angle Subspace Classifier (MASC). It is a bit unclear if it is the robustness of this classifier that enabled such phenomenon or does it hold for other simple classifiers as well. I think it is still interesting if it only holds for MASC, but including studies with other simple classifiers would make the results more comprehensive.\n\n3. It looks like the conclusion in this paper is biasing towards that the models learn similar representation in both memorization and generalization model, because the learned representation can be turned to do memorization or generalization when the corrupted or true labels are revealed after the representation learning. If this is indeed the message, it would be great if the paper could have some way to measure the representation similarity, which would not only further confirm the message, but might also be able to allow us to do comparative studies such as, do Convolutional Nets have a stronger bias towards learning similar representations than MLPs?\n\n4. (Minor) The MASC algorithm needs to use labels. I believe it is using the same labels as model training (e.g. corrupted labels in most cases) based on the comparison results in Section 4. 
It would be better if the paper could clearly clarify this in the methodology presentation (e.g. Section 2)."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We demonstrate that models trained using training data with shuffled labels often have significant generalization ability that can be decoded from their internals."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024decoding,\ntitle={Decoding Generalization from Memorization in Deep Neural Networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z4bfNsrum4},\nnote={under review}\n}"
},
"abstract": {
"value": "Overparameterized Deep Neural Networks that generalize well have been key to the dramatic success of Deep Learning in recent years. The reasons for their remarkable ability to generalize are not well understood yet. It has also been known that deep networks possess the ability to memorize training data, as evidenced by perfect or high training accuracies on models trained with corrupted data that have class labels shuffled to varying degrees. Concomitantly, such models are known to generalize poorly, i.e. they suffer from poor test accuracies, due to which it is thought that the act of memorizing substantially degrades the ability to generalize. It has, however, been unclear why the poor generalization that accompanies such memorization, comes about. One possibility is that in the process of training with corrupted data, the layers of the network irretrievably re-organize their representations in a manner that makes generalization difficult. The other possibility is that the network retains significant ability to generalize, but the trained network somehow “chooses” to readout in a manner that is detrimental to generalization. Here, we provide evidence for the latter possibility by demonstrating, empirically, that such models possess information in their representations for substantially improved generalization, even in the face of memorization. Furthermore, such generalization abilities can be easily decoded from the internals of the trained model, and we build a technique to do so from the outputs of specific layers of the network. In particular, we show the following: (1) For models trained using standard methods \\& datasets with corrupted training data, while the model has poor test accuracy, we can build a simple classifier with dramatically better test accuracy that uses only the model's hidden layer outputs obtained for the (corrupted) training set. (2) For the aforementioned models, if the true training class labels are known post hoc, i.e. 
after the model is trained, we can build a simple classifier, with significantly better generalization performance than in (1). This is true, in many cases, even for models where training class labels are shuffled with equal probability. This demonstrates that the layers of the network maintain representations in a manner that is amenable to straightforward generalization to a degree not previously recognized. (3) On the other hand, we asked if a model trained on the true training labels similarly retained the capability to memorize easily. Adapting our technique to this setting, we find that in a few cases, we can extract a high degree of memorization. The same classifier sometimes exhibits high test accuracy (on the true test labels), which further supports the idea that generalization can co-exist with memorization. Together, these results suggest a more nuanced view of the interplay of generalization with memorization in Deep Learning and suggest the need for further experiments and theory to better understand this phenomenon."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Generalization",
"Memorization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/fbf5074732f3a7dfe2826031bb42da7f7a95b032.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Decoding Generalization from Memorization in Deep Neural Networks"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z4rBSPep64 | DAViD: Domain Adaptive Visually-Rich Document Understanding with Synthetic Insights | main | Active | Visually-Rich Documents;Visually-Rich Document Understanding;Domain Adaption | applications to computer vision, audio, language, and other modalities | 3;3;5;5 | 3;4;4;4 | 2;3;3;3 | 3;2;3;3 | 1;2;2;2 | 4 | 3.75 | 2.75 | 2.75 | 1.75 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1.\tI want to know the size of each test set in Table 2.\n2.\tAs the datasets used in this paper is too small (only a few hundred samples), I wonder whether the proposed method works on larger datasets (more than one thousand samples), which can also be seen as low-resource scenarios."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe author proposes a joint-grained VRDU framework, which integrates fine-grained and coarse-grained document representations, leveraging pretrained models and synthetic data.\n2.\tThe author proposes a synthetic data generation workflow that generates structural and semantic annotations using off-the-shelf tools and LLMs."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a Domain Adaptive Visually-rich Document Understanding (DAViD) framework, which utilizes the synthetic data to train some parts of the model for domain adaptation, and enhances the model’s performance on low-resource document understanding tasks. Extensive experiments are made to validate the effectiveness of the proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe presentation of this paper is poor. The author introduce a lot of terms and abbreviations in Section 3 and 4. The method description is also redundant and complicated, which greatly hinders the readers from understanding this paper. Besides, I strongly suggest the author to rewrite the Introduction section, which is too redundant and repetitive. And I also recommend the author to replot Figure 1 and 3 into vector graphics, because the layout and small words are difficult to read.\n2.\tThe author should list the number of parameters for all the method to compare different method’s computation resources. The author takes the document understanding (in this case, it’s key information extraction) into several parts, and uses synthetic data to pre-train each part. In this case where the number of parameters and the amount of training data are higher than the baseline, it is obvious that the final performance is better, especially under low-resource scenarios, which I think is quite trivial.\n3.\tIn recent years, there are some MLLMs that are specially trained for document understanding tasks[1-3] and show stronger capability compared with old document analysis pre-trained models (LayoutLMv3, LiLT) and general MLLMs (Llava, Qwen-VL). I strongly suggest the author to add experiments on these MLLMs.\n\n[1] Liu, Yuliang, et al. \"Textmonkey: An ocr-free large multimodal model for understanding document.\" arXiv preprint arXiv:2403.04473 (2024).\n[2] Wei, Haoran, et al. \"Vary: Scaling up the vision vocabulary for large vision-language model.\" European Conference on Computer Vision. Springer, Cham, 2025.\n[3] Hu, Anwen, et al. \"mplug-docowl 1.5: Unified structure learning for ocr-free document understanding.\" arXiv preprint arXiv:2403.12895 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1. Line 137: what is KIE?\n2. Section 4.1: why do we need two representations? can't we have the same representations and output of two granularity? if you need two representations, why do you need joint granularity extraction?\n3. Line 188: what are standard tools?\n4. How is GDE implemented?\n5. Line 192: Simply referring the reader to Luo et al., 2022 is insufficient. It’s unrealistic to ask readers to read over a separate paper. Can you elaborate how you followed their work what you did?\n6. How does L2V work, formally?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The proposed approach seems effective and novel."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents the Domain Adaptive Visually-rich Document Understanding (DAViD) framework, designed to enhance information extraction from Visually-Rich Documents (VRDs), which typically include elements like charts, tables, and references. Traditional approaches require extensive annotated datasets, limiting their scalability due to labor-intensive manual labeling. DAViD addresses this by using machine-generated synthetic data for domain adaptation, along with fine-grained and coarse-grained document representation learning. This approach significantly reduces the dependency on manual annotations. Experiments demonstrate that DAViD effectively achieves competitive performance across domain-specific VRDU tasks with minimal annotated datasets, validating its potential as a scalable solution."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- A lot of unclear details: The most significant drawback of the paper is the poor writing that prevents the readers from appreciating the work. The paper introduces many components and creates many new terms. However many of them are unclear and not properly described. The new terms (e.g. L2V, SDS, SIT) could be better explained with figures (potentially Figure 1). But there was no illustration. See the questions below.\n\n- Insufficient baseline comparison: The author should consider two additional baselines [1, 2]: \n\n\n[1] Kim et al., 2022, OCR-free Document Understanding Transformer.\n\n[2] Lee et al., 2022, Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Regarding the comparison with LLaMa3: The paper mentions that the DAViD framework performs well on specific domain visual rich document understanding (VRDU) tasks, but it does not mention a comparison with existing advanced open-source large language models such as LLaMa3. Could the authors provide a performance comparison between DAViD and LLaMa3 on the same dataset? This would help readers understand the relative position of DAViD in the current research field. \n\n2. Regarding the comparison with TextMonkey and DocOWL1.5: The paper does not mention comparative experiments with existing document-specialized multimodal large models such as TextMonkey and DocOWL1.5. Do the authors have plans or have already conducted comparisons with these models? Especially in the aspect of specific domain document understanding, these comparative results would be crucial for assessing the practical application potential of DAViD. \n\n3. Regarding the generalization capability of the model: The DAViD framework focuses on efficient adaptability in specific domain VRDU tasks. Has the experiment considered the generalization capability across different types and complexities of documents?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. An innovative framework DAViD has been proposed for domain-adaptive document understanding, effectively utilizing synthetic data to reduce reliance on manual annotations.\n2. By combining fine-grained and coarse-grained joint representation learning with large models, the performance and robustness of the model have been enhanced."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a DAViD framework designed to achieve domain-adaptive rich visual document understanding (VRDU) by utilizing machine-generated synthetic data. The framework combines fine-grained and coarse-grained document representation learning and leverages synthetic annotations to reduce dependence on manual labeling. By utilizing pre-trained models and synthetic data, DAViD can achieve competitive performance even with minimal labeled datasets. Extensive experiments have validated the effectiveness of DAViD, demonstrating its efficient adaptability in specific domain VRDU tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe paper did not conduct experiments on advanced open-source large models, which limits a comprehensive assessment of the DAViD framework's performance across different types of models.\n2.\tThe paper may not have discussed in detail the potential biases in the synthetic data generation process and how these biases could affect model performance.\n3.\tThe paper focuses on solving the understanding problems of special domain documents, so the data selection should pay more attention to the diversity and professionalism of the field. The paper's discussion on the breadth of the data selected may not have been sufficiently explored."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe paper is well-motivated, and the proposed method can successfully infuse the domain-specific knowledge into existing method.\n2.\tThe proposed method can be applied to different existing model, such as LayoutLMv3 and LXMERT, thus having generality to some extent.\n3.\tThe experiment is well-organized, the paper conducts the extensive experiment to prove the effectiveness of proposed method and also compared with several large language model (LLM)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper aims at solve the problem of labor consumption of well-annotated data for specific domain in visually-rich document understanding task. The paper proposed a joint-grained framework (token-level and entity level). To solve the lack of well-annotated data in specific domain, the paper proposes a method to utilize the LLM to tag the raw data. To adapt the method to unseen target domain as well as mitigate the gap between the LLM-labeled data and human-labeled data, the paper proposes a method to infuse the domain specific knowledge into model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have following concerns that need to be explained:\n1.\tWhen encountering a new domain, the proposed method requires collecting raw data, tagging a small portion manually, and using a large language model (LLM) to tag the majority. Then, the proposed pipeline is applied for training. This complex process may impede real application.\n2.\tThe experiment does not test the method's performance in the original domain after training on new domain data, which is crucial to demonstrate the maintenance of knowledge from other domains.\n3.\tIn Table 1, the experimental setup lacks clarity. It is unclear if \"Full Training Set\" refers to D_n and D_g. If so, the baseline method appears to outperform the proposed method, reducing the contribution of the method. Additionally, the evaluation metrics for Tables 1, 2, and 3 are not mentioned, causing confusion.\n4.\tIn Section 6.2, the term \"zero-shot\" testing is misleading. The method uses some new domain data (a small portion labeled manually) to train the model, enhancing its ability in this domain. This is more suitable to be described as a few-shot application. Furthermore, the baseline settings in Table 3 are unexplained (likely LXMERT and LayoutLMv3) and the extreme low performance of the baseline method is also confusing. Domain knowledge is unlikely to significantly influence the performance, as the trained model should possess basic knowledge of the document understanding (DU) task. Clarification of the experimental setup is required to avoid confusion.\n5.\tThe full name of L2V is absent from the paper. Given that an ablation study of L2V is conducted, indicating its importance, a detailed explanation is needed.\nAdditional readability issues are present: \n1.\tMany figures, especially Figures 1, 2, and 10, are blurred. \n2.\tSection 5.1 mentions handwritten (H), digital (D), and printed (P), but uses at least two abbreviations ((\\mathcal{H}), (\\mathcal{D}), (\\mathcal{P})). 
Are these intended to have different meanings? \n3.\tIn Table 2, it is unclear what FST stands for. Is it a typo or an unexplained technique?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024david,\ntitle={{DAV}iD: Domain Adaptive Visually-Rich Document Understanding with Synthetic Insights},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z4rBSPep64},\nnote={under review}\n}"
},
"abstract": {
"value": "Visually-Rich Documents (VRDs), encompassing elements like charts, tables, and references, convey complex information across various fields. However, extracting information from these rich documents is labor-intensive, especially given their inconsistent formats and domain-specific requirements. While pretrained models for VRD Understanding have progressed, their reliance on large, annotated datasets limits scalability. This paper introduces the Domain Adaptive Visually-rich Document Understanding (DAViD) framework, which utilises machine-generated synthetic data for domain adaptation. DAViD integrates fine-grained and coarse-grained document representation learning and employs synthetic annotations to reduce the need for costly manual labelling. By leveraging pretrained models and synthetic data, DAViD achieves competitive performance with minimal annotated datasets. Extensive experiments validate DAViD’s effectiveness, demonstrating its ability to efficiently adapt to domain-specific VRDU tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Visually-Rich Documents",
"Visually-Rich Document Understanding",
"Domain Adaption"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2f809ae6c89019497e5a504663cf079ade9b0c04.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/61e845f93404ace77e99f289e35f68281a3cd572.zip"
},
"title": {
"value": "DAViD: Domain Adaptive Visually-Rich Document Understanding with Synthetic Insights"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z5Th95xtBW | Hierarchical Frequency Tagging Probe (HFTP): A Unified Approach to Investigate Syntactic Structure Representations in Large Language Models and the Human Brain | main | Desk Reject | Syntactic structure probe;Large language models;stereo-electroencephalography;Syntactic representation alignment | interpretability and explainable AI | Jingmin An;Yilong Song;Ruolin Yang;Nai Ding;Lingxi Lu;Yuxuan Wang;Wei Wang;Chu Zhuang;Qian Wang;Fang Fang | ~Jingmin_An2;~Yilong_Song1;~Ruolin_Yang2;~Nai_Ding1;~Lingxi_Lu2;~Yuxuan_Wang6;~Wei_Wang4;~Chu_Zhuang1;~Qian_Wang39;~Fang_Fang1 | 0 | 0 | 0 | 0 | 0 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": {
"value": "The paper reveals the author identities at line 245, where it references prior work as performed by the authors."
},
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": {
"value": "Submission Desk Rejected by Program Chairs"
},
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce the Hierarchical Frequency Tagging Probe (HFTP), a method to explore and align syntactic processing representations in both large language models (LLMs) and the human brain."
},
"_bibtex": {
"value": "@misc{\nan2024hierarchical,\ntitle={Hierarchical Frequency Tagging Probe ({HFTP}): A Unified Approach to Investigate Syntactic Structure Representations in Large Language Models and the Human Brain},\nauthor={Jingmin An and Yilong Song and Ruolin Yang and Nai Ding and Lingxi Lu and Yuxuan Wang and Wei Wang and Chu Zhuang and Qian Wang and Fang Fang},\nyear={2024},\nurl={https://openreview.net/forum?id=z5Th95xtBW}\n}"
},
"abstract": {
"value": "Large Language Models (LLMs) have shown impressive capabilities across a range of language tasks. However, questions remain about whether LLMs effectively encode linguistic structures such as phrases and sentences and how closely these representations align with those in the human brain. Here, we introduce the Hierarchical Frequency Tagging Probe (HFTP) to probe the phrase and sentence representations in LLMs and the human brain in a unified manner. HFTP utilizes frequency-domain analysis to identify which LLM computational modules (multilayer perceptron (MLP) neurons) or human cortical areas encode phrases or sentences. Human brain activity is recorded using intracranial electrodes. The results revealed distinct sensitivities to sentences and phrases across various layers of LLMs (including GPT-2, Gemma, Llama 2, Llama 3.1, and GLM-4) and across different regions of the human brain. Notably, while LLMs tend to process sentences and phrases within similar layers, the human brain engages distinct regions to process these two syntactic levels. Additionally, representational similarity analysis (RSA) shows that the syntactic representations of all five LLMs are more aligned with neural representations in the left hemisphere—the dominant hemisphere for language processing. Among the LLMs, GPT-2 and Llama 2 show the greatest similarity to human brain syntactic representations, while Llama 3.1 demonstrates a weaker resemblance. Overall, our findings provide deeper insights into syntactic processing in LLMs and highlight the effectiveness of HFTP as a versatile tool for detecting syntactic structures across diverse LLM architectures and parameters, as well as in parallel analyses of human brains and LLMs, thereby bridging computational linguistics and cognitive neuroscience."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Jingmin_An2",
"~Yilong_Song1",
"~Ruolin_Yang2",
"~Nai_Ding1",
"~Lingxi_Lu2",
"~Yuxuan_Wang6",
"~Wei_Wang4",
"~Chu_Zhuang1",
"~Qian_Wang39",
"~Fang_Fang1"
]
},
"authors": {
"value": [
"Jingmin An",
"Yilong Song",
"Ruolin Yang",
"Nai Ding",
"Lingxi Lu",
"Yuxuan Wang",
"Wei Wang",
"Chu Zhuang",
"Qian Wang",
"Fang Fang"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Syntactic structure probe",
"Large language models",
"stereo-electroencephalography",
"Syntactic representation alignment"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "an|hierarchical_frequency_tagging_probe_hftp_a_unified_approach_to_investigate_syntactic_structure_representations_in_large_language_models_and_the_human_brain"
},
"pdf": {
"value": "/pdf/4685b298788a0326bfae6e88d40cc635a3966003.pdf"
},
"presentation": null,
"primary_area": {
"value": "interpretability and explainable AI"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Hierarchical Frequency Tagging Probe (HFTP): A Unified Approach to Investigate Syntactic Structure Representations in Large Language Models and the Human Brain"
},
"venue": {
"value": "ICLR 2025 Conference Desk Rejected Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Desk_Rejected_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
||||||||||
z5UZZjXFc9 | Rethinking Fairness Representation in Multi-Task Learning: a Performance-Informed Variance Reduction Approach | main | Active | Multi-Task Learning;Fair Optimization;Dynamic Weighting Strategy | other topics in machine learning (i.e., none of the above) | 3;3;6;6 | 5;4;4;3 | 2;3;3;3 | 2;1;2;3 | 3;3;3;3 | 4.5 | 4 | 2.75 | 2 | 3 | -0.707107 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Performance Metric for $Δm$: What specific performance metric is employed for computing $Δm$? Some examples related to datasets involving a smaller number of tasks, such as NYU-v2, would be useful for further understanding.\n\n2. Source of $Δm$: It is mentioned that $Δm$ is obtained from the validation dataset ($D_v$). Why is it not obtained from the test dataset $(D_t)$? This aspect is not clearly explained in the paper.\n\n3. Performance Dependency on Task Complexity: Does the performance metric vary based on the complexity of each task? Should the choice of performance metric not account for the specific relevance of the task? Would it be beneficial to consider different performance metrics for each task depending on the task's nature?\n\n4. Direction Variable (d) in the Utility Term: There seems to be ambiguity regarding the direction variable (d), which is task-specific. In the PIVRG algorithm, “d” is computed for shared layers based on task-specific variables. Is the direction variable for the shared layer identical to the task-specific direction variable? If so, in Equation 7, it appears that “d” is determined using a utility term that also contains “d”. Are these two variables distinct, or is this an iterative dependence that needs clarification?\n\n5. Equation 8 Clarification: In Equation 8, it is not clear why there is an alpha on the LHS."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Fairness Regularizer: The paper introduces a performance metric as a fairness measure in terms of a regularizer in the backpropagation algorithm.\n2. Rigorous Experimentation: The paper presents rigorous experimentation with strong results across various datasets.\n3. Better Performance: Results demonstrate that the proposed method consistently performs better in their multi-task learning (MTL) setup than in single-task learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- Paper identifies task imbalance, one of the common issues in multi-task learning (MTL) that leads to inefficient learning dynamics to address.\n- The paper opines that the existing methods of loss-based and gradient-based imbalances lead to uneven optimization and limiting the generalization capability of the model.\n- The paper introduces a fairness-driven approach that dynamically balances task optimization to address task, loss, and gradient imbalances.\n- The proposed method (PIVRG) uses the variance in performance $(Δm)$ across tasks as a fairness indicator and implements a dynamic weighting mechanism to progressively reduce variance among tasks, ensuring balanced optimization.\n- The effectiveness of the proposed approach is validated through comprehensive experiments involving both supervised and reinforcement learning tasks.\n- The results demonstrate state-of-the-art performance, highlighting the superiority (reducing the performance variance across tasks) of the proposed method over traditional approaches."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Please refer to the 'Questions' section of the review."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Refer to the weaknesses section."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed motivation, which incorporates performance-level information, is intuitive, and the paper’s explanation of how it applies to multi-task optimization is clear and logical.\n\n2. The paper thoughtfully follows the standard demonstration logic in the multi-task optimization field, providing necessary theoretical analysis based on widely accepted assumptions in this area.\n\n3. The proposed method demonstrates improved multi-task performance across various benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel performance-informed variance reduction gradient aggregation approach (PIVRG) for multi-task optimization. This method uses performance-level information as an explicit fairness indicator and demonstrates its effectiveness across various benchmarks, enhancing multi-task performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Using performance-level information is not a novel approach, as [1] already incorporates this concept for multi-task optimization through task difficulty, which the authors have cited in related work. Since this paper’s motivation is closely related to [1], further experiments and comparative analysis—including [1], which is currently omitted from the experiments—are needed to highlight distinctions.\n\n[1] Guo, Michelle, et al. \"Dynamic Task Prioritization for Multitask Learning.\" Proceedings of the European Conference on Computer Vision (ECCV), 2018.\n\n2. TTechnically, the paper effectively incorporates performance-level information into multi-task optimization; however, it seems to be a naive combination of task weighting based on performance metrics with existing optimization approaches, offering limited new insights for the multi-task optimization field. The authors assert that their performance-informed weighting strategy can integrate with prior loss-based and gradient-based methods, as shown in Table 5. This raises concerns that the proposed method might simply be a basic combination of previous techniques, as performance gains in each approach can already be achieved by weighting based on evaluation metrics. For a fair comparison, experiments should include combinations of [1] with other methods, encompassing both loss-based and gradient-based approaches. This would help justify the superiority of the proposed methods in terms of their methodology for incorporating performance-level information.\n\n3. The assumption that the network has access to performance-level information during optimization is quite strong. This limitation impacts the practicality of the proposed methods, particularly given that creating multi-task benchmarks is both costly and challenging due to the extensive labeling process required for multiple tasks. 
Additionally, the proposed methods necessitate extra validation or test sets for training, further restricting their applicability. This may explain why many previous studies have been conducted in experimental settings that do not rely on performance-level information."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please my questions in Weaknesses."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Novelty: The paper introduces a new approach to MTL that minimizes the mean of the inverse utilities for each task and explicitly considers performance-level information for dynamic weighting in the optimization process.\n2. Theoretical foundation: The authors provide a theoretical analysis demonstrating that PIVRG converges to a Pareto stationary point and achieves superior performance.\n3. Strong performance: The paper includes comprehensive experiments on various benchmarks showing that PIVRG outperforms existing methods and reduces performance variance."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel method named PIVRG for MTL that alleviates the issue of task imbalance by incorporating performance-level information into the optimization process. The method introduces a dynamic weighting strategy that uses the variance of performance drop across tasks as a fairness indicator, aiming to reduce the performance variance and achieve more balanced optimization. Extensive experiments show that PIVRG outperforms existing methods and reduces performance variance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Incorrect definition of the $\\Delta m$: The $\\Delta m$ used in previous works and reported in the experiments of this work is different from the $\\Delta m_i$ defined in Eq. 3 and the $\\Delta m$ defined in Line 190. The former is calculated across all specific metrics, while the latter is computed in a single task and then averaged across all tasks. It is recommended that the performance-related calculation in Sec. 3 be redefined as a new indicator to avoid confusion with $\\Delta m$. Additionally, the correct definition of $\\Delta m$ should be clarified in Section 4.\n2. Lack of discussion on some related work: [1] utilizes Key Performance Indicators (KPI) to dynamically prioritize difficult tasks, which is also performance-level information. [2] focuses on improvable gap balancing across tasks, similarly prioritizing more difficult tasks. This is particularly comparable to the characteristics of PIVRG, as seen in the experimental results where the surface normal prediction performance on NYUv2 improves.\n3. Lack of motivations and ablation studies on gradient aggregation approach of PIVRG: If Eq. 2 is one of the innovations of this paper, it would be beneficial to elaborate on its motivation and design process in the introduction and method sections. If it is not, I suggest including more ablation studies, such as comparisons like “PIVRG w/o PI”, and/or “Eq. 2 + other SOTA weighting methods”, to further validate the effectiveness of PI.\n\n[1] Dynamic Task Prioritization for Multitask Learning. ECCV, 2018.\n\n[2] Improvable Gap Balancing for Multi-Task Learning. UAI, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "For the reinforcement learning experiment on MT10, I noticed in Sec 5.5 of FairGrad that the authors found it very time-consuming to solve the nonlinear least square problem, so they approximated the objective using SGD in practice. Since you also treat Eq. (8) as a nonlinear least square problem, and appear to solve it directly, how long do the MT10 experiments take?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The proposed PIVRG outperforms all baselines on many multi-task scenarios, including those for supervised learning and reinforcement learning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper utilizes performance variance across multiple tasks as additional information to guide the training process, and then proposes a performance-informed variance reduction gradient (PIVRG) method for multi-task learning. Theoretical analysis shows that PIVRG can converge to a Pareto stationary point under certain assumptions. Extensive empirical studies further demonstrate its effectiveness. Additionally, the performance-informed idea can also be applied to other existing MTL methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The novelty is insufficient. Since the paper compares with FairGrad[1], it appears that Eq. (2) in this paper corresponds to minimum potential delay fairness, a specific case of FairGrad. Additionally, the method used to solve Eq. (8) is the same as in FairGrad.\n\n2. The use of $\\Delta m$ to guide the training process seems kind of strange. Although the paper mentions in Sec 4.1 that $\\Delta m$ is derived from the validation dataset for QM9 and CelebA, and from the training dataset for NYU-v2 and Cityscapes, further clarification on the calculation would better be provided.\\\nFrom my understanding, $M_{b,i}$ in Eq. (3) denotes the metric value of the STL baseline, obtained from the test dataset. That is, when calculating $\\Delta m$, information from the test dataset is involved. This type of **test** information should not be used to guide the **training** process, as it is also used to evaluate performance. Could you please more clarifications on the use of $\\Delta m$?\n\n[1] Ban, Hao, and Kaiyi Ji. \"Fair Resource Allocation in Multi-Task Learning.\" arXiv preprint arXiv:2402.15638 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rethinking,\ntitle={Rethinking Fairness Representation in Multi-Task Learning: a Performance-Informed Variance Reduction Approach},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z5UZZjXFc9},\nnote={under review}\n}"
},
"abstract": {
"value": "Multi-task learning (MTL) can leverage shared knowledge across tasks to improve data efficiency and generalization performance, and has been applied in various scenarios. However, task imbalance remains a major challenge for existing MTL methods. While the prior works have attempted to mitigate inter-task unfairness through loss-based and gradient-based strategies, they still exhibit imbalanced performance across tasks on common benchmarks.\nThis key observation motivates us to consider performance-level information as an explicit fairness indicator, which can more accurately reflect the current optimization status of each task, and accordingly help to adjust the gradient aggregation process.\nSpecifically, we utilize the performance variance among tasks as the fairness indicator and introduce a dynamic weighting strategy to gradually reduce the performance variance. \nBased on this, we propose PIVRG, a novel performance-informed variance reduction gradient aggregation approach.\nExtensive experiments show that PIVRG achieves state-of-the-art performance across various benchmarks, spanning both supervised learning and reinforcement learning tasks with task numbers ranging from 2 to 40. Results from the ablation study also show that our approach can be integrated into existing methods, significantly enhancing their performance while reducing the variance in task performance, thus achieving fairer optimization."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Multi-Task Learning",
"Fair Optimization",
"Dynamic Weighting Strategy"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/481efc8857b77f80d0f4a5f97db30f1a53cab0c0.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Rethinking Fairness Representation in Multi-Task Learning: a Performance-Informed Variance Reduction Approach"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z5uVAKwmjf | AFlow: Automating Agentic Workflow Generation | main | Active | LLM Agent; Prompt Optimization; Workflow Generation | applications to robotics, autonomy, planning | 5;6;8;8 | 3;3;3;3 | 3;3;3;3 | 3;4;3;4 | 1;3;3;3 | 6.75 | 3 | 3 | 3.5 | 2.5 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The authors chose 6 agentic workflow benchmarks for the experiments. Are there more rationale and explanation behind how those benchmarks are chosen to best represent the agent workflow optimization capability?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Overall, this paper is clear, well-motivated and provides a new framework for automatic workflow optimization, which has significant potential impact on agent design and workflow optimization for the broader machine learning community. It proposes a novel, original approach to model the workflow as a sequence of LLM-invoking nodes in a graph structure, with prompts, operators, and code-represented edges in the search space. By leveraging MCTS, the paper reaches SOTA performance on major workflow benchmarks and shows the potential of enabling smaller, cheaper models reaching similar performances as large models. The documentations of the experiment setup, code representation, case studies and results are clear and technically sound, and this paper can provide great inspiration for other researchers in the domain of agent workflow optmization."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces Aflow, a new automatic workflow optimization framework based on MCTS and code-represented workflows. It models the workflow as a sequence of LLM-invoking nodes, where nodes represent LLM action and edges represent logic, dependencies and flows between the actions. The experiment results on 6 benchmarks show preliminary effectiveness over SOTA, and that AFlow can enable smaller LLMs to outperform larger models, offering better cost-performance efficiency, with significant implications for real-world applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper could benefit from discussions with regards to the following points:\n1. To reduce the search space, the paper focuses on custom prompts, operators and code-represented edges by fixing parameters such as model choice, temperature and output format - which is a sound choice. Could there be more discussion on the potential effect of these parameters on model performance?\n2. The authors mention some of the parameters used in MCTS in the appendix (e.g. $\\lambda = 0.4$ used to balance exploration vs. exploitation), but not in the main paper. It would be helpful to include key parameter values and brief discussions about the choices. \n3. Similarly, a quick discussion around why models are chosen for specific parts (executor vs. optimizer) would be helpful for context as well."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Scope and Generalizability:\nHow do you envision AFLOW handling tasks that do not have well-defined success metrics, such as creative writing or exploratory research? Are there specific adaptations you would recommend for such tasks?\nThe paper primarily discusses AFLOW’s applicability to benchmark tasks with structured goals. Could you provide examples or suggestions for how AFLOW might be applied to open-ended, real-world applications beyond these benchmarks?\nCould you clarify how AFLOW maintains context and coherence in longer workflows or workflows requiring memory across stages? Is there a mechanism in place to support long-term contextual awareness?\n\nPrompt Adaptation and Handling for Broader Tasks:\nGiven that AFLOW’s prompt templates seem tailored to structured problem-solving tasks, what modifications would be necessary to adapt these prompts to open-ended or creative tasks?\nAre there mechanisms for adapting or evolving prompts dynamically based on task progress? How does AFLOW approach prompt optimization in situations where task objectives may evolve or remain undefined?\nCould you provide examples of prompt templates used in AFLOW, and explain how execution feedback specifically informs prompt revisions, if at all?\n\nImplementation Details for Reproducibility:\nCould you provide a clearer example of how workflows evolve iteratively during AFLOW’s optimization process? Specifically, how does feedback inform changes in node structure, prompt design, or operator selection?\nHow does AFLOW ensure consistency and reliability across different task types, especially when tasks involve diverse workflows or require different prompts and operators?\n\nComparison with Other Approaches:\nHow does AFLOW’s MCTS-based approach compare to DSPy’s instruction optimization or the Tree of Thoughts approach in handling complex, multi-stage tasks? 
Are there specific advantages or limitations relative to these frameworks?\nTree of Thoughts is briefly mentioned as related work but not fully explored. Could you elaborate on how AFLOW’s workflow optimization extends or diverges from the principles underlying Tree of Thoughts?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Novel Approach: AFLOW’s integration of MCTS with code-represented workflows introduces a new direction in automating LLM workflows. This reduces the reliance on manual design and allows efficient workflow discovery and optimization.\n\nComprehensive Problem Formulation: The paper formalizes workflow optimization with a general mathematical framework, effectively unifying prior approaches and broadening the potential for future applications.\n\nDetailed Critique of Prior Work: The authors present an insightful analysis of existing methods, identifying the limitations of prior frameworks like ADAS in handling information accumulation and search efficiency. This sets a strong foundation for AFLOW’s proposed contributions.\n\nEmpirical Validation: AFLOW is rigorously evaluated across diverse benchmark tasks, with quantitative comparisons to multiple baselines. Ablation studies further illustrate the impact of different operators, and cost analysis demonstrates AFLOW’s efficiency."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces AFLOW, a framework that automates the generation and optimization of agentic workflows for Large Language Models (LLMs) by reformulating workflow optimization as a Monte Carlo Tree Search (MCTS) problem. The workflow structure consists of LLM-invoking nodes connected by edges, represented in code. The system leverages tree-structured experience and execution feedback to refine workflows iteratively. AFLOW is evaluated across six benchmark datasets, demonstrating performance improvements over manual and automated baseline approaches and enabling smaller models to achieve competitive results at lower cost. This novel approach aims to reduce human intervention in workflow design, providing a scalable and efficient framework for workflow optimization in structured tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Limited Scope and Generalizability: The paper primarily demonstrates AFLOW on benchmark tasks with clear success metrics, which raises questions about its applicability to more open-ended tasks, such as document generation or creative exploration. There is limited discussion on how the standardized prompts used in AFLOW would generalize to tasks without clear success criteria. The current prompts seem tailored to test-taking scenarios and may lack the flexibility required for tasks that demand creative or exploratory outputs. A clearer strategy for adapting or evolving these prompts to support open-ended workflows would strengthen the paper’s position on generalizability.\n\nImplementation Details for Reproducibility: While the automated workflow design is a strength, the paper lacks sufficient details on prompt handling, tool calling, and how workflows evolve with execution feedback. Specific examples of workflow changes during optimization and consistency maintenance across components would improve reproducibility.\n\nLimited Comparative Analysis: Although the paper provides a robust critique of ADAS, the treatment of DSPy is brief, and Tree of Thoughts is only briefly mentioned despite its relevance. A more detailed comparison with these approaches would clarify AFLOW’s unique contributions and limitations.\n\nTheoretical Analysis: The paper lacks a theoretical analysis on the convergence of the MCTS optimization process, completeness of the search space, and performance bounds, which are important for understanding the robustness and scalability of the method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "- Page 3: “We define an agentic workflow W as a sequence of LLM-invoking nodes”. Does this definition of a sequence really make sense? It appears that it can also be some sort of graph structure that allows for loops, decision points (branching), and parallel relations. Terminology-wise, wouldn’t that graph be the workflow, and a sequence of LLM-invocations would then be a particular instantiation or execution of that workflow? This relates also to Page 4: “the goal of workflow optimization is to discover a workflow W …”, given the definition of W as merely a sequence, is this what is intended here?\n- Page 4: “The edges E can represent various structures, such as: Graph: A flexible structure representing hierarchical, sequential, or parallel relationships between nodes, allowing for complex branching workflows”. The graph that is shown in Figure 2 is merely a DAG. How can a DAG represent both sequential and parallel relationships in one graph? Typically richer graph languages are required to model rich types of branching workflows, such as workflow nets [1] or BPMN. Is the graph structure that is intended here a DAG and thereby unable to represent parallel relationships, or richer than a DAG and thereby able to represent parallel relationships and other complex workflow patterns [3]? Note that efficient search spaces for have been proposed for some of those workflow representations (e.g., [4]).\n- Page 4, about Code: “offering the most precise control over workflow execution”. Why would this be the most precise? Various variants of Petri nets and other graphical representations that are commonly used in the literature to define workflows are turing complete.\n- Page 6: Equation 2: “W^* = AFlow(S_{AFlow}, G, T)“. It is unclear what method AFlow here refers to. My thought initially was that it refers to Algorithm 1 “Algorithm of AFlow”. 
But that algorithm is defined to take 5 arguments (defined in the algorithms require), while equation 2 passes three arguments. Without clarity on this, the procedure is not well specified.\n- Page 6: “AFlow can perform searches with an empty Operator Set”. If the set of operators is empty, then what is the set of mutations to the workflow that are considered? I didn’t find a definition of this.\n- Page 7: “Our algorithm forms the initial node by evaluating an empty workflow on the validation set, which is distinct from the root node in MCTS”. I did not understand this. Why is the initial node of the workflow not the root node in MCTS?\n- Page 7 about expansion: not clear if the LLM generates new workflow only (i.e., new code), or whether it also generates/modifies the contents of the nodes (e.g., the prompt/model/output format)."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The methodology is interesting, and the experimental setup is convincing in showing that AFLow enables smaller models to achieve superior performance to larger models. This lifts the cost/accuracy Pareto front. Given these results I am appreciative of this work."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel approach to agentic workflow that leverages MCTS to search over sequences of actions that are jointly able to achieve automation of various tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the methodology and results are nice, I do however believe that the presentation of the paper requires improvement. The description of the AFLow methodology lacks precision and can at times even be called handwavy (details below), which makes the paper hard to read. In case authors are able to address those issues I may be willing to increase my score. Some concrete examples (more examples in the questions):\n- There is a tree structure involved in the MCTS search process, but there is also a graph and nodes involved in the search space of workflows. This is a potential point of confusion and many parts of the paper do not make explicit which of those two graphs they are talking about.\n- The exact definition of the search space is not made clear anywhere in the paper. It is clear that a node is a tuple consisting of a node, prompt, temperature, and output format. What is then less clear is what it means to have an edge between two nodes. Does this mean that we first made the first LLM invocation (of the first node), and then sequentially following that, we make the second LLM invocation (of the second node)? And what does it then mean if our node has two outgoing edges (is this a decision point where we invoke one or the other, or do we execute both next nodes in parallel)? What is also less clear is how the output of the first LLM invocation is used in the LLM calls for later nodes (if at all). Is there a root node? These all point to a broader issue of a lack of precision in the specification of AFlow."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can you provide examples of optimized workflows for different tasks, along with explanations of how these workflows can be interpreted. \n2. How consistent are the results when running AFlow multiple times with the same model and task? What methods, if any, does AFlow use to introduce variability in the search process? How is the scope of the search space defined and controlled in AFlow?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Originality\n\nThis paper pioneered a new definition of workflow optimization. By redefining the workflow process, it transformed the problem of automatically generating workflows into a Monte Carlo tree search problem in the search space. It also made up for the shortcomings of the previous problem, transformed the previous work into special cases, and provided a unified framework for subsequent researchers.\n\n2. Quality\n\nThe AFlow framework proposed in the paper has shown strong performance in experimental evaluation, outperforming existing methods by an average of 5.7% on benchmark datasets. Additionally, AFLOW can achieve better performance than larger models using smaller LLM models, which is of great importance for practical applications.\n\n3. Clarity\n\nThe paper provides detailed descriptions and explanations of the key components of the AFLOW framework, such as the MCTS algorithm, node selection strategy, and LLM-driven node expansion, making the entire implementation process clear and understandable. At the same time, the paper also describes the experimental setup and experimental process in great detail, and the explanation of the experimental results is also clear. The entire paper is logically rigorous and well-organized.\n\n4. Significance\n\nThe AFlow proposed in this paper not only has a significant improvement in performance, but also proposes a unified framework in the field of automatic generation and optimization of workflows. The new definition better explains this task and provides a new framework and optimization direction for future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes AFlow, a new automated framework that uses Monte Carlo tree search to find the optimal workflow in the exploration space. It also redefines the workflow through code modification, tree-structured experience, and execution feedback. This paper verifies the effectiveness of AFlow on 6 different benchmark datasets, and enables smaller models to outperform GPT-4o on specific tasks. The main contributions of this paper are as follows:\n\n1. Problem Formulation: This paper formalizes the workflow optimization problem and generalizes previous approaches to specific cases. This provides a unified framework for future research at both node and workflow optimization levels.\n2. This paper designs AFlow, an MCTS-based method that automatically discovers efficient workflows across multiple domains with minimal human intervention.\n3. This article evaluates AFlow on six benchmark datasets: HumanEval, MBPP, MATH, GSM8K, HotPotQA, and DRO, verifying the effectiveness of AFlow. It is worth noting that the workflow generated by AFlow enables smaller LLMs to outperform larger models, providing better cost-effectiveness, which has a significant impact on practical applications."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "This paper has achieved good results in both method and experiment. In terms of method, the paper innovatively formulated the automatic workflow optimization problem, establishing a foundational structure for future research. In terms of experiment, it not only achieved good results, but also conducted a lot of relevant analysis. However, this paper has some weaknesses in the following aspects:\n\n1. The experimental part of this paper lacks the cost analysis of the early AFlow search stage. The cost analysis of different methods later in the paper shows the effectiveness and low consumption of the workflow found by AFlow, but the early MCTS search is a huge process, and the execution of nodes will also consume certain resources. This part does not provide experimental explanation. If the cost of exploring and finding the optimal workflow is huge, then the discussion on cost should include this resource consumption.\n2. This paper argues that different language models require different workflows to achieve their optimal performance. However, there is a lack of sufficient experiments to support this assertion, as the paper only mentions that the workflow identified using DeepSeek-V2.5 performs notably weaker on GPT-4o-mini compared to the workflow found using GPT-4o-mini itself. At least one more set of comparative experiments should be added, that is, generate a workflow through GPT-4o-mini and then use DeepSeek-V2.5 and GPT-4o-mini respectively to see the experimental results. It would be best if more comparative experiments of other types of models could be added, such as adding another LLama series model, and comparing the three models. This is an interesting assertion, but more sufficient experiments are needed to verify it."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024aflow,\ntitle={{AF}low: Automating Agentic Workflow Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z5uVAKwmjf},\nnote={under review}\n}"
},
"abstract": {
"value": "Large language models (LLMs) have demonstrated remarkable potential in solving complex tasks across diverse domains, typically by employing agentic workflows that follow detailed instructions and operational sequences. However, constructing these workflows requires significant human effort, limiting scalability and generalizability. Recent research has sought to automate the generation and optimization of these workflows, but existing methods still rely on initial manual setup and fall short of achieving fully automated and effective workflow generation. To address this challenge, we reformulate workflow optimization as a search problem over code-represented workflows, where LLM-invoking nodes are connected by edges. We introduce \\textbf{AFlow}, an automated framework that efficiently explores this space using Monte Carlo Tree Search, iteratively refining workflows through code modification, tree-structured experience, and execution feedback. Empirical evaluations across six benchmark datasets demonstrate AFlow's efficacy, yielding a 5.7\\% average improvement over state-of-the-art baselines. Furthermore, AFlow enables smaller models to outperform GPT-4o on specific tasks at 4.55\\% of its inference cost in dollars. The code will be made available as open-source upon publication."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LLM Agent; Prompt Optimization; Workflow Generation"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/f6b8ed3c03a5fb2cd09e7a285faec33bfdfd77a0.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to robotics, autonomy, planning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/cb5380861d4710feeac47665d54bc2ff5de3e788.zip"
},
"title": {
"value": "AFlow: Automating Agentic Workflow Generation"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z6qmomJW91 | RotRNN: Modelling Long Sequences with Rotations | main | Active | Sequence Modelling;Recurrent Neural Networks;State Space Models;Long Sequences | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 3;3;5;5 | 3;2;5;3 | 3;2;3;3 | 2;2;1;2 | 3;3;4;2 | 4 | 3.25 | 2.75 | 1.75 | 3 | 0.688247 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "The authors never concretely show (in terms of any sensible metrics, such as GPU hours etc) how efficient their model is compared to the baselines. It would be great if they could do a more thorough job at comparing their models against the baselines. The value of this question is particularly important, especially since their model performance is not significantly better than the baselines. Why wasn't their model compared rigorously in terms of efficiency?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "* Good theoretical justification of parametrizing recurrent state matrix as rotation matrix:\n * Show how rotations can be easily decomposed for efficient matrix power computation\n * Show how orthogonality of rotation matrices is used for normalization, leading to almost constant hidden state norms\n\n* Very effective normalization enabled by the orthogonality of rotation matrices (as seen in figure 3) ensuring that hidden state norms do not vanish/explode across long sequences, which is important for stable training.\n\n* Reproducibility: detailed hyperparameters and jax code are provided for reproducibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new linear recurrent model using rotation matrices. The aim of introducing rotation matrices is to enforce the theoretical constraints of LRU that are missing in its practical implementation. Parametrizing recurrent state matrix as rotations allows the authors to present a fast method of matrix powers and also allows then to present a normalization process that helps near constant hidden state norms, both of these are essential for linear RNNs especially in long sequence domain."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* The proposed approach does not seem to improve the performance on the LRA benchmarks. As shown in the table 1, the proposed approach is better than baselines only on text and that too with a very small margin. While on other benchmarks and on average, it performs significantly worse (upto 10 percent points in case of Path-X).\n\n* Also on speech commands classification task of table 2, the proposed approach is not better than any of the shown baselines.\n\n* Although the proposed implementations using rotation matrices in this paper enables theoretical constraints of LRU, they don’t improve the performance on downstream tasks. Also, the paper does not show how efficient the proposed method is compared to the baselines in terms of compute/resource use."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "How well do you think this model can scale up, compared to other approaches ? \nDoes it interact well as a preprocessing layer into transformer based models ? \nThe block diagonal matrices look like blocks of rotations in 2D. Are these the only kinds of block rotation matrices that are possible for theta ? \nIs it possible to control the periodicity of rotations using the block diagonal 'blocks' in some way ?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "While I didn't look at the detailed linear algebra proofs, they seemed correct to me mathematically and made sound intuitive sense. There are also results showing that norms of the state are well preserved when the model is run, so that part of the proposal also seems to work well. The provided Jax code also makes it clear to see how the implementation matches the technical details of the paper."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a linear state space model, where the recurrent state is transformed by a rotation matrix, A, that is an exact rotation matrix, and input is transformed by an input matrix. In prior works like LRU, careful initialization is performed to ensure that the rotation matrices at the start of learning are orthogonal and that outputs of the state space model are real values . However, as training proceeds, it is possible that complex parts arrive and are ignored. In this paper, the authors propose RotRNN - a parameterization of rotation matrices that is exact (although it doesn't cover the space of all rotation matrices, if I understood correctly). To do this, the authors show that any general matrix M can be smoothly mapped to the special orthogonal group by taking the matrix exponential of M-M^t. Now, P = exp(M-M^T) can be mapped to the special orthogonal group rotation matrix A, through a block diagonal matrix $\\theta$ by A= $ P \\theta P^t$.\nUsing this scheme orthogonal recurrent rotation matrices can be generated. \n\nAside from ensuring that an orthogonal rotation matrix can be produced, the authors also ensure that the hidden state $x_t$ is well behaved an preserves a constant norm in expectation. This is performed by using a normalization constant $\\alpha$ to counteract the decay factor $\\gamma$ applied to the rotation of the recurrent states. In practice this is actually done by rescaling the output of the application of the input matrix to the input.\n\n\nResults of using the method are shown on LRA benchmarks and on Raw Speech classification using the Speech Commands classification task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I am moved by the simplicity of their parameterization compared to prior works, I am not sure if the contribution is enough to merit a paper in ICRL with the kind of experimental exploration performed. I think a proper paper would run much further with the proposed method than the author(s) have done here. Speech commands is quite a small dataset and the results on it, and on LRA shed little light into the details of their method. And the results on these datasets are not necessarily better than prior methods. So the selling point of the method from a standpoint of improved results over LRU is not clear. Furthermore, other than showing the norms of the states are well behaved, the paper does not offer more technical insight either, for others to build upon. Without that, this is more of an exposition of a mathematical trick rather than a full contribution."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Given that the real-part operation at the output of LRU implicitly pairs each eigenvalue with its complex conjugate, what are the remaining important differences between the proposed algorithm and LRU?\n\nWhich of the performance differences shown are statistically significant?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The factorization of the linear recurrence matrix into cosine and sine rotations is elegant, and was a pleasure to read."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "State-based models have reduced the long-path problem of traditional recurrent models, but require complicated initialization procedures in order to avoid vanishing/exploding gradients. This paper demonstrates a new linear recurrent unit in which the recurrence is constrained to be a rotation, scaled by a head-wise decay constant. The head-wise decay constant is matched to the amplitude of the input coefficients, in order to guarantee that the magnitude of the state vector remains constant over time, avoiding the vanishing/exploding gradient problem without complicated initialization procedures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The key weakness of this paper is a minor oversight in the analysis of LRU, which calls into question the value of this paper's contribution.\n\nThe proposed algorithm is very similar to LRU, except that it forces the eigenvalues of the recurrence matrix to come in complex conjugate pairs. The manuscript notes this as a weakness of LRU: that LRU does not require the eigenvalues to come in complex conjugate pairs, and instead, LRU simply takes the real part of the output of the linear layer. It seems that the authors of this paper do not realize that taking the real part of the output of the network solves the problem of arranging eigenvalues into complex conjugate pairs, because the real part of any complex number is the average of the complex number and its complex conjugate: Re(z) = (z+z*)/2. Thus, if z(t)=P Lambda^t P^T z(0), and if z(0) is real, then Re(z) = (1/2)(P Lambda^t P^T + P* Lambda*^t P*^T) z(0). By taking the real part of the output, LRU is pairing each explicit eigenvalue with an implicit eigenvalue equal to the complex conjugate of the explicit eigenvalue, thus effectively doubling the dimension of the recurrent layer, at no extra computational cost. \n\nSince LRU already has each eigenvalue paired with an implicit conjugate eigenvalue, the only remaining difference I can see between the proposed algorithm and LRU is the proposed input normalization, which guarantees that the recurrent state vector maintains constant norm. In theory, this seems like a useful contribution, since it explicitly avoids gradient vanishing/explosion. In practice, it's not clear that LRU suffers from problems of gradient vanishing or explosion. In the example shown, the norm of LRU gets large, but it never seems to overflow. The proposed algorithm has better performance than LRU on a toy example, but it is not clear that the difference is statistically significant."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "What does $A^{t-k}$ mean in Equation 2?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Rigorous mathematical background: The math backgrounds behind the rotation matrix-based parameterization and explicit normalization method are proved with easy-to-read derivations. In addition, those backgrounds lead the simple implementation of RotRNN. \n\nIn-depth comparison between former architectures: The theoretic comparisons between RotRNN and (LRU/SSM) are helpful to posit RotRNN within this field.\n\nStrong, latest baselines: This paper compares RotRNN with latest and state-of-the-art baselines (such as S5 and Liquid-S4) in long-range sequence modeling benchmarks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a new class of linear recurrent unit (LRU) which is uprising efficient state-of-the-art model class for long-range sequence modeling. The proposed model, RotRNN, utilizes rotation matrix-based parameterization for state transformation and the explicit normalization method. Based on strong mathematical backgrounds, RotRNN could be simply implemented and strictly guarantees preservation of hidden state magnitude across time-steps. This paper compares RotRNN with former LRU and state space models (SSMs) in theory and experiments to give in-depth understanding for readers."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Majors:\n- Limitation of rotation matrix parameterization: I think there would be drawback with constraining state transformation matrix to be rotation matrix, which might limit expression power of the model. \n- Potential drawback of explicit normalization method: It is unclear that whether the explicit normalization method is beneficial for performance. I understand that this method constrains the operation to target a specific range of dependency based on the trained value of $gamma$, so it looks constraining the model’s expressivity. Although it successfully guarantees the converged norm of hidden states during training, its performance-wise effect is not demonstrated in results. An ablation study would be helpful to see its benefit.\n- Overall, it is questionable how RotRNN could be advantageous in practice. Comparison of computational efficiency (in terms of FLOPS) would be helpful to show that RotRNN is ‘efficient to compute’.\n\nMinors:\n- Weak motivations for rotation matrix parameterization: The three motivations written at the beginning of section 3 aim to simple implementation thanks to the rotation matrix parameterization. However, there is no motivation related to hidden state processing despite of rotation matrix’s unique characteristic (such as regularity). And, it is not clear why real-valued matrix can make it robust to initialization? \n- Arguable interpretations of experiment results: The sentence “we find that our model performs best on domains with more discrete input data, such as ListOps, Text and Retrieval, achieving the highest score of all the baselines” seems not clear from the result table. Except ‘Text’ task, Liquid-S4 and S5 achieved the highest scores in ListOps and Retrieval tasks, respectively.\n- Confronting claims: This paper argues the unnecessity of SSM model’s theory-based initialization method (HiPPO) in practice (Section 2.3). However, this paper aims to build RotRNN on rigorous theoretic backgrounds. 
I think those claims are confronting with each other."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We propose a novel linear recurrent layer for long sequence modelling using rotation matrices for stable and efficient recurrence."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024rotrnn,\ntitle={Rot{RNN}: Modelling Long Sequences with Rotations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z6qmomJW91},\nnote={under review}\n}"
},
"abstract": {
"value": "Linear recurrent neural networks, such as State Space Models (SSMs) and Linear Recurrent Units (LRUs), have recently shown state-of-the-art performance on long sequence modelling benchmarks. Despite their success, their empirical performance is not well understood and they come with a number of drawbacks, most notably their complex initialisation and normalisation schemes. In this work, we address some of these issues by proposing RotRNN – a linear recurrent model which utilises the convenient properties of rotation matrices. We show that RotRNN provides a simple and efficient model with a robust normalisation procedure, and a practical implementation that remains faithful to its theoretical derivation. RotRNN also achieves competitive performance to state-of-the-art linear recurrent models on several long sequence modelling datasets."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Sequence Modelling",
"Recurrent Neural Networks",
"State Space Models",
"Long Sequences"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/78dab72130b925f785999cf1a1e427a2af4f949e.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "RotRNN: Modelling Long Sequences with Rotations"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z7JBs8UOLI | Unconstrained Robust Online Convex Optimization | main | Active | online learning;online convex optimization;adversarial corruption;comparator adaptive;parameter-free;unconstrained domain | optimization | 5;6;6;6 | 3;3;4;4 | 3;3;4;3 | 2;3;3;3 | 1;3;4;3 | 5.75 | 3.5 | 3.25 | 2.75 | 2.75 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In Section 4.2, can the authors provide some more explanations on how the lower bound is proved?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem:\n\nThis paper is the first to study unconstrained OCO with adversarial corruptions, which is a new and novel problem. \n\nThe motivation is clear and makes sense to me, as in practice we indeed might only able to observe biased gradients. \n\nThe contribution:\n\nThe authors successfully provide an algorithm which is of order $O(||u||G\\sqrt{T}+||u||Gk)$ for the case when G (the upper bound for gradient) is known, which matches the optimal rate for parameter free OCO when the number of corruptions $k=O(\\sqrt{T})$. There is also a lightly worse bound for the case where $G$ is not known. Finally, the authors also a matching lower bound for this problem. \n\nThe presentation: \n\nThe paper is in general easy to follow, and the main ideas of the methods are well explained."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies unconstrained OCO with adversarial corruptions. The auhors provide algorithms that ensures $O(||u||G\\sqrt{T}+||u||Gk)$ regret bound, where $k$ is the number of corruptions. They also provide matching lower bounds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The problem:\n\nThe problems studied here is a combination of the well-studied unconstrained OCO and OCO with corruption. Note that OCO with stochastic corruption has been studied, and many key ideas of this paper are motivated by them. \n \n\nThe methods:\n\nThe main technique used here to deal with corrupted gradients is very similar to that in Zhang & Cutkosky (2022). To be more specific, similar to the stochastic setting, the regret of this problem can be decomposed into two terms: one \"easy\" term, e.g., the first term in (6), a standrd OLO problem with bounded gradient, and one \"bias\" term (e.g., the second term in (6), which related to the gap between the clipped gradient and the true gradient). For the bias term, by basic Cauchy-Swhurz inequality (eq. (7)), one can notice that it is dominated by max_t ||\\w_t||, that is, the max norm of the decisions. The same term also appears in Zhang & Cutkosky (2022) (page 4, the NOISE term). Following Zhang & Cutkosky (2022), the authors introduced the same surrogate loss. \n\n\nThe presentation: \n\n2) In Sections 4 and 5, the authors sometimes use ||\\cdot\\|| to denote the norm, sometimes use |\\cdot| (e.g., compare (5) and (7)). Please make it clear in the begining of Section 4 that the discussion is for W=R, but it can be extended to R^d. \n\n3) Line 215 \"(in fact, potentially exponential in t)\": I am not sure about the meaning of this sentence. Note that this paragraph is about regret decomposition, and no algorithm is discussed. So why it can exponential in t? The same sentence appears in Zhang & Cutkosky (2022) (first sentence in Page 5), but in Zhang & Cutkosky (2022) it makes sense there because it is in a different context (discussing an algorithm)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Please see the previous section."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper is well-written, effectively motivating the problem and clearly articulating the associated challenges. The techniques and results are presented in an understandable and clear manner. The work draws significant inspiration from two existing works: it leverages the composite loss function method from Zhang & Cutkosky (2022) to manage large corrupted gradients and employs filtering techniques from van Erven et al. (2021) to address unknown bounds on the gradient norms of uncorrupted rounds. Especially the proposed work is able to reduce the space complexity of filtering techniques of van Erven et al. (2021)."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors study online convex optimization under corrupted gradient feedback in an unconstrained domain. The proposed algorithm requires prior knowledge of \"a measure of total corruption\" $k$ and achieves a regret bound of $\\mathcal{O}(|| u || G (\\sqrt{T} + k))$ for any comparator $u$, assuming the gradient norm bound $G$ is known for the uncorrupted rounds. If $G$ is not known, they propose a filtering approach to guarantee a regret bound of $(|| u ||^2 + G^2) k$. They also provide matching lower bounds (up to logarithmic factors) for any choice of $u$."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "In my view, the reliance on established techniques impacts the overall novelty of the work. Specifically, the major technical tools employed are well-known, which somewhat limits the innovative contribution. It would significantly strengthen the paper if the authors could elaborate on the primary technical innovations in Section 4.1. In particular, it would be helpful to clarify how their approach in this section differs from or improves upon the methods presented by Zhang & Cutkosky (2022)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "I believe if the authors could help clarify a few of the key ideas and contributions in the algorithm, it could help me and the other reviewers better appreciate your work. \n\n\n- **The case of known $G$**: The algorithm in this case is a modification of ideas proposed by Zhang and Cutkosky (2022). However, it is not clear from the paper what are the key differences between the analysis in the stochastic case done by Zhang and Cutkosky and the one done in this paper. \n\nI know the space constraints play into this, but reading pieces of Zhang and Cutkosky (2022) clarified many pieces of the algorithm for me that were not clear from the algorithm. A small example: on line 6 of Algorithm 1 we need to find a point $w_t$ that satisfies an equation, but the paper never discusses computational complexity or, more importantly, why a solution to this is guaranteed to exist. This is luckily done by Zhang and Cutkosky (2022), but it is not even discussed in the paper. My question for the authors here is: could you briefly explain what are the key modifications to the algorithm and/or analysis from the stochastic case from Zhang and Cutkosky to this adversarial corruption setting? If the algorithm is barely different from their algorithm for sub-exponential noise, it would be even more interesting and should even be clarified in the paper, because then a contribution of the paper would be showing that the same algorithm also works for this seemingly harder case!\n\n- **The case of unknown $G$**: This case seems to be about estimating $G$ on-the-fly, doing so by using the algorithm for known $G$ with a couple of ideas from previous work (an improvement on the filtering strategy from van Erven et al., 2021, and the Epigraph Based regularization from Cutkosky and Mhammedi, 2024). 
This was a section that was quite hard to understand, and I have a few questions in the hope of better understanding the contributions of this section.\n\n1) On the paragraph starting on line 399, you mention you improve on the original Filter strategy. Besides the memory usage, are there other improvements? Are these improvements crucial for the algorithm? (i.e., a black-box use of Filter could lead to similar bounds?)\n\n2) When using the Epigraph-based regularization the paper also uses a reduction from e Cutkosky & Orabona (2018) to maintain the constraints, but it seems that this reduction is explicitly written into the algorithm. Is the use of this reduction the same described by Cutkosky & Mhammedi (2024), section 3.3? If not, what are the main differences? Also, since this is a reduction, couldn't algorithm 2 be described in two steps (such as the different protocols in Cutkosky & Mhammedi, 2024)? I think this would simplify the presentation greatly;\n\n- **Confusion in the description of the algorithm**: On eq. 5, $\\tilde{g}_t^c$ might be a scalar or a vector, depending on the case. I think you meant to present this only for the 1D case. This is a key definition for the algorithm, so it is important for it to be correctly described.\n\n- **On the second example**: Are there comparisons with results in the area of DRO that can help us understand the relevance of the application of this results to this problem?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper finds a natural problem with a clear technical difficulty: handling adversarially corrupted gradients in *unconstrained* problems in online convex optimization. It was surprising that this problem was not previously studied outside of the stochastic case;\n- The definition of $k$ is simple yet nicely covers two slightly different notions of robustness in OCO simultaneously;\n- The final regret bounds are asymptotically optimal, and the matching lower bounds show that we cannot improve the bounds by much;"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper studies algorithms with good regret guarantees in a modification of the classical (unconstrained) online convex optimization setting where (roughly, since the definition is more nuanced) $k$ of gradients can be arbitrarily corrupted. Previous work either studied stochastic corruptions and/or a setting where the feasible set of the algorithm/player is bounded. When the algorithm knows a bound $G$ on the norm of the gradients, this paper shows an algorithm whose regret guarantee against a comparator $u$ is (roughly) only a additive factor of $\\tilde{O}(Gk \\lVert u \\rVert)$ worse compared to the regret guarantees of optimal algorithms in the uncorrupted setting. Furthermore, this paper also shows an algorithm that has similar regret guarantees even without prior knowledge of $G$, with an extra penalty of $\\tilde{O}((\\lVert u\\rVert^2 + G^2)k)$ in the regret. Finally, the paper also provides matching lower bounds for the case of known $G$."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- I believe one of the main weaknesses of this paper is presentation. The first algorithm (for known $G$) heavily builds on the work of Zhang and Cutkosky (2022). This is not a problem by itself, but the paper does not do a great job os describing the main modifications needed , and I found myself relying on reading the other paper to understand the main ideas of the algorithm, which is not ideal. For the case of unknown $G$ the authors seem to combine (and improve on) different ideas from previous work. However, the presentation is convoluted, and it is hard to understand both the algorithm and the main technical contributions (I will expand on this in the questions);\n\n- The examples are not great: the first one is not interesting (at least from what I could understand), since the reduction of OCO to stochastic optimization is almost just matter of taking expectations of the OCO guarantees. And for the second example I could not understand whether the results the papers gets are interesting or not since there is no discussion on what kinds of results exist on the literature of the area already;\n\n- At some points in the main text there is a confusion between working in 1D and general dimension, which makes the description of the algorithms and results very confusing;\n\n\n--- \n**Summary**: The problem the paper studies is interesting and, in some sense, natural. Also, the regret guarantees the paper presents depend mildly on the corruption level $k$, and the paper also shows lower bounds that their regret bounds are asymptotically optimal. However, the techniques on the first algorithm build heavily on previous work, and the paper does not do a great job on describing the main ideas and differences with previous work. For the second algorithm, the presentation is quite convoluted and it is hard to verify some of the claims in the paper. The presentation is also confusing at a couple important points, making it even harder to understand the paper. 
Finally, the examples presented do not add much to the paper. \n\nCurrently, I believe the contributions of the paper are valuable, even if heavily based on previous work. Yet, the presentation could be greatly improved, which is even more important given the similarity with the techniques of previous work. Thus, currently I am not comfortable vouching to accept the paper. I am sending a few questions that hopefully will help me better understand the contributions of the paper, but I'd still strongly recommend improving the presentation of the paper;"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The paper [1] also considered some kind of corruption. Specifically, they studied the case where the strong convexity of online functions can be corrupted or degenerated to the convex case, and they used universal online learning algorithms to solve this problem. I think [1] is related to this paper in terms of corruption and can be introduced in the related work section.\n\n2. I was wondering how strong the assumption of knowing $k$ is. Actually, since $k$ serves as an upper bound for both the corruption number and the corruption magnitude. Knowing it seems to be a pretty strong assumption. Can the authors provide further explanations on this issue? I also suggest that the authors could add more explanations about it in the revised version.\n\n3. Can eq.(5) be simply rewritten as $\\tilde{g}_t^c = \\min\\\\{h_t, \\|\\tilde{g}_t|\\\\}$? The current version seems pretty complicated but not very necessary.\n\n\nReferences:\n\n[1] Contaminated Online Convex Optimization"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper considers the meaningful problem of online learning with corruption. Advances in this problem can enhance the robustness of online algorithms. The presentation is clear and easy-to-follow. The example in Figure 1 is intuitive and well-motivating. The proof sketch in Section 4 is clear, intuitive, and easy to understand for readers without much background knowledge in this field. The obtained result for the known gradient norm upper bound case is optimal."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the problem of online linear optimization with possibly corrupted gradients in an unbounded domain. For this problem, the authors further investigated two setups: known gradient upper bound $G$ and unknown $G$. For the first case, the authors obtained an $O(\\|u\\| G (\\sqrt{T} + k))$, where $\\|u\\|$ denotes the norm of the unknown comparator that appeared in the definition of regret, and $k$ serves as an upper bound for the number of corruptions or the magnitude of corruptions. Note that the number $k$ is known prior to the algorithm. The authors have also provided a corresponding lower bound to prove the optimality of the obtained guarantee. For the unknown gradient norm upper bound, the authors obtained the above regret with an additive $(\\|u\\|^2 + G^2) k$ overhead. Finally, the authors have also applied their results to the applications of stochastic convex optimization with corruptions and DRO to validate the effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I am not an expert in online learning in the unbounded domain. As a result, I did not check the correctness of the proofs in this paper. The proof sketch in Section 4 is clear, as I have stated in the 'Strengths' part, and thus easy to follow. However, the analysis becomes much more complicated in the unknown gradient norm upper bound case. The current statements from Pages 8 to 10 are quite complicated and confusing. Actually, this part looks more like a draft or some proof that should be deferred to the appendix. The current presentation is quite confusing for me. I suggest that the authors could revise this part to emphasize more about the exact difference in the analysis from that in the first case, but not state the proof once again."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "\"This paper addresses online learning with ''corrupted'' feedback in unconstrained domain.\""
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024unconstrained,\ntitle={Unconstrained Robust Online Convex Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z7JBs8UOLI},\nnote={under review}\n}"
},
"abstract": {
"value": "This paper addresses online learning with ''corrupted'' feedback. Our learner is provided with potentially corrupted gradients $\\tilde g_t$ instead of the ''true'' gradients $g_t$. We make no assumptions about how the corruptions arise: they could be the result of outliers, mislabeled data, or even malicious interference. We focus on the difficult ''unconstrained'' setting in which our algorithm must maintain low regret with respect to any comparison point $\\||u\\|| \\in \\mathbb{R}^d$. Perhaps surprisingly, the unconstrained setting is significantly more challenging as existing algorithms suffer extremely high regret even with very tiny amounts of corruption (which is not true in the case of a bounded domain). Our algorithms guarantee regret $ \\||u\\||G (\\sqrt{T} + k) $ when Lipschitz constant $G \\ge \\max_t \\||g_t\\||$ is known, where $k$ is a measure of the total amount of corruption. When $G$ is unknown and incur an extra additive penalty of $(\\||u\\||^2+G^2) k$."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"online learning",
"online convex optimization",
"adversarial corruption",
"comparator adaptive",
"parameter-free",
"unconstrained domain"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8908ee77e3b6354dceb0e1538268642c3068ba72.pdf"
},
"presentation": null,
"primary_area": {
"value": "optimization"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Unconstrained Robust Online Convex Optimization"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z7PhIgVmZU | BAT-CLIP: Bimodal Test-Time Adaptation for CLIP | main | Withdraw | Test-Time Adaptation;CLIP;Robustness | transfer learning, meta learning, and lifelong learning | Sarthak Kumar Maharana;Baoming Zhang;Leonid Karlinsky;Rogerio Feris;Yunhui Guo | ~Sarthak_Kumar_Maharana1;~Baoming_Zhang2;~Leonid_Karlinsky3;~Rogerio_Feris1;~Yunhui_Guo2 | 3;5;6;8 | 4;4;5;3 | 2;2;3;3 | 2;2;3;3 | 2;3;3;3 | 5.5 | 4 | 2.5 | 2.5 | 2.75 | -0.392232 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": null,
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": null,
"primary_area": null,
"questions": null,
"rating": null,
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": null,
"summary": null,
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": null,
"withdrawal_confirmation": {
"value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors."
}
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Why does authors use the groundtruth class labels alongside the test images to train the model for predicting class labels of the test images?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. Updating both the image encoder and text encoder for CLIP is good.\n2. The experimental results seem promising.\n3. However, the experimental setting is unfair and this work seems violate the test-time adaptation setting."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work propose to improve CLIP’s robustness to common image corruptions through the proposed bimodal test-time adaptation method. The proposed method adapts the visual encoders and strengthen the alignment between image and text features using three losses, computed using pseudo-labels, and the corresponding text feature. The adaptation is performed only on the layer normalization parameters."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The method design and experiment have fatal problems.\n1. The test-time adaptation task assumes that there is no any class label for test images, because we need to use the model to predict the label for test images. However, this work directly leverage the groundtruth class label of test images to train the model, e.g., ``we compute the mean feature of all the support visual features constituting a class c. . yˆ refers to the predicted labels computed via Eq. 1 and v¯c is the class prototype of class c.'' in L333-334. It is obvious that the method is trained using the class label of test images. \n2. The test-time adaptation task typically assume that the model can process limited samples during test-time adaptation, e.g., one test image, or up to 16 images. However, this work takes 200 images as a mini-batch for TTA, as shown in L419, ``The batch sizes are set to 200, 200, and 64 for the datasets''. In contrast, other methods, e.g., TPT (Shu et al., 2022) and VTE (Dobler et al., 2024) process a single test image at a time.\n3. The novelty and contribution is limited. The proposed losses are the popular entropy minimization, projection loss and contrastive loss. The model adaptation method is the classical normalization layer adaptation.\n4. The authors claim that ''image class prototype are computed using pseudo-labels'' in the abstract. However, I cannot find any ``pseudo-labels'' in the method part. It is obvious that the authors did not produce and use the pseudo labels. Instead, the authors directly use the groundtruth class labels of test images to train the model."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Further justification for updating only the LayerNorm parameters: The paper should provide a more thorough explanation or any experimental validation of why only the LayerNorm parameters of the CLIP encoder are updated for the specific corrupted task, such as a comparison between using only LayerNorm updates and fully fine-tuning the model versus selectively fine-tuning other layers. \n2. Enhance ablation and comparative experiments: It is suggested that the authors refine their ablation and comparative experiments in accordance with the feedback provided above to strengthen the overall analysis.\n3. Impact of corruption types: It would be useful to see how BAT-CLIP performs on specific types of corruption (e.g., noise vs. blur vs. weather effects). Do certain corruptions benefit more from the bimodal adaptation, and if so, why? A more granular analysis would provide deeper insights into the strengths and weaknesses of BAT-CLIP."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper proposes a bimodal test-time adaptation method that effectively utilizes both visual and text modalities, enhancing the adaptation process and improving alignment between image and text features.\n2. The authors conduct extensive experiments on benchmark datasets, demonstrating that BAT-CLIP achieves state-of-the-art results in TTA for CLIP, with notable accuracy improvements across multiple datasets.\n3. The paper is well organized and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes BAT-CLIP, a bimodal test-time adaptation method designed to improve the robustness of the CLIP model against common image corruptions during testing. The key idea is to jointly adapt both the visual and text encoders of CLIP by exploiting its shared feature space. The proposed adaptation is realized through LayerNorm parameter updates and leverages two novel loss components: a projection matching loss (to enhance image-text alignment) and a separation loss (to improve class separability)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Limited benchmarking against prior methods: The current paper primarily compares its approach to TTA methods from 2020 to 2022. It would provide a clearer understanding of BAT-CLIP's performance within the broader TTA landscape if the authors were to include comparisons with a wider range of state-of-the-art TTA methods in 2023 and 2024.\n2. Limited Exploration of Text Encoder's Role: While the text encoder is adapted alongside the vision encoder, the paper doesn't deeply explore cases where the text encoder might contribute disproportionately to misalignment. For instance, how does BAT-CLIP handle noisy or ambiguous text class descriptions during adaptation? This could be important in real-world applications where text inputs may not always be clean or well-formed.\n3. Lack of hyperparameter sensitivity analysis: The method introduces projection matching loss and separation loss, but no details are given on how sensitive the method is to hyperparameters such as the weighting between these two losses, or the cosine distance threshold for class prototypes. This sensitivity needs to be explored experimentally to ensure the method is robust.\n4. Limited justification for LayerNorm adaptation: The paper chooses to adapt the LayerNorm parameters of CLIP’s encoders, as inspired by the other method. However, there is no strong theoretical or empirical justification for why adapting LayerNorm parameters is the optimal choice for both the vision and text encoders. It’s not clear if other layers might benefit from adaptation as well."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Please find the weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper is well-written and easy to understand.\n- Empirical results demonstrate the usefulness of the two additional loss functions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents BAT-CLIP, a bimodal test-time adaptation method designed to enhance the robustness of the CLIP model against image corruption. BAT-CLIP adapts both CLIP’s visual and text encoders by updating LayerNorm parameters. During adaptation, in addition to minimizing entropy loss, two additional loss functions leverage pseudo-labels: $L_{pm}$ maximizes the projection of class prototypes with their corresponding text features, while $L_{sp}$ increases the cosine distance between the class prototypes. The method is evaluated on corrupted image datasets including CIFAR-10C, CIFAR-100C, and ImageNet-C."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The claim that the proposed method is the first to perform a bimodal test-time adaptation of CLIP for classification tasks is imprecise, see, e.g., Section 4 of [1]. Moreover, since maximizing the similarity between visual and text features (e.g., CLIP original training objective) and increasing inter-class separability are common practices, I do not consider the approach as particularly novel.\n- Experimental evaluation should be improved. (1) Some highly related SOTA methods [2, 3, 4, 5] lack detailed discussion and comparison. (2) Additional datasets are needed to further validate the effectiveness of the approach, such as ImageNet-V2, ImageNet-A, ImageNet-R, and ImageNet-Sketch, which are commonly used in the TTA of CLIP [3].\n- Contributions are not fully supported by experimental evidence and should be clarified. (1) In the ablation study (Table 4), it is unclear whether the significant improvement with the two additional loss terms can also be achievable without updating the text encoder. (2) To support the claim of efficient adaptation, comparisons with previous baselines, especially [3], in terms of FLOPs or at least forward/backward computation times are needed. It is important given that updating the text encoder seems computationally intensive, as all text prompts seem to be processed again through the text encoder at each update step.\n\n[1] Döbler, Mario, et al. \"A Lost Opportunity for Vision-Language Models: A Comparative Study of Online Test-Time Adaptation for Vision-Language Models.\" CVPR Workshop.\n\n[2] Sreenivas, Manogna, and Soma Biswas. \"Effectiveness of Vision Language Models for Open-World Single Image Test Time Adaptation.\" *arXiv preprint arXiv:2406.00481* (2024).\n\n[3] Niu, Shuaicheng, et al. \"Test-Time Model Adaptation with Only Forward Passes.\" ICML, 2024.\n\n[4] Ma, Xiaosong, et al. \"SwapPrompt: Test-Time Prompt Adaptation for Vision-Language Models.\" NeurIPS, 2023.\n\n[5] Zhang, Jingyi, et al. 
\"Historical Test-Time Prompt Tuning for Vision Foundation Models.\" NeurIPS, 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- In Section 4, Optimization, you mention \"For every new task, we reset the model parameters of CLIP following TENT (Wang et al., 2021) since our goal is to adapt to a single domain in an online manner.\" What do you mean by a task here? Is each corruption treated independently? Like single domain TTA protocol?\n- **CTTA Scenario:** How would BAT-CLIP perform in long range test sequences and continual TTA scenarios as studied in (CoTTA[1], RMT[2]). \n- **TSNE plots:** It is well known that there still exists a huge modality gap in CLIP feature space; Image-image features, text-text features are closer compared to image-text features. So, the text features form a cluster, away from image features, irrespective of the classes. This is studied extensively in several works[3,4]. So how are these plots obtained where the text features seem to be close to image features.\n- **BN-1 Results:** All the experiments are done on ViT architectures which do not have Batch-Normalization layers. This makes no sense. Do you mean Layer Normalization. If so, LN layers behave the same way training and testing time. So LN-1 would mean Zero shot CLIP evaluation only.\n- **TENT and SAR Baselines:** How are these adapted to CLIP? The objectives can be used. But as there are no BN layers in ViT, what parameters are updated? The comment \"SAR addresses performance degradation in TTA caused by batch normalization and batch-agnostic layers by filtering noisy test samples, from large gradients, with a stable entropy loss function\" makes no sense in ViT based TTA. Please clarify these. \n- **BN statistics based observations:** Again, in the section 'Adaptation for multiple iterations', the authors mention \"Continuous adaptation of the normalization parameters to a single batch can lead to over-fitting causing the mean and variance to bias towards the batch and degrading generalization\". This in the content of ViT architecture without BN layers needs to be justified."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The proposed method is simple and intuitive to understand. The paper is well presented and is easy to understand.\n- The motivation and experimental analysis on the performance degradation of CLIP under corruptions is well presented. \n- The method is very efficient compared to prior methods like TPT.\n- The experimental results show significant improvements compared to previous methods which are even more computationally expensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors address the problem of TTA in the context of VLMs. Specifically, they propose a bimodal TTA to improve the performance of CLIP for domain shifted datasets. They improve the performance by encouraging the text and image prototypes to match. And they also enhance the discrimination between the class prototypes. The layernorm parameters in both vision and text encoder are updated during test time."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- **Ablation:** Please report Zero shot CLIP results followed by the loss components. It appears that $L_{tent}$ sometimes worse than Zero shot CLIP for some cases. What happens if you only use $L_{tent}$, $L_{pm}$ and $L_{sp}$ individually. A better ablation study is where you study all the loss combinations, which I encourage the authors to present.\n- **Bimodal Adapatation:** As the primary motivation and difference from prior works is the need to do bimodal adaptation, experiments demonstrating the effectiveness of bimodal adaptation is missing. Some experiments like: What if you use $L_{pm}$ without updating the text encoder? How do you show bimodal adaptation is better than unimodal update of LN parameters of Vision encoder?, could be done to highlight the role of bimodal adaptation.\n- **Lacking strong experimental protocol:** All the experiments are done on only corruption benchmarks. How would BAT-CLIP perform in other domain shifts in datasets like DomainNet, VisDA comprising of Cartoon, Sketch kind of domains and ImageNet-variants like IN-R/Sketch/V2/A? While not a necessity, a stronger set of baseline methods would be to compare with more recent works like[4,5,6]. \n- **Projection matching loss:**: In eqn 2: What happens in low batchsize setting or when some classes are absent in the batch? How would the prototypes be computed for this loss? The prototypes are computed for every batch? Also, performing experiments on varying batchsizes would further demonstrate you method's effectiveness. Why is this a projection based loss where unnormalized protypes are used and not cosine similarity based, as in $L_{sp}$? Is there any specific reason for this choice?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@misc{\nmaharana2024batclip,\ntitle={{BAT}-{CLIP}: Bimodal Test-Time Adaptation for {CLIP}},\nauthor={Sarthak Kumar Maharana and Baoming Zhang and Leonid Karlinsky and Rogerio Feris and Yunhui Guo},\nyear={2024},\nurl={https://openreview.net/forum?id=z7PhIgVmZU}\n}"
},
"abstract": {
"value": "Although open-vocabulary classification models like Contrastive Language Image Pretraining (CLIP) have demonstrated strong zero-shot learning capabilities, their robustness to common image corruptions remains poorly understood. Through extensive experiments, we show that zero-shot CLIP lacks robustness to common image corruptions at increasing severity levels during test time, necessitating the adaptation of CLIP to unlabeled corrupted images using test-time adaptation (TTA). However, we found that existing TTA methods have severe limitations in adapting CLIP due to their $\\textit{unimodal}$ nature. To address these limitations, we propose $\\textbf{BAT-CLIP}$, a $\\textit{bimodal}$ TTA method specially designed to improve CLIP's robustness to common image corruptions. The key insight of our approach is not only to adapt the visual encoders for better image feature extraction but also to strengthen the alignment between image and text features by promoting a stronger association between the image class prototype, computed using pseudo-labels, and the corresponding text feature. We evaluate our approach on benchmark image corruption datasets and achieve state-of-the-art results in TTA for CLIP, specifically for domains involving image corruptions. Particularly, with a ViT-B/16 vision backbone, we obtain mean accuracy improvements of 9.7\\%, 5.94\\%, and 5.12\\% for CIFAR-10C, CIFAR-100C, and ImageNet-C, respectively."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": {
"value": [
"~Sarthak_Kumar_Maharana1",
"~Baoming_Zhang2",
"~Leonid_Karlinsky3",
"~Rogerio_Feris1",
"~Yunhui_Guo2"
]
},
"authors": {
"value": [
"Sarthak Kumar Maharana",
"Baoming Zhang",
"Leonid Karlinsky",
"Rogerio Feris",
"Yunhui Guo"
]
},
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Test-Time Adaptation",
"CLIP",
"Robustness"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": {
"value": "maharana|batclip_bimodal_testtime_adaptation_for_clip"
},
"pdf": {
"value": "/pdf/2300af7921f4328e633dbab4a5e0441288773a78.pdf"
},
"presentation": null,
"primary_area": {
"value": "transfer learning, meta learning, and lifelong learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/dff35ffa0bf4743d290cd6d455db8e394ca69f9e.zip"
},
"title": {
"value": "BAT-CLIP: Bimodal Test-Time Adaptation for CLIP"
},
"venue": {
"value": "ICLR 2025 Conference Withdrawn Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Withdrawn_Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||
z7QAz5y8Uz | FoGE: Fock Space inspired encoding for graph prompting | main | Active | llm;prefix tuning;graph;graph encoding;geometric algebra;Fock space | learning on graphs and other geometries & topologies | 3;5;5;6 | 4;1;3;3 | 2;2;3;3 | 2;2;3;2 | 2;2;2;2 | 4.75 | 2.75 | 2.5 | 2.25 | 2 | -0.473684 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Do you need to train separate adapters for each dataset, or is there a unified adapter for all datasets?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "it is a novel idea to use fock space inspired method to obtain graph embedding."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "Inspired by Fock space, the paper proposes a training-free graph encoder approach to align graph embeddings with the LLM's embedding space. Specifically, the paper uses a parameter-free scheme to obtain graph embeddings and then trains a linear layer to align these embeddings with the LLM's embedding space, enabling the model to handle graph tasks effectively with minimal adjustments to the architecture."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Lack of efficiency experiments. I agree with the authors that the graph encoding is parameter-free and efficient. However, the complexity is dominated by the LLM rather than the GNN, even though the LLM is frozen. When using large LLMs such as LLaMA-7B, the overall training time will not differ significantly whether the graph encoder requires training or not. Therefore, my concern is that the training-free graph encoder does not offer a noticeable efficiency improvement in terms of real training/application.\n\n- Justification of the use of LLMs. Why LLM should be used in graph reasoning task (without text attribute). It make senses to introduce LLM to help with textual graph tasks, because such tasks need the text reasoning ability, world knowledge from LLM. However, for traditional graph tasks such substructure count, shortest path etc, I didn;t see the necessity of using LLMs.\n\n- Comparison with RAG. In the introduction, the authors mention RAG and compare it with prefix tuning. However, I don’t see the relevance of this comparison. Prefix tuning is designed to adapt a model’s attention to new contexts with minimal parameter updates, whereas RAG combines retrieval mechanisms with generation to enrich the model’s knowledge base dynamically. The distinction between the two methods is significant, and it would help if the authors clarified why RAG was introduced here or provided a more targeted comparison relevant to graph encoding.\n\n- Imprecise Statements. The statement, \"RAG-based approaches for graphs primarily involve converting graphs to text, while prefix tuning with graphs uses modules to extract richer, task-relevant structures, requiring larger sample sizes and higher compute power,\" is unclear. Could you add references to RAG-based approaches where \"graphs primarily involve converting graphs to text\"? 
For example, in [1], retrieval is performed over graphs instead of text, and in [2], text is organized into graphs for retrieval.\n\nReferences:\n\n[1] G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering, NeurIPS 2024, https://arxiv.org/pdf/2402.07630\n\n[2] From Local to Global: A Graph RAG Approach to Query-Focused Summarization, https://arxiv.org/pdf/2404.16130"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. Can you explain how the node representations $p_i$ and extra vector size vector $s$ are obtained? Will these representations affect the Fock-space inspired embeddings?\n2. How much performance gap is this task-agnostic graph encoding method with specially tuned ones?\n3. Can Fock-space inspired graph encoding understand high-order interactions between nodes?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. By leveraging Fock-space inspired graph encoding, this approach achieves nearly lossless parameter-free graph embeddings, containing rich graph structure information.\n2. They show that a simple linear layer can map the task-agnostic graph embeddings into the LLM embedding space, offering a low-computational approach to align graph representations with pre-trained language models.\n3. They conducted experiments to validate the informativeness of Fock-space inspired graph embeddings on basic graph-understanding tasks using a small neural network. Experiments also show that their embedding method performs the best in unsupervised approaches and is competitive with specialized supervised methods.\n4. This graph encoding method can adapt to different graph types, node-level embeddings and hypergraphs, making it suitable for a wide range of applications.\n5. They demonstrated that by using fock-space graph embedding, the LLM can do better at graph understanding and reasoning."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposed a novel approach to obtain powerful and model-agnostic graph representations that can be used as prompts to augment LLM’s capabilities of answering graph-related questions. By leveraging Fock spaces, a concept from mathematical physics, they achieved almost lossless task-agnostic graph embeddings capturing the diverse graph structure and information. A lightweight linear adapter is then adopted to map the rich Fock-space inspired embeddings into an LLM’s embedding space, making prefix-tuning effective for various graph-related tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Although the method is task-agnostic, it might underperform on specialized tasks without additional tuning.\n2. Although the method can achieve nearly lossless parameter-free graph embeddings, there’s no study whether it can effectively capture high-order interactions between nodes, which is important in complex graph reasoning."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Integrating graph encoders with LLMs to address graph-related tasks is a convincing approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a parameter-free graph encoder using Fock space representations to improve LLMs in answering graph-related questions. This simple approach provides rich encodings for diverse graphs, enabling effective prefix-tuned prompts for pre-trained LLMs, simplifying and generalizing existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. **Unclear motivation**: The motivation of this paper is not clear. Many previous works leverage graph encoders to learn graph representations for integration with LLMs [1,2], and the overall framework appears similar.\n2. **Potential confusion with graph prompting methods**: In this work, the authors use graphs to prompt LLMs, which could be confused with existing graph prompting methods [3,4,5]. The authors should discuss the differences between these approaches to help readers better understand the purpose of this work.\n3. **Impact of graph encoder choice**: Will the choice of graph encoder affect the performance of the proposed method? Some analytical experiments may provide a better understanding of the impact of the graph encoder choice.\n\n\n[1] Chen, Runjin, et al. \"LLaGA: Large Language and Graph Assistant.\" Forty-first International Conference on Machine Learning.\\\n[2] Tang, Jiabin, et al. \"Graphgpt: Graph instruction tuning for large language models.\" Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024.\\\n[3] Liu, Zemin, et al. \"Graphprompt: Unifying pre-training and downstream tasks for graph neural networks.\" Proceedings of the ACM Web Conference 2023. 2023.\\\n[4] Sun, Mingchen, et al. \"Gppt: Graph pre-training and prompt tuning to generalize graph neural networks.\" Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022.\n[5] Yu, Xingtong, et al. \"Few-shot learning on graphs: from meta-learning to pre-training and prompting.\" arXiv preprint arXiv:2402.01440 (2024)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Could you explain how to obtain the size vector $s$ in the encoding process?\n- Table 4 highlights the performance on hypergraphs compared to zero-shot and few-shot scenarios. Could you provide more details on how to generate the corresponding prompts?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper proposes a theoretical lossless Fock-space-based method to generate graph representation and can handle graphs with attributes. The use of parameter-free encoding simplifies the design compared to other GNNs or graph transformers.\n- This method generalizes across various types of graphs and the experiments validate its performance on many tasks. It generates graph embedding that is not specific to each task and performs better than other specialized models in some cases."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes FoGE, which is a parameter-free Fock-space-based method to generate graph representations. The method can encode arbitrary graphs into embeddings and train a linear adapter that aligns these embeddings with a frozen LLM via prefix-tuning. It can handle various graph types, including simple graphs, hypergraphs, and attributed graphs, and the experiments show that this method can achieve competitive performance with baseline models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- It is not clear whether the LLM contributes to understanding graph structures or merely functions as a text generation module. The interaction between the LLM and graph embeddings should be further explored. For example, it would be insightful to see how the graph embeddings perform if they were used to train a simple MLP or other lightweight models for downstream tasks. This would help clarify whether the LLM adds value beyond generating natural language outputs.\n- In Table 5, the proposed method underperforms relative to GraphLLM across several tasks, which brings concerns about whether FoGE can consistently achieve better performance in advanced graph reasoning tasks. \n- The experimental settings are not thoroughly discussed, and details about the training process and hyperparameters are missing. Providing more transparent implementation details would improve the reproducibility of this work."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024foge,\ntitle={Fo{GE}: Fock Space inspired encoding for graph prompting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z7QAz5y8Uz},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent results show that modern Large Language Models (LLM) are indeed capable of understanding and answering questions about structured data such as graphs. Existing proposals often use some description of the graph to create an ``augmented'' prompt fed to the LLM. For a chosen class of graphs, if a well-tailored graph encoder is deployed to play together with a pre-trained LLM, the model can answer graph-related questions well. Existing solutions to graph-based prompts range from graph serialization to graph transformers. In this work, we show that the use of a parameter-free graph encoder based on Fock space representations, a concept borrowed from mathematical physics, is remarkably versatile in this problem setting. The simple construction, inherited directly from the theory with a few small adjustments, can provide rich and informative graph encodings, for a wide range of different graphs. We investigate the use of this idea for prefix-tuned prompts leveraging the capabilities of a pre-trained, frozen LLM. The modifications lead to a model that can answer graph-related questions -- from simple graphs to proteins to hypergraphs -- effectively and with minimal, if any, adjustments to the architecture. Our work significantly simplifies existing solutions and generalizes well to multiple different graph-based structures effortlessly."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"llm",
"prefix tuning",
"graph",
"graph encoding",
"geometric algebra",
"Fock space"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/510a6e64f08aedcaf8f2befad82f64749aba621b.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "FoGE: Fock Space inspired encoding for graph prompting"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z8PcUSKXXN | Random Is All You Need: Random Noise Injection on Feature Statistics for Generalizable Deep Image Denoising | main | Active | Image Denoising;Low-Level Vision;Generalization Problem | applications to computer vision, audio, language, and other modalities | 5;5;6;6 | 4;4;4;3 | 2;2;3;3 | 2;2;3;3 | 3;2;3;3 | 5.5 | 3.75 | 2.5 | 2.5 | 2.75 | -0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "In real-world scenarios, noise is highly spatially correlated and dependent on signal intensity, which inevitably incorporates image information, making it challenging to eliminate. From a theoretical standpoint, can Gaussian denoisers truly address the complexities of real noise?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- **Innovative Approach**: The paper offers a fresh perspective on the problem of image denoising.\n\n- **Thorough Experimental Analysis**: The experimental analysis is extensive and well-structured.\n\n- **Generalization Capability**: The proposed architecture effectively generalizes across various noise distributions, even when trained exclusively on white Gaussian noise. In the experiments conducted, it consistently outperforms competing methods in denoising performance. I believe that the concept of noise injection to enhance the generalization capabilities of image-denoising architectures holds significant potential and could attract interest from the research community."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper revisits the challenge of generalizable image denoising, focusing on a training process that utilizes only Gaussian noise while testing encompasses various noise types. The authors reveal that models trained on different noise distributions yield distinct feature distributions. To enhance generalization capabilities, they introduce a novel neural network architecture that effectively handles multiple noise distributions when trained on white Gaussian noise. Central to this innovation is a noise injection block that integrates random noise into the features processed through the network layers.\n\nSpecifically, the architecture employs a U-Net type encoder-decoder structure, incorporating down and up convolutions, normalizations, ReLUs, and skip connections between the encoder and decoder. The noise injection blocks are strategically placed after each downsample bundle, which consists of Downsampling Convolution, Normalization, and ReLU.\n\nIn their experiments, the authors train the proposed network on white Gaussian noise with a sigma of 15, demonstrating its ability to generalize to various non-Gaussian noises, including (1) Speckle noise, (2) Salt & Pepper noise, (3) Poisson noise, (4) a mixture of noises (1)-(3) at different levels, (5) Image Signal Processing Noise, and (6) Monte Carlo rendered image noise."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I believe the most critical aspect of this task is its generalization capability. While the authors included the SIDD dataset in their experiments, they did not incorporate additional real-world noise datasets."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- Aside from the noise injection module, are there other technical contributios that could be highlighted as strengths of the model?\n- How does the model perform when noise is injected only into the image space (input)?\n- Real-wolrd noise adaptation (e.g., LAN[1*]) is already in use. How does the model generalize to real-world noise? For instance, how does a model trained on SIDD peform on PolyU[2*] or NAM[3*] dataset?\n- I consider the techinical contribution to be limited, but the performance is sufficiently high, so I gave it a rating of 5. I would consider increasing the rating if it can be shown to generalize well to real-world noise.\n---\n[1*] Kim, Changjin, Tae Hyun Kim, and Sungyong Baik. \"LAN: Learning to Adapt Noise for Image Denoising.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n[2*] Jun Xu, Hui Li, Zhetong Liang, David Zhang, and Lei\nZhang. Real-world noisy image denoising: A new benchmark. arXiv preprint arXiv:1804.02603, 2018\n[3*] Seonghyeon Nam, Youngbae Hwang, Yasuyuki Matsushita,\nand Seon Joo Kim. A holistic approach to cross-channel image noise modeling and its application to image denoising.\nIn CVPR, 2016."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- Performance is improved simply by adding noise in the feature space.\n- Builds upon existing experiments, effectively demonstrating the impact of this approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "- This paper enhances the generation performance of denoising by injecting random noise into the feature space, resulting in performance improvements over exisiting SOTA methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The techincal contribution is quite incremental.\n- Although noise injection at the feature level is effective, it is not particularly novel.\n- For example. while Appendix A.3 theorectically demonstrates the effect of this approach, applying random noise to input data across various training datasets might yield similar results. In this regard, it may be challeging to assert that this method is definitively more effective than previous SOTA methods.\n- While noise injection at the feature level in encoder can induce more nonlinearity in the noise distribution compared to input-only injection, claming this as a major technical strength might be an overstatement.\n- This method seems applicable to most encoder-decoder architectures, which might be a strength over proposing a completely new architecture."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to Weaknesses section for major questions.\n\n- Given that generalization to real-world conditions is the ultimate goal of the work, could the authors provide additional quantitative and qualitative results on various real-world benchmarks, such as the DND, Poly, CC datasets?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The paper presents a denoising model, RNINet, which achieves higher denoising performance than MT.\n- RNINet demonstrates strong performance across diverse synthetic out-of-distribution (OOD) noise settings, highlighting its robustness beyond standard training conditions.\n- RNINet demonstrates improved efficiency, operating at 0.1x the runtime of MT, an essential advantage as image denoising often precedes various downstream computer vision tasks.\n- The paper is well-written and accessible, offering clear explanations and comprehensive ablation studies that enhance understanding.\n- Authors provided the model to reproduce the results."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces RNINet, an architecture designed to improve generalization in deep image denoising. Unlike traditional denoising methods that often overfit specific noise types, RNINet incorporates a noise injection block to inject random noise into feature statistics during training, enabling the model to adapt to unseen noise types. RNINet enhances both denoising performance and computational efficiency, surpassing the Masked Training (MT) method. Extensive experiments demonstrate RNINet's superiority in handling various noise conditions while maintaining lower computational cost and achieving faster inference speeds."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The approach of injecting random noise into feature maps for enhanced generalization has been explored previously, and the addition of feature statistics such as mean and variance feels incremental rather than novel.\n- If the primary objective is generalizable denoising, the model’s practicality in real-world scenarios should be further substantiated. Real-world evaluation is limited to the SIDD dataset, where performance metrics are relatively poor (although superior to the other methods in the table). Additionally, Figure 8 displays only a single, easy-to-denoise image that lacks sufficient visual details, making it unconvincing as evidence of real-world noise removal capabilities.\n- While the paper emphasizes RNINet’s superiority over MT, there is insufficient theoretical explanation regarding how RNINet addresses MT’s limitations.\n- The visual outcomes in Figure 5 do not appear substantially improved compared to MT, raising questions about the perceptual gains claimed. To enhance the clarity and persuasiveness of visual comparisons, could each visual example include quantitative metrics (e.g. PSNR, SSIM)?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Regarding the experimental setup, the authors could try to train the model on a more general training set, rather than fixed Gaussian noise, and then compare the generalization ability."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ The paper introduces a noise injection technique that directly manipulates feature statistics, which is an approach to improving generalization in image denoising.\n+ This work surpasses previous methods, such as masked training denoising, on both in-distribution and some out-of-distribution datasets.\n+ The authors used Deep Degradation Representation (DDR) for further analysis to evaluate the network's generalization capabilities."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces RNINet, an architecture for generalizable deep image denoising. The key innovation is the noise injection technique, which injects random noise into feature statistics and alters them to represent potential unseen noise domains. This allows the model to generalize well despite being trained only on Gaussian noise. The authors demonstrate RNINet's performance across multiple noise types and levels compared to both specialized and generalizable denoising methods. They also provide an analysis of the feature statistics to validate their approach."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- PSNR and SSIM have limitations in their accuracy of assessment in some aspects; the authors might consider adding more metrics, such as LPIPS.\n- The paper could benefit from including more ablation studies to further explore the proposed method."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024random,\ntitle={Random Is All You Need: Random Noise Injection on Feature Statistics for Generalizable Deep Image Denoising},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z8PcUSKXXN},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in generalizable deep image denoising have catalyzed the development of robust noise-handling models. The current state-of-the-art, Masked Training (MT), constructs a masked swinir model which is trained exclusively on Gaussian noise ($\\sigma$=15) but can achieve commendable denoising performance across various noise types (*i.e.* speckle noise, poisson noise). However, this method, while focusing on content reconstruction, often produces over-smoothed images and poses challenges in mask ratio optimization, complicating its integration with other methodologies. In response, this paper introduces RNINet, a novel architecture built on a streamlined encoder-decoder framework to enhance both efficiency and overall performance. Initially, we train a pure RNINet (only simple encoder-decoder) on individual noise types, observing that feature statistics such as mean and variance shift in response to different noise conditions. Leveraging these insights, we incorporate a noise injection block that injects random noise into feature statistics within our framework, significantly improving generalization across unseen noise types. Our framework not only simplifies the architectural complexity found in MT but also delivers superior performance. Comprehensive experimental evaluations demonstrate that our method outperforms MT in various unseen noise conditions in terms of denoising effectiveness and computational efficiency (lower MACs and GPU memory usage), achieving up to 10 times faster inference speeds and underscoring it's capability for large scale deployments."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Image Denoising",
"Low-Level Vision",
"Generalization Problem"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/57d92a8268ee81c984801c8cb1c98c789ce99bc0.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/9d679153c8cd63bc137acfbfcf20c34b79bb672b.zip"
},
"title": {
"value": "Random Is All You Need: Random Noise Injection on Feature Statistics for Generalizable Deep Image Denoising"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z8sxoCYgmd | LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models | main | Active | LMMs;Deepfake;Multimodality | datasets and benchmarks | 6;8;8;8 | 4;5;4;5 | 3;3;3;3 | 3;4;3;3 | 3;3;4;3 | 7.5 | 4.5 | 3 | 3.25 | 3.25 | 0.57735 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No follow-up questions."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "+ LOKI is a novel, multimodal dataset.\n\n+ The paper is easy to read and well-organised.\n\n+ Comprehensive Evaluation and Validation.\n\n+ Curates a diverse dataset with 18,000 questions across five modalities and 26 categories, providing a solid foundation for synthetic data detection evaluation.\n\n+ Detailed Annotations.\n\n+ Directly addresses the challenges of synthetic data proliferation, impacting security, misinformation, and content authenticity.\n\n+ LOKI’s findings on LMM strengths and weaknesses have the potential to drive advancements in synthetic data detection and multimodal model development."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces a novel benchmark designed to evaluate the capability of LMMs in detecting synthetic data across multiple modalities, including video, image, 3D, text, and audio. LOKI is structured to provide diverse modalities, cover 26 detailed categories, and offer multi-level annotations, enabling tasks that range from basic authenticity judgments to fine-grained anomaly selection and explanation. The benchmark consists of 18,000 questions across various levels of difficulty, allowing for a comprehensive analysis of LMMs in detecting synthetic content. Additionally, the paper includes evaluations of 22 open-source and 6 closed-source LMMs, revealing both the potential and current limitations of these models in terms of detection accuracy and interpretability."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The benchmark lacks a robustness test against common real-world conditions like compression artifacts. To enhance real-world applicability, the authors could include performance evaluations on compressed data."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Do you have a sense as to why the prompting strategies (FS, CoT) in general had a poor/neutral impact for most of the model/data pairs (Table 5)?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is well-written and easy to follow.\n\nThis work addresses a glaring and emergent need in a topic field: synthetic data detection for LLMs; I agree with the authors that there doesn't currently exist a comprehensive, multi-modal, nuanced dataset including explainability assessment for this domain area. \n\nExtensive examples and case studies provided in appendices. \n\nMany data domains are covered in this benchmark, including several categories and characteristics that are often underrepresented in synthetic data detection (e.g., satellite images, \"abnormal details\")."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The authors introduce LOKI, a novel benchmark designed to evaluate the ability of LMMs to detect synthetic data across multiple modalities. With the concurrent preponderance of synthetic data and the rise of powerful LLMs, the authors aim to address the emerging research topic of LMM evaluation for synthetic data detection. LLMs can provide reasoning behind authenticity judgments, benefitting explainability. The focus of LOKI is to evaluate the performance of LLMs on synthetic data detection tasks. \n\nIn particular, LOKI is aimed at addressing several shortcomings present in extant synthetic data evaluation datasets, including emphasizing human interpretability and multimodality. Overall, LOKI provides key improvements to synthetic data detection, including: diverse modalities, heterogenous categories, multi-level annotations and multimodal synthetic evaluation. framework."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While the differentiated modalities, categories and annotation levels are beneficial, the overall size of the dataset actually seems relatively small vis-a-vis related datasets (Table 1). \n\nIt is unclear to me, how a user can methodically compare scores for different models across tasks/categories (e.g., in Table 2); perhaps the authors can address this, given the heterogenous and imbalanced nature of the data modalities and tasks, as well as the problem/domain \"difficulty\". \n\nAs deepfake detection is one of the most prominent synthetic data detection categories today, I believe the benchmark would benefit from its inclusion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.\tIn the fine-grained anomaly selection task, all samples were manually reviewed to ensure question quality. However, how can the robustness of the task design be ensured in future large-scale applications?\n2.\tCould you provide additional insights into why Claude-3.5-Sonnet tends to misclassify synthetic images as real? For instance, does this result from limitations in the model's ability to recognize fine-grained abnormalities, or are there certain types of synthetic image features that are particularly challenging for the model?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThe paper is well written and easy to follow. The authors provide sufficient technical details for readers to understand their work.\n2.\tThe benchmark designed by the authors encompasses a rich variety of modalities and diverse question types, enabling a comprehensive evaluation of LMM performance.\n3.\tThe authors introduce a metric called the Normalized Bias Index (NBI) to quantify the performance differences of the model on natural and AI-generated data across different modalities, which is an innovative way to assess model bias."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces LOKI, a novel benchmark for evaluating large multimodal models (LMMs) in detecting synthetic data across multiple modalities, such as video, image, text, and audio.This benchmark features broad-level assessments, including judgment and multiple-choice tasks, alongside detailed anomaly selection and explanation exercises, providing an in-depth evaluation of large multimodal models (LMMs)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe current evaluation mainly relies on accuracy and NBI; however, at low recall rates, NBI may not adequately reflect model bias. Additionally, the design of NBI may be insufficient to comprehensively capture various types of bias exhibited by the model.\n2.\tThe paper mentions that the model exhibits \"bias\" across different modalities. However, the specific causes of this bias are not thoroughly explored through experiments or comparative analysis. This conclusion may be based on surface-level observations without further investigation into whether the bias arises from data, model architecture, or task design.\n3.\tThe paper mentions that the Chain-of-Thought (CoT) approach can impact model performance in image and 3D reasoning tasks. However, it does not provide sufficient experimental details to clarify whether CoT significantly enhances performance across all types of tasks or if it is only effective for the specific tasks currently evaluated.\n4.\tIt is suggested to discuss and compare more related works such as [1,2] in this paper.\n\n[1] Detecting and Grounding Multi-Modal Media Manipulation and Beyond. TPAMI 2024.\n\n[2] Detecting and Grounding Multi-Modal Media Manipulation. CVPR 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Quality and Realism of Synthetic Data: How does the synthetic data in LOKI reflect the latest advancements in generative models? Are there measures taken to ensure that the synthetic data poses a realistic challenge for detection models?\n- Prompting Strategies Implementation: Could you elaborate on how chain-of-thought and few-shot prompting were implemented across different models? Specifically, how did these strategies impact models that are less capable of handling long contexts or reasoning tasks?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Comprehensive Multimodal Benchmark: LOKI covers an extensive range of data modalities and subcategories.\n- Inclusion of specialized domains like satellite imagery, medical images, and philosophical texts pushes the boundaries of traditional datasets and tests models in less-explored areas.\n- Multi-Level Task Design: The benchmark doesn't just focus on binary classification but also includes tasks that assess models' abilities to explain their reasoning, promoting the development of interpretable AI systems.\n- Highlighting the Importance of Explainability\n- By testing perception, knowledge, and reasoning across modalities, LOKI contributes to the broader goal of advancing towards AGI."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces LOKI, a comprehensive benchmark designed to evaluate the capabilities of Large Multimodal Models in detecting synthetic data across multiple modalities. Recognizing the rapid advancement of AI-generated content and the associated risks of synthetic media proliferation, the authors aim to assess how well LMMs can discern real data from AI-generated counterparts.\n\nLOKI encompasses a diverse set of data modalities, including video, image, 3D models, text, and audio, covering 26 detailed subcategories such as satellite images, medical images, philosophical texts, and various audio types like music and environmental sounds. The benchmark includes over 18,000 carefully curated questions with varying levels of difficulty.\n\nThe tasks within LOKI are multi-faceted:\n\nJudgment Tasks: Binary classification to determine if a piece of data is real or AI-generated.\nMultiple-Choice Questions: Selecting the AI-generated item from a set of options.\nAbnormal Detail Selection: Identifying specific anomalies in synthetic data.\nAbnormal Explanation Tasks: Providing explanations for why data is identified as synthetic.\nThe authors evaluated 22 open-source LMMs and 6 closed-source models (including GPT-4 and Gemini) using LOKI. Their findings highlight that while LMMs show promise in synthetic data detection and offer interpretability advantages over traditional expert models, they also exhibit significant limitations. Models tend to have biases, lack domain-specific knowledge, and display unbalanced performance across different modalities."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Limited Performance in Certain Modalities: The benchmark reveals that LMMs perform poorly in modalities like 3D and audio, which may be due to the lack of available models or training data in these areas.\n- Insufficient Details on Data Generation Methods: The paper could provide more in-depth information on how synthetic data was generated for each modality, which is crucial for reproducibility and understanding potential biases in the dataset.\n- Evaluation of Few-Shot and Chain-of-Thought Prompting: The analysis of prompting strategies is somewhat limited."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024loki,\ntitle={{LOKI}: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z8sxoCYgmd},\nnote={under review}\n}"
},
"abstract": {
"value": "With the rapid development of AI-generated content, the future internet may be inundated with synthetic data, making the discrimination of authentic and credible multimodal data increasingly challenging. Synthetic data detection has thus garnered widespread attention, and the performance of large multimodal models (LMMs) in this task has attracted significant interest. LMMs can provide natural language explanations for their authenticity judgments, enhancing the explainability of synthetic content detection. Simultaneously, the task of distinguishing between real and synthetic data effectively tests the perception, knowledge, and reasoning capabilities of LMMs. In response, we introduce LOKI, a novel benchmark designed to evaluate the ability of LMMs to detect synthetic data across multiple modalities. LOKI encompasses video, image, 3D, text, and audio modalities, comprising 18K carefully curated questions across 26 subcategories with clear difficulty levels. The benchmark includes coarse-grained judgment and multiple-choice questions, as well as fine-grained anomaly selection and explanation tasks, allowing for a comprehensive analysis of LMMs. We evaluated 22 open-source LMMs and 6 closed-source models on LOKI, highlighting their potential as synthetic data detectors and also revealing some limitations in the development of LMM capabilities. More information about LOKI can be found at https://loki102.github.io/LOKI.github.io/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"LMMs;Deepfake;Multimodality"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/04095f0bcc33b544c8db485a5e9fffb0147fc64a.pdf"
},
"presentation": null,
"primary_area": {
"value": "datasets and benchmarks"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z9CCkjVY0h | Augmented Flow Matching via Variance Reduction with Auxiliary Variables | main | Active | generative modeling;flow matching | generative models | 1;3;5;6 | 5;4;4;3 | 2;2;3;3 | 2;2;3;2 | 3;2;3;3 | 3.75 | 4 | 2.5 | 2.25 | 2.75 | -0.920575 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How much the variance can be reduced by the proposed method?\n2. Can the authors provide some criteria for choosing an appropriate augmented dimension K?\n3. Can the authors explain more on the results shown in Table 3?\n4. In Table 2, the result with 2nd-order Heun solver and AugDim=3 seems abnormally bad, can the authors explain the reason?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The investigated topic is interesting. The paper is well motivated. The proposed method is easy to implement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a novel method to reduce the variance of flow-matching loss in ODE-based generative models. The authors show both theoretically and empirically that adding auxiliary variables that are correlated to the training data reduces the variance of the target. Based on this, they propose to construct the auxiliary variables through a random projection of the training data. Experimental results confirm the effectiveness of their proposed method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. It's unclear how much the proposed method can reduce the variance when the data distribution is different.\n2. It requires more demonstration on the robustness of the proposed algorithm with respect to the choice of random projection matrix P.\n3. The effect of the proposed method highly depends on the number of auxiliary dimensions, but it lacks a criterion to determine it beforehand.\n4. The arrangement of figures and tables looks messy.\n5. The notations in Section 3 needs further clarification."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I do not have any further specific questions, and am open to discussing the weaknesses above"
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "The idea of augmenting a flow with additional variables is appealing as a way of changing the reference path so it's hopefully easier to learn or integrate."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper studies the use of augmented variables for improving flow matching. The augmented variables change the reference probability path, and make it so the variance of the conditional estimator (eq 6) is lower (prop 1): my understanding here is that there are not as many overlapping paths in the augmented space. Algorithm 1 shows the proposed way of augmenting the path, and S3.3 discusses some design choices on how to choose the path. The experimental results apply this to synthetic 2D flows (S5.1), CIFAR-10 generation (S5.2), and embryonic cell evolution (S5.3)"
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While I do agree with the motivation for the paper, I have marked it for a strong reject because I feel that the experimental results are insufficient in the current form. I am very open to discussing these through the rest of the review period.\n\n1. My biggest concern is that the experimental results do not improve upon the best-known flow matching results in any setting, and do not adequately compare to other flow matching variations. This is concerning as the idea and implementation of augmented variables is straightforward and easy to try in every flow matching setting and application. Concretely:\n + **1a.** In CIFAR-10 modeling: rectified flow reports an FID of 2.58 in comparison to the submitted paper's best FID of ~3.5 --- these comparisons are omitted in the submitted paper. The experimental results of the paper indicate/argue that the augmentations are helpful for lower NFE, so it seems fair to compare to other flow matching modifications that also create straighter paths, such as rectified flows evaluated with the Euler/2nd-order Huen methods, or mini-batch or multisample flow matching.\n + **1b.** Flows and transport on the embryonic cell evolution on the Waddington OT dataset by Schiebinger et al. (2019) have been extensively studied, and cited almost 1000 times. This dataset/setting is used in Figure 5 and other parts of the submitted paper, but do not compare to or reference any previously published results on this dataset. This makes it extremely difficult to assess the experimental comparison.\n2. I have another minor concern that the augmented variables could hurt the performance by taking away modeling capacity. I believe this is why they carefully select the number of augmenting variables to only be a few, as likely the performance is significantly hurt otherwise, 2) Table 2 stops the NFE at 100 steps for Euler and 60 steps for the 2nd-order Huen. 
These seem chosen at exactly the NFE where the effect of the flows with the augmented variables disappears, so it's possible the FID is hurt when integrating with a highly accurate ODE solver."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Q1. How can one choose the number of evaluations and the augmentation dimension?\n\nQ2. Please explain if the experimental results are derived from multiple runs. If so, please characterize the variability in performance. Please conduct statistical significance tests where appropriate to establish that there is a meaningful difference between the baseline and the proposed approach. \n\nQ3. Please comment on other improvement approaches. (i) Please compare the achieved performance to the previous methods (with the understanding that they are much more computationally heavy); and (ii) please explain whether there is evidence that the adoption of the proposed method in combination with these improvement strategies leads to improvement. If not, then the claim in the paper should be adjusted to acknowledge that there is potential, but it is not clear yet if the combination is useful. This avenue is particularly intriguing for me, since there have been recent proposals for simplified versions of rectified flow matching, for example, which might be computationally reasonable while retaining effectiveness. If the proposed strategy leads to further improvement, then this would be most welcome. \n\nQ4. Please provide a more detailed discussion of the cell evolution experiment. Why does the Wasserstein distance increase as NFE increases in Figure 5? This behaviour doesn’t appear to be evident for CIFAR-10, where the metrics are effectively monotonically decreasing with NFE."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "S1. The paper proposes a novel idea that is very simple yet effective. It is easy to implement and has a relatively low computational and memory overhead. There is the potential, albeit unexplored, to combine the approach with other flow matching improvement techniques. \n\nS2. The motivation for the proposed technique is well-developed, with useful examples to aid understanding and support the intuition. \n\nS3. The paper provides experimental results for synthetic data, as well as image and cell evolution data."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper addresses the task of flow matching for generating an ODE such that its path traverses between two distributions. To reduce variance during training, the paper proposes an augmented flow matching framework that introduces auxiliary variables that are correlated to the training pair. The paper provides some empirical results to demonstrate that the proposed training procedure leads to improved performance."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "W1. While there is no need for a technique to be unnecessarily complicated, and a paper should not be penalized for proposing a simple method, especially when it is effective (there is elegance and robustness in simplicity), the technical contribution of the paper is limited. There are several important issues that are not resolved. Perhaps most importantly, the experimental results clearly indicate the importance of the choice of the augmentation dimension and the number of function evaluations. The proposed technique even makes things worse (at a slightly higher computational and memory cost) if there is not a judicious selections. The paper would be considerably stronger if it investigated how to choose these parameters and proposed an effective strategy. \n\nW2. The experiments provide some support for the claim that the proposed method leads to improved performance, but the experimental analysis is not convincing or thorough. The paper does not report any measure of the variability in the experiments. It is not clear to me if these are the results from a single run or the average over multiple runs. There is no attempt at statistical significance testing to establish that the performance differences are meaningful. This is a particular concern when there is inconsistent behaviour. For example, in Table 2 Heun, the performance for AugDim=3 is consistently worse for all NFE than both AugDim=2 and AugDim=4. This doesn’t really make sense. It raises concerns about the consistency of the results and the experimental assessment. \n\nW3. The paper claims that “This approach can be plugged-in to other existing training methods that enhances efficiency, such as optimal transport or curvature-minimizing approach.” While it may be true that it can be combined with these approaches, the paper provides no evidence that the combination is useful. 
In general, the paper does not compare experimentally to any other methods that are designed to improve the performance of flow matching beyond basic independent sampling. It would at least be useful to understand whether the technique, when employed on CIFAR-10, for example, outperforms or approaches the performance of some of the much more computationally burdensome strategies. For example, how does it compare to ReFlow?\n\nW4. The discussion for the cell evolution experiment could be improved. Qualitatively, the distributions obtained using the augmented flow matching approach are better, but they are still very different from the original. The discussion is limited to a statement that the proposed approach leads to a collection of samples that “better follows the distribution of true points”. Are they good enough? How can one tell?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "No major questions."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The paper is well written and the idea is clear.\n2. The modification is computational light.\n3. The variance reduction strategy is simple and general, and it can be easily adapted to different situations by different design of $Y$."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes an augmented flow matching (AFM) framework, which reduces the conditional variance by introducing auxiliary random variable correlated to training pair (use linear combination for simplicity in this paper). After justifying the claim and validating the proposed method by 2D synthetic data, they applied their method to single embryonic cell evolution and CIFAR-10 dataset."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The major concern is mentioned in the discussion\n2. A typo? line 205, \"where $X_0$ is drawn...\" should be \"where $(X_0, X_1)$ is drawn...\""
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "Dimension augmentation on training data reduces training variance and achieve fast and efficient sampling."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024augmented,\ntitle={Augmented Flow Matching via Variance Reduction with Auxiliary Variables},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z9CCkjVY0h},\nnote={under review}\n}"
},
"abstract": {
"value": "Flow matching is a simulation-free approach that scalably generates an ODE, in which its path traverses between two different distributions. However, conventional flow matching relies on the training pairs drawn independently, inducing high variance that might slow down training process and degrade the performance upon training. To mitigate this, we propose augmented flow matching, a simple yet efficient framework that can be ubiquitously applied to flow matching with slight modification to the models. We first find that when some auxiliary variables that are correlated to the training data, then they contribute on variance reduction of the flow matching loss estimation, when used together with the training data pair. With this observation, we construct auxiliary variables that are correlated to the training pair, which is obtained by simple and effective linear operation from the input data. Finally, we show that with this simple modification on the training phase, we achieve the improved model flexibility and performance when the ODE is applied on the learned model."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"generative modeling",
"flow matching"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/d01339f23d4c12b0eb5e08ba0ff6e2197efc8a41.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/e48c0c2ffd199973486902331f426668336b7119.zip"
},
"title": {
"value": "Augmented Flow Matching via Variance Reduction with Auxiliary Variables"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z9UABOHCZc | GeoTimeCLIP: Unveiling the When and Where of Images | main | Active | time prediction;geolocalization;contrastive learning;metric learning | applications to computer vision, audio, language, and other modalities | 3;5;6;6 | 4;5;5;4 | 3;3;4;3 | 1;3;4;3 | 3;3;4;3 | 5 | 4.5 | 3.25 | 2.75 | 3.25 | 0.408248 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "I don't have any ethics concern."
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "What is the intuition behind joint training two-CLIP framework? Since two tasks (time prediction and geo-localization) are very different, separate training may be more effective than joint training."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Yields a good performance on time prediction tasks.\n2. Propose a new yet simple time prediction metric."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This submission presents a simple method to predict time and geo-location from a given image. This method adopts the CLIP framework which aligns image embedding and text embedding in the representation space by treating time and geo-location as texts. Considering Cyclic nature of time in terms of days and years, text embeddings for time are computed using Random Fourier Features. On the other side, predicting geo-location follows the existing GeoCLIP framework. To evaluate time prediction, this submission also introduce a evaluation metric (i.e., Time Prediction Score). Based on this TPS metric, the proposed method outperforms baselines on the zero-shot time prediction task and yields comparable accuracies on the geo-localization task."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Lack of novelty\n\nThe only novel contributions that the authors can claim in this work are that they propose a task-specific learning framework in the form of CLIP and a loss that takes into account the cyclic nature of time. However, the portion of the CLIP framework that is appropriately designed for the task (i.e., a pair of CLIPs in the unified framework) is very minimal to be considered a crucial contribution. The RFF loss, which was designed to utilize the cyclic nature in terms of frequency, does not seem to make a large contribution. It would be helpful to better understand the proposed loss if the RFF loss values matching the actual cyclic nature can be visualized.\n\nMoreover, since the proposed methods are designed and evaluated for too narrow a task, it is questionable whether they can be used more generally.\n\n\n2. Insufficient performance\n\nIn geo-localization, the performance was worse than some compared methods. If joint training of two tasks in an integrated framework results in better performance on only one task, then the need for joint training must be questionable."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "I have two concerns about the paper:\n 1. the novelty and the effectiveness of the proposed method.\n\nThe first part of the method is img-geo alignment. I found it almost the same as the method in the GEOCLIP paper. I cannot find much difference. Also in the img-time alignment part, the time encoder has exact the same architecture as the location encoder.\n\nAfter I finish reading the paper, I think the novelty lies in (1) the temporal metric learning, (2) predict geo and time jointly (3) normalize both month and hours to be continuous. The first one is the most important novelty in my understanding.\n\n1.1 about the temporal metric learning\n\nFor the proposed temporal metric learning, I am curious about the training stability of using it, since the time distribution varies across batches, which will bring instability during training so that calculating KL-loss might be unstable. \n\nAlso in Table 2, the cyclic loss did better in month prediction but did worse in hour prediction than L2 loss. In this way, why the proposed cyclic loss does not improve the prediction results for hour? You explain it as \"This small difference might be attributed to the fact that the dataset is biased towards daytime \". But L2 loss and cyclic loss experiments used the same dataset. Could you put the hour confusion matrix or prediction error distribution cross time in the supplementary to help readers have a deeper analysis on such phenomenon? Otherwise, the reason you give is not sound, because intuitively, cycle loss is much more reasonable.\n\n1.2 about the prediction results\n\nIn Table 1, you compare the time prediction results between time-clip and geotime-clip. There is a sentence \" Our experiments demonstrate that GeoTimeCLIP achieves lower errors for month and hour predictions compared to all baselines, without the need for additional metadata.\" However, GeoTimeCLIP adds geographic location as an additional metadata input compared to TimeCLIP. 
\n\nAlso, the Geo-localization results in Table 5 is not that good. Why GeoTimeCLIP performs worse than GeoCLIP in some metrics for the dataset Im2GPS3k, since GeoTimeCLIP uses more input. I cannot find any analysis to such phenomenon. You give the statement for Table 1 that \" training a model for both time prediction and geo-localization simultaneously results in richer time representations\", so why \"training geo and time simultaneously\" not results in very good geo-location representation? \n\n2. lack deeper analysis on the combination influence between time and geo-info.\n\nTo be honest, I expect more on the joint analysis between time and geo-info, in addition to the retrieval and prediction results. It seems that the downstream task lacks deeper analysis on the combination influence between time and geo-info, since there is a significant correlation between geographical location and time features. For example, in high-latitude regions of the Northern Hemisphere, daylight hours are shorter in winter, and it gets dark earlier. In contrast, in low-latitude regions near the equator, daylight hours remain more stable, and sunset times do not vary significantly. However, I cannot find such kind of analysis."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- Predict both time and geo-info jointly.\n- The new time representation method for both month and hour is really good: normalize the month and hour in the same range. In this way, time values are converted as a pair of continuous numbers.\n- The Time Prediction Score provides a good evaluation taking both hour and month into account.\n- The proposed temporal metric learning sounds reasonable."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper learns both image-time embedding alignment as well as image-geo embedding alignment in a retrieval way. \nDays and years are represented using Random Fourier Features to handle cyclic patterns effectively.\nInstead of standard contrastive loss, a novel metric learning approach is used.\nShared embedding space facilitates downstream tasks, such as compositional and text-based retrieval."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "I have two concerns about the paper:\n1. the novelty and the effectiveness of the proposed evaluation method.\n2. lack deeper analysis on the combination influence between time and geo-info.\n\nIn the below \"Questions\" part, I will describe them in detail."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.The authors mention that GeoTimeCLIP is able to predict time and place without additional metadata, but in practical applications, how to ensure its accuracy? In particular, is the performance of the model affected by different geographical locations and seasonal changes?\n2.What do the authors think are the potential applications of GeoTimeCLIP's findings in future research? Are there plans to apply the method to other fields or to combine it with other techniques for further research?\n3.Does the model have additional requirements for the attributes of the image, such as tones, filters, etc.?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The GeoTimeCLIP method proposed in this paper has significant originality in the joint modeling of time prediction and geographical positioning. By representing time as a month-hour pair and considering the periodicity of time, the authors shed new light on the problem of time prediction. In addition, the proposed measurement learning method based on time difference overcomes the limitation of the traditional contrast learning method in time prediction, and presents an innovative improvement to the existing methods.\n2. The quality of the paper is reflected in its methodological rigor and experimental comprehensiveness. The author not only proposed a new model framework, but also verified the effectiveness of the method through a new benchmark test. GeoTimeCLIP excelled in the combined prediction of time and location, going beyond the baseline of optimizing time only and competing with expert-level geolocation methods, demonstrating its high quality research results. \n3. The structure of the paper is clear and the logic is rigorous. In the introduction part, the background and importance of the research are clearly expounded, and then the design idea and implementation details of the method are introduced in detail. The use of diagrams and formulas effectively assists readers in understanding complex concepts, making the paper as a whole easy to read and understand."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper, GEO TIME CLIP: UNVEILING THE WHEN AND WHERE OF IMAGES, proposes a basic retrieval method named GeoTimeCLIP, which can be used to combine time prediction and geopositioning of images. The study highlights the importance of accurately estimating the time and geographic location of image capture, especially in areas such as digital forensics, ecological research, and social media management. While existing methods typically rely on GPS data for time estimation, GeoTimeCLIP uses contrast learning to align images, time, and location by building a shared multimodal feature space."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. I recommend that the authors consider establishing a public benchmark dataset containing images from a variety of scenarios and conditions in future work. This would help promote research in this field and provide a valuable resource for comparative studies.\n2. The study does not address the effects of different latitudes and climatic conditions on time representation. It would be beneficial to consider these factors, as they could significantly impact the model's performance. Exploring this aspect could enhance the applicability of the model across diverse geographic regions.\n3. Although the current model shows improved performance, it lacks interpretability in its decision-making process. I recommend introducing interpretability techniques in future studies to help users understand how the model makes predictions about time and place. This would enhance user trust and provide insights into the model's internal workings."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "L248: It took a while to find the number of scales that were used here. I think this is defined on L739 as 3, but it's not associated with the $M$. Is this correct?\n\nEq 1 Why was this approach taken? Why not feed all features into one MLP? Why not use the concatenation of the MLP outputs?\n\nWhat is the impact of different image encodings?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "- The problem is interesting and breaks new ground relative to recent works on static geospatial embeddings.\n- The elements of the approach make sense.\n- The discussion of related work is solid.\n- The presentation is clear."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work proposes a unified embedding for location, time, and a ground-level image. This is similar to recent approaches, such as GeoCLIP, but extends the representation to include time. Briefly, a frozen CLIP encoder is used to represent the image features. A lightweight MLP is used to project this into the shared representation space. Two separate lightweight encoders are used to embed location and time. Sensible choices are made for the input positional encodings for both domains. Image-location similarity is optimized using a fairly standard contrastive loss and Image-time similarity is optimized using a combination of a contrastive loss and a distance loss."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper is generally well executed, but I do see a few issues:\n\n1) The problem domain is fairly niche. I don't see that as a major issue given the recent interest in image localization, timestamp estimation, and image embeddings in general.\n\n2) It seems odd to put image and time into the same embedding space since they are distinct concepts. It seems that having two distinct embeddings might facilitate additional applications instead of having the two representations intertwined. \n(2a) Along this direction, I would have liked to see some exploration of the learned embedding space. Do space and time happen to live in different subspaces? If so, that would point toward the potential value of just making this a hard constraint.\n\n3) The compositional image retrieval experiment doesn't seem to add much value. It seems like it would have been more interesting to look at what the similarity maps of the embeddings look like for arbitrary locations and times. These embeddings could emphasize the power of the embedding, or highlight areas for improvement.\n\n4) I would have liked to see more details about the re-implementation of Zhai 2019 and Salem 2022. For example, are these using the CLIP encoder or the weaker encoders that were used in the original papers? This is also an issue with Table 5, where some of the difference between the methods could be attributed to the difference in the underlying image encoder, not all of the additional aspects which are the claimed contributions of this work. This concern could easily be addressed by experiments across different backbones (perhaps a weaker and a stronger ImageNet pre-trained model). That should be fairly quick to do given the relatively lightweight nature of this approach.\n\n5) Perhaps I missed it, but it's not clear why the particular approach for the Temporal Metric was selected. There seem to be quite a few variants of this, but only L2 is evaluated as a baseline. 
The results don't seem particularly conclusive in Table 2.\n\nMinor issues:\na) The bibtex entries for at least a couple of the papers need work: Salem 2022 and Zhai 2019 are missing the venue."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024geotimeclip,\ntitle={GeoTime{CLIP}: Unveiling the When and Where of Images},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z9UABOHCZc},\nnote={under review}\n}"
},
"abstract": {
"value": "Timestamp prediction aims to accurately determine the date and hour at which an image was captured using only visual cues, with applications ranging from image retrieval and metadata correction to digital forensics. In outdoor scenes, this can be inferred from variables such as overall brightness, hue, and shadow positions for hourly estimations, as well as weather patterns or seasonal changes for determining the date. However, these factors vary greatly depending on geographical location, making the challenges of time-of-capture prediction closely related to geo-localization. To address this problem, we introduce GeoTimeCLIP, a novel method capable of simultaneously estimating both the capture time (i.e., hour and month) and geo-location (i.e., GPS coordinates) of an image using a retrieval approach. Our model employs an image encoder, a time encoder, and a location encoder, aligning the time and GPS embeddings with the image embeddings in a continuous high-dimensional feature space. Considering the cyclical nature of days and years, we propose an effective way to represent time using Random Fourier Features. To learn image-time embedding alignment, rather than applying a standard contrastive loss with hard positives and negatives, we propose a more effective metric learning-based objective, which provides soft targets by considering the time difference between samples over a toroidal manifold. We introduce new benchmarks for time prediction, where we show that our jointly optimized time-location-based method outperforms baselines optimized solely for time. We also evaluate our method on existing geo-localization protocols, demonstrating that our approach performs competitively with expert geo-localization methods. Our shared embedding space enables various downstream tasks, such as compositional retrieval and text-based retrieval."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"time prediction",
"geolocalization",
"contrastive learning",
"metric learning"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/8964e2c8f6889d09d1fee84cf4e7a5a2e55918b7.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "GeoTimeCLIP: Unveiling the When and Where of Images"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z9UBpl4pv5 | Structured Initialization for Attention in Vision Transformers | main | Active | Transformer;Learning theory;Initialization;ConvMixer;Attention map | learning theory | 3;5;5 | 5;4;4 | 2;3;3 | 2;2;2 | 3;3;3 | 4.333333 | 4.333333 | 2.666667 | 2 | 3 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Why not apply structured initialization to the value (V) component of self-attention? Additionally, how are the feed-forward network (FFN) layers, normalization layers, and projection layers initialized in the proposed method?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.Theoretical Foundation: The structured initialization method is based on solid theoretical analysis rather than just empirical results, providing a strong rationale for its effectiveness.\n\n2.Performance Improvements: The method consistently shows significant performance improvements over conventional ViT initialization methods in small-scale datasets, which is a notable achievement."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper addresses the challenge of applying Vision Transformers (ViTs) to new domains with small datasets, where Convolutional Neural Networks (CNNs) typically excel due to their inherent architectural inductive bias. The authors propose a novel approach that reinterprets CNN's architectural bias as an initialization bias for ViTs, termed \"structured initialization.\" Unlike traditional ViT initialization methods that rely on empirical results or attention weight distributions, this method is theoretically grounded and constructs structured attention maps. The paper demonstrates that this structured initialization enables ViTs to achieve performance comparable to CNNs on small-scale datasets while retaining the flexibility to perform well on larger-scale applications. The proposed method shows significant improvements over conventional ViT initialization across several small-scale benchmarks, including CIFAR-10, CIFAR-100, and SVHN, and maintains competitive performance on large-scale datasets like ImageNet-1K."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. In terms of innovation, the Transformer architecture was initially designed to minimize inductive bias. The author's attempt to incorporate structural biases from CNNs into the Transformer seems to go against the original intent of the Transformer design, which could be seen as a step backward for the evolution of Transformer models.\n\n2. The variety of experimental backbones is somewhat limited. It would be beneficial to conduct experiments with DeiT or Swin-Transformer to compare results. Furthermore, aside from classification tasks, it would be interesting to test the method on detection or segmentation tasks to further evaluate its versatility and effectiveness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- Does the structured initialization limit the model's ability to learn better representation? It would be better to provide some representation-level analysis using metrics like Centered Kernel Alignment (CKA) similarity [1].\n\n[1] Kornblith, Simon et al. “Similarity of Neural Network Representations Revisited.” ICML 2019."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper is easy to follow.\n- As much work has been trying to introduce convolutional design into the ViT model, this paper provides an interesting viewpoint that initializing the attention map as CNNs can also help to introduce the inductive bias and subsequentially improve the performance of trained ViT on small-scale datasets.\n- A theoretical explanation is provided to show the connection between the structural initialization in ViT and inductive bias in CNNs.\n- Some special designs like more heads and various initialization conv kernel sizes are adopted."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents an approach to initialize ViT through structured initialization of attention maps. By incorporating CNN-like inductive biases during initialization, it aims to combine the local spatial processing capabilities with the global relationship learning of attention mechanisms and take advantage of CNNs' inductive bias. Experimental results on several small-scale datasets validate its effectiveness."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The fundamental approach of forcing attention maps to mimic convolutional kernels seems to contradict the core advantage of attention mechanisms, as their advantage is to learn flexible, dynamic global relationships. It would be better to justify why structured initialization is preferred over simply incorporating convolutional blocks into the architecture, which would be a more straightforward solution. \n- It would be better to provide more analysis of why this approach is better compared to well-established solutions: \n - Transfer learning from large-scale pre-trained ViTs\n - Hybrid architectures combining convolution and attention\n- The optimization process required for initializing attention maps introduces additional computational overhead during training, and one needs to further choose the optimizer for initialization and conv kernel size (as well as other hyperparameters for different model sizes), which makes it impractical."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "N/A"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Why Impulse Filters? The reasoning behind choosing impulse filters (instead of other structured filters) for initializing attention maps could be explained in more detail. Are impulse filters the best possible choice, or could other filter types provide better generalization?\n\n2. How does this structured initialization affect the deeper layers of ViTs after finetuning? Figure 3 is quite interesting. Do the constraints imposed by impulse filters affect the deeper layers’ ability to fine-tune long-range dependencies? The paper does not discuss the long-term effects of this initialization on the network’s convergence."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Structured architecture achieves good performance across both small and large datasets, which demonstrates its scalability and flexibility."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces a novel method for improving the performance of ViTs when trained on small-scale datasets by incorporating structured initialization. The authors identify that ViTs struggle with small datasets compared to CNNs, which benefit from inherent inductive biases. By reinterpreting the architectural bias in CNNs as an initialization bias for ViTs, the authors propose a \"structured initialization\" method that results in structured attention maps for ViTs. The key contribution lies in the use of random convolutional impulse filters to guide the initialization process. The method is theoretically justified and empirically validated across serveral benchmarks. The paper demonstrates that structured initialization yields performance improvements on small datasets without compromising ViT’s flexibility on larger datasets."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The core argument of the method is that the convolutional structure can be transferred to the attention mechanism in transformers by initializing the attention maps with random impulse filters. However, this analogy between convolutional layers in CNNs and the attention mechanism in ViTs may be overly simplistic. CNNs' convolutional filters are spatially local and fixed in structure, while attention in ViTs is meant to capture long-range dependencies and is more flexible. This difference is crucial, and the method does not seem to fully address how imposing a rigid, convolution-like structure at initialization aligns with the flexibility that the attention mechanism needs. The convolution structure might limit the model's ability to learn long-range dependencies that are essential to the transformer. The claim that random impulse filters can replace learned convolutional filters is somewhat true for CNNs under certain conditions (like ConvMixers), but applying this to ViTs is more challenging. The attention mechanism is a more complex and dynamic operation compared to convolutions, and it’s unclear if the same approximation can hold. In practice, imposing a convolution-like structure might hinder the attention mechanism's ability to adapt during training.\n\n2. The paper proposes that impulse filters, combined with the softmax operation, can initialize the attention maps. The softmax function ensures that all outputs are non-negative, which is a crucial difference from convolutions, which can have both positive and negative values. Random convolutional filters may contain both positive and negative values, while softmax output does not. This inconsistency could cause issues. The authors acknowledge this (stating the filters must be positive), but they do not provide a deep exploration of how this might affect the quality or flexibility of the learned attention maps. 
Relying on impulse filters could reduce the model's expressivity, especially if the initialized filters are too rigid and only positive-valued patterns are learned initially.\n\n3. The authors propose an iterative optimization process to solve for the initial values of $Q_{init}$ and $K_{init}$ such that the resulting attention maps resemble impulse convolution filters. The optimization is based on a pseudo-input, which is generated from positional encodings rather than actual data. This could introduce an unwanted bias into the model's initial learning process. While using positional encoding as pseudo-input is an interesting idea, the paper does not adequately explore how different choices of pseudo-inputs affect the results or whether using actual training data for initialization would be a better alternative.\n\n4. The optimization process described is more computationally expensive (up to 10,000 iterations using Adam) compared with traditional initialization methods. This added complexity raises the question of whether the benefits of structured initialization outweigh the cost, especially given that the improvements on large datasets are marginal. There is no discussion of the computational cost vs. benefit of this method compared to standard initialization techniques.\n\n5. The use of random impulse convolution filters assumes that locality is always important, but this assumption may not hold in tasks where global context is critical. In CNNs, locality is useful because of the hierarchical structure of learned features. However, in transformers, the attention mechanism is specifically designed to handle long-range dependencies. By forcing the model to start with local dependencies (via impulse filters), the authors may inadvertently restrict the model's ability to learn global features early in training, leading to potential issues in tasks where global context is key from the beginning.\n\n6. 
The method relies on several hyperparameters, including filter size (3x3 or 5x5) and the number of iterations for optimization. However, the choice of these hyperparameters is not adequately justified or explored. The method's performance is likely sensitive to these parameters, but there is no thorough analysis of how variations in filter size or optimization parameters affect results. Given the complexity of the initialization process, these aspects should have been investigated in detail to ensure the method’s robustness."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024structured,\ntitle={Structured Initialization for Attention in Vision Transformers},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z9UBpl4pv5},\nnote={under review}\n}"
},
"abstract": {
"value": "The application of Vision Transformers (ViTs) to new domains where an inductive bias is known but only small datasets are available to train upon is a growing area of interest.\nHowever, training ViT networks on small-scale datasets poses a significant challenge. \nIn contrast, Convolutional Neural Networks (CNNs) have an architectural inductive bias enabling them to perform well on such problems. \nIn this paper, we propose that the architectural bias inherent to CNNs can be reinterpreted as an initialization bias within ViT. \nSpecifically, based on our theoretical findings that the convolutional structures of CNNs allow random impulse filters to achieve performance comparable to their learned counterparts, we design a ``structured initialization'' for ViT with optimization.\nUnlike conventional initialization methods for ViTs, which typically (1) rely on empirical results such as attention weights in pretrained models, (2) focus on the distribution of the attention weights, resulting in unstructured attention maps, our approach is grounded in a solid theoretical analysis, and builds structured attention maps.\nThis key difference in the attention map empowers ViTs to perform equally well on small-scale problems while preserving their structural flexibility for large-scale applications.\nWe show that our method achieves significant performance improvements over conventional ViT initialization methods across numerous small-scale benchmarks including CIFAR-10, CIFAR-100, and SVHN, while maintaining on-par if not better performance on large-scale datasets such as ImageNet-1K."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Transformer",
"Learning theory",
"Initialization",
"ConvMixer",
"Attention map"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/61ab1513c7c7ea6f852afb1ff0b031f4b6a2d15e.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning theory"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Structured Initialization for Attention in Vision Transformers"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
z9j7wctoGV | Non-parametric Kernel Relative Test for Machine-generated Text Detection | main | Active | Large language models;Machine-generated text detection;Relative test | alignment, fairness, safety, privacy, and societal considerations | 5;5;6 | 3;5;3 | 3;2;3 | 2;2;2 | 2;2;3 | 5.333333 | 3.666667 | 2.666667 | 2 | 2.333333 | -0.5 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "The method relies on several assumptions coming from empirical results (line 88, line 280-). The authors are encouraged to provide additional clarifications, both intuitive and theoretical, to further explain those assumptions."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The proposed relative test can reduce the false positive rate that has been observed in current two sample tests. \n\nIt also proposes a novel method to optimize kernels in relative tests for MGT detection, which significantly improving the effectiveness and efficiency of the detection process."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a non-parametric kernel relative test to detect machine generated text (MGTs) by testing whether it is statistically significant that the distribution of a text to be tested is closer to the distribution of human written text (HWTs) than to the MGTs’ distribution. It improves the current two-sample test-based detection methods, which assumes that HWTs must follow the distribution of seen HWT. The authors further develop a kernel optimization algorithm in relative test to select the best kernel that can enhance the testing capability for MGT detection."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "While this paper presents some innovative ideas, its significance appears limited. There has been some similar studies such as the ICLR'24 paper \"Detecting machine-generated texts by multi-population aware optimization for maximum mean discrepancy.\". \n\nThe proposed method requires the preparation of both the MGT and HWT datasets. In that case, simply comparing the method with Bino in the experiment is insufficient, and may even be unfair because Bino is a zero-shot method. It is essential for the authors to engage in a comprehensive experimental study, comparing their proposed method with various other detection algorithms.\n\nThe method leverages Roberta based GPT-2 as feature generator (line 200). Please explain the reason, and provide a comparison of this choice with other potential alternatives in both text explanation and experimental results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": {
"value": "No concerns"
},
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "refer to the weakness"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1). Basically, the idea has some novelty to some extend. \n\n2). The problem to address is important.\n\n3). The writting is somehow clear and easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This submission proposes to address the limitation of existing non-parametric LLM detector approach that tends to make mistakes in identifying HWTs that deviate from the\nseen HWT distribution. The authors suggest to employ non-parametric kernel relative test to address the issue. Basically, the idea has some novelty to some extend. The paper highly follow the method in paper “DETECTING MACHINE-GENERATED TEXTS BY\nMULTI-POPULATION AWARE OPTIMIZATION FOR\nMAXIMUM MEAN DISCREPANCY” from both methodology style and the experimental design."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "My major concerns are as follows:\n\n1). The limitation of these MMD-based approach is obvious. The appraoch by nature is not Zero-Shot aaproach. As the approach needs to prepare HWT and MGT in advance, I would say the approach is not training free by nature. Especially, the paper proposes kernel optimisation algorithm to select best kernel. This by nature is somehow “training”. Thus, the approach only compare times with Bino is not sufficient, as it also needs to compare time with training/classifier based approach (Table 2)\n\n2). The experiments are far from complete. Bascially, the datasets used are obviously too simple. Why not use the RAID datasets (in the following paper) that includes 22 different LLMs with different settings?\n“RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors, ACL’24”\n\n3). Need to validate the robustness of the detection algorithm under adversial attack. \n\n4). Need to provide non-english dattasets’ results to see the effectiveness on non-english settings.\n\n5). As mentioned in 1), the paper needs to provide comparisons to other classification based approaches as well other metric-based/ logits-based approaches. Such as:\nClassification-based: \nFEW-SHOT DETECTION OF MACHINE-GENERATED TEXT USING STYLE REPRESENTATIONS. ICLR’24. https://arxiv.org/pdf/2401.06712\nThreads of Subtlety: Detecting Machine-Generated Texts Through Discourse Motifs. ACL’24. https://arxiv.org/pdf/2402.10586. \nLogits-based:\nDetectGPT (ICML’23) \nFast-DetectGPT (ICLR’24) \nDNA-GPT [ICLR’24]\nDALD: Improving Logits-based Detector without Logits from Black-box LLMs. [NeurIPS’24]\n\n6). The paper also highly depends on the feature extractor, that is “fine-tuned a RoBERTa model on GPT-2-generated”. What’s the performance for other feature extractors? (need experiements). Need to explain the major different and advantage of proposed approach over training-based approaches.\n\n7). The literature review is far from sufficient. 
Pl refer to the recent surveys such as:\nA Survey of AI-generated Text Forensic Systems: Detection, Attribution, and Characterization\nA Survey on Detection of LLMs-Generated Content, EMNLP’24.\nOn the Possibilities of AI-Generated Text Detection"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Please see weaknesses."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The motivation of the paper is reasonable, and the problem it addresses has practical application value.\n\n2. The method proposed in the paper is simple and effective, with a coherent logic, and the description of the algorithm is very clear.\n\n3. The paper has a solid theoretical foundation, its arguments are logically sound, and the method is highly interpretable.\n\n4. The experiments indicate that the algorithm performs well."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper examines the issue of how to detect machine-generated texts. It suggests that previous methods are unable to identify human-written texts when there is a change in distribution. It proposes to employ non-parametric kernel relative test to detect machine-generated texts by testing whether it is statistically significant that the distribution of a text to be tested is closer to the distribution of human-written texts than to the distribution of machine-generated texts. A kernel optimization algorithm is proposed to select the best kernel that can enhance the testing capability for machine-generated text detection. Some experiments support the effectiveness of the method proposed in this paper."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The assumptions on which the method is based are too strong and do not align with reality. Firstly, there is significant overlap between machine-generated language and human language; they do not exist in completely separate domains. Additionally, the subspace assumption is overly idealized and lacks a solid foundation, which greatly undermines the paper's validity.\n\n2. The method proposed in the article resembles a general text anomaly detection approach and is not closely related to large language models or machine-generated language detection. It appears to be a universal solution for text anomaly detection rather than a targeted one, as the specific characteristics of the problem have not been discussed or reflected in the method's design.\n\n3. The comparative methods in the experiments are relatively few and not comprehensive enough. Many anomaly detection solutions could also address this issue, so more comparisons and discussions should be provided."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "A robust detection method for LLM-generated texts using relative test"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024nonparametric,\ntitle={Non-parametric Kernel Relative Test for Machine-generated Text Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=z9j7wctoGV},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent studies demonstrate that two-sample test can effectively detect machine-generated texts (MGTs) with excellent adaptation ability to texts generated by newer LLMs. \nHowever, the two-sample test-based detection relies on the assumption that human-written texts (HWTs) must follow the distribution of seen HWTs. As a result, it tends to make mistakes in identifying HWTs that deviate from the \\textit{seen HWT} distribution, limiting their use in sensitive areas like academic integrity verification.\nTo address this issue, we propose to employ \\textit{non-parametric kernel relative test} to detect MGTs by testing whether it is statistically significant that the distribution of \\textit{a text to be tested} is closer to the distribution of HWTs than to the distribution of MGTs. \nWe further develop a \\textit{kernel optimisation} algorithm in relative test to select the best kernel that can enhance the testing capability for MGT detection.\nAs relative test does not assume that a text to be tested must belong exclusively to either MGTs or HWTs, it can largely \\textit{reduce the false positive error} compared to two-sample test, offering significant advantages in practical use. \nExtensive experiments demonstrate the superior detection performance of our method, compared to state-of-the-art non-parametric and parametric detectors."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large language models",
"Machine-generated text detection",
"Relative test"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/a78b8223c94794a73437b6d9305efa8f9bc56f25.pdf"
},
"presentation": null,
"primary_area": {
"value": "alignment, fairness, safety, privacy, and societal considerations"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Non-parametric Kernel Relative Test for Machine-generated Text Detection"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zA0oW4Q4ly | Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training | main | Active | linear regions;activation regions;ReLU network;pretraining;network initialization | other topics in machine learning (i.e., none of the above) | 3;3;3;6 | 3;3;4;2 | 2;2;2;3 | 2;2;2;3 | 2;2;3;3 | 3.75 | 3 | 2.25 | 2.25 | 2.5 | -0.816497 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please address the Weaknesses."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "-\tThe paper presents detailed introduction and explanation of the proposed method, including how to construct the initialisation and how to calculate the gradient. The paper is compressive, well-structed, and easy to follow. \n-\tThe paper present experiments for cases of one dimension and high-dimension non-convex problems.\n-\tI find the paper makes conceptual contributions of proposing a new initialisation strategy."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper designs a novel training strategy: (1) reparameterise the network weights in to make it exhibit a number of linear regions exponential in depth; (2) train on the derived parameters for an initial solution; (3) refine the parameter by directly updating the underlying model weights. Experiments are given to support the method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "-\tThe experiments are not sufficient. The current experiments only cover quite shallow (three layers) ReLU neural networks on very simple tasks. It is unclear whether the results apply to complex scenarios, like deeper neural networks, transformer on fitting images, mining on text data, etc. Thus, the paper actually cannot help understand the success of deep learning.\n- No comparison is given with other initialisation methods.\n- The explanation of why this method works is not sufficient. This makes the method not convincing.\n-\tNo theoretical results are provided. This is particularly severe given the experiments are insufficient.\n\nThe paper looks like in an early stage with insufficient validation. I suggest the authors to do more following on this paper."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My review is as follows:\n\n- I think this paper brings up important issues with standard neural network training practices (such as using relu for activation or gradient descent, etc).\n\n- One thing that I think is potentially missing is the verification of the findings on a somewhat more realistic scenario. Could we expect the proposed method to outperform a standard neural network approach (e.g. a similar size relu network trained by SGD) when, say, predicting airline delays? Or, other more standard methods such as linear regression or decision tree?\n\n- To make the results potentially more broad, I wonder if the proposed strategy could be somehow applied to classification (perhaps can test it on a simple dataset such as \"two spirals\" dataset). If that's not straightforward, I think it'd still help me understand the contributions better if the authors can comment on the challenges.\n\n- Which of the findings of this paper could we expect to carry over to other non-linear activation functions such as sigmoid?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "Please see the Questions section."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper focuses on the expressivity of ReLU networks and argues that the standard neural network training approaches lead to models that cannot utilize all of the linear regions that a ReLU network has the potential to exhibit. The paper contains an approach to overcome this issue."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "Please see the Questions section."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "1. How are ReLU activations guaranteed to generate symmetric triangle waves? There are many possible compositions of ReLU activations, but only a subset of them are symmetric triangle waves. If the network is reparameterized using only the triangle wave basis functions as proposed in the paper, will it lose some flexibility and expressivity as it is not possible for the reparameterized network to create other shapes or patterns within each layer? \n\n2. Could you provide some intuitions and/or theory regarding why pretraining helps maintain the triangle generating structure and avoid eliminating activation regions as the network gets deeper?\n\n3. Does the proposed method improve convergence rate? Could you demonstrate it with some experiments and/or theory?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed reparameterization of ReLU networks is novel and interesting. Based on the observation that ReLU networks can generate symmetric triangle waves, the proposed approach introduces an approach to directly reprent the ReLU neurons (rather than weights) as asymmetric triangle wave basis function with learnable locations of the peaks within [0, 1].\n2. The proposed learning algorithm is simple and seems to be effective in training the reparameterized network for simple target functions as shown in the experiments.\n3. The 1D demonstrations and experiments are nice, which illustrate how the proposed method learns useful patterns for fitting the target function."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to reparameterize ReLU networks by parameterizing the peaks of the triangle wave basis functions generated by ReLU activations. This ensures that the number of linear regions grows exponentially with the depth of the network, which reduces the waste of representation capacities in randomly initialized ReLU networks. A learning algorithm is proposed to train the reparameterized ReLU network by first updating the derived parameters and then updating the actual weights underlying the model. The proposed method is empirically evaluated on 1D convex and 2D nonconvex target functions, which demonstrate its improved accuracy compared to randomly initialized networks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Theorem 3 seems to be an “only if” statement, and therefore setting $s_{i+1}$ according to Eq 4 is a necessary but not sufficient condition to guarantee differentiability of the reparameterized network.\n\n2. It is unclear how useful the proposed method is in practice, since it is mostly evaluated on simple 1D convex target functions. As this is not a theory paper, the proposed method should at least be evaluated on some semi-real datasets, (e.g., some of the common UCI datasets).\n\n3. Writing needs to be improved. The logic flow is a bit confusing. Also, several important concepts and building blocks are not well explained in the paper. For example, important definitions like definitions of linear regions, activation patterns, and activation regions should be stated in the paper or appendix.\n\n4. The following closely related works which analyze compositions and/or reparameterization of ReLU activations are not discussed in the paper.\n\n[1] K Eckle, J Schmidt-Hieber. A comparison of deep networks with relu activation function and linear spline-type methods. Neural Networks 2019.\n\n[2] DM Elbrächter, J Berner, P Grohs. How degenerate is the parametrization of neural networks with the ReLU activation function? NeurIPS 2019.\n\n[3] W Chen, H Ge. Neural characteristic activation analysis and geometric parameterization for ReLU networks. NeurIPS 2024.\n\n[4] B Hanin, D Rolnick. Complexity of linear regions in deep networks. ICML 2019.\n\n[5] M Raghu, B Poole, J Kleinberg, S Ganguli, J Sohl-Dickstein. On the expressive power of deep neural networks. ICML 2017.\n\n[6] D Rolnick, K Kording. Reverse-engineering deep ReLU networks. ICML 2020."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "* How does the proposed reparameterization strategy perform in complex, high-dimensional tasks, and what are the challenges in scaling this method effectively to more realistic datasets?\n\n* Have you considered testing this method on real-world datasets with more variability and noise? How robust is the technique in such scenarios, and are there any performance trade-offs when dealing with non-synthetic data?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "* The paper introduces a novel approach to reparameterize ReLU network weights, which forces the network to exhibit an exponential number of activation regions. This significantly enhances the expressivity of the network and addresses the inefficiencies of randomly initialized models, providing a more accurate and efficient approximation of nonlinear functions.\n\n* The proposed pretraining strategy allows the network to initialize with exponentially more linear regions, thus reducing the reliance on gradient descent to discover new activation patterns. This results in faster convergence and much more accurate function approximations, as demonstrated through numerical experiments, showing orders of magnitude lower errors compared to standard initialization methods."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel strategy to improve the efficiency of ReLU neural networks. It focuses on overcoming the limitations of randomly initialized networks, which tend to be unnecessarily large and inefficient in approximating simple functions. The authors introduce a reparameterization of network weights that ensures an exponential number of activation patterns, thus maximizing the linear regions in the input space. Their approach includes a pretraining stage using derived parameters that enhances the expressivity of the network before standard gradient descent is applied. This method shows significant improvement in approximating both convex and non-convex functions, with better accuracy and efficiency compared to traditional networks. The paper's findings demonstrate that networks initialized with exponential linear regions can capture nonlinearity more effectively, leading to more accurate function approximations. It concludes with potential extensions to multidimensional and non-convex functions, positioning this strategy as a promising tool for more efficient deep learning models."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "* While the paper demonstrates significant improvements in one-dimensional convex functions, the results for higher-dimensional functions and complex non-convex problems are not as thoroughly explored. The proposed method may face scalability challenges when extending to high-dimensional inputs, where the complexity of real-world tasks lies.\n\n* The introduction of a pretraining step with specific reparameterization adds complexity to the network training pipeline. This may make the approach more difficult to implement or integrate into standard deep learning workflows, especially for practitioners looking for more straightforward techniques.\n\n* The effectiveness of the method heavily relies on carefully derived theoretical constructs, such as triangle functions and their parameterization. While this works well in controlled scenarios, its practical robustness in more diverse and noisy real-world datasets is not fully tested or demonstrated."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024compelling,\ntitle={Compelling Re{LU} Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zA0oW4Q4ly},\nnote={under review}\n}"
},
"abstract": {
"value": "A neural network with ReLU activations may be viewed as a composition of piecewise linear functions. For such networks, the number of distinct linear regions expressed over the input domain has the potential to scale exponentially with depth, but it is not expected to do so when the initial parameters are chosen randomly. Therefore, randomly initialized models are often unnecessarily large, even when approximating simple functions. To address this issue, we introduce a novel training strategy: we first reparameterize the network weights in a manner that forces the network to exhibit a number of linear regions exponential in depth. Training first on our derived parameters provides an initial solution that can later be refined by directly updating the underlying model weights. This approach allows us to learn approximations of convex, one-dimensional functions that are several orders of magnitude more accurate than their randomly initialized counterparts. We further demonstrate how to extend our approach to multidimensional and non-convex functions, with similar benefits observed."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"linear regions",
"activation regions",
"ReLU network",
"pretraining",
"network initialization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b1d5768e314e0209fb7dd32ef59bcc30d2ecea7c.pdf"
},
"presentation": null,
"primary_area": {
"value": "other topics in machine learning (i.e., none of the above)"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/939a54b0ad6cc466df0748b147037fd513248f81.zip"
},
"title": {
"value": "Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zAogQOIphH | ControlSpeech: Towards Simultaneous Zero-shot Speaker Cloning and Zero-shot Language Style Control | main | Active | text-to-speech;style control;discrete codec model | applications to computer vision, audio, language, and other modalities | 3;5;5;5;8 | 4;5;4;4;4 | 2;2;3;3;4 | 2;3;3;2;4 | 3;2;3;3;4 | 5.2 | 4.2 | 2.8 | 2.8 | 3 | -0.0625 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "Scalability and Data Requirements: Have you explored how ControlSpeech performs with larger datasets or in low-resource settings? What are the minimum data requirements to achieve satisfactory performance?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The paper makes significant contributions to the field of speech synthesis:\n\n-> Proposes ControlSpeech, the first TTS system capable of simultaneous zero-shot speaker cloning and zero-shot style control.\n-> Introduces the SMSD module to address the many-to-many problem in textual style control, enhancing style diversity and accuracy.\n-> Develops the VccmDataset and ControlToolkit, providing valuable resources for the research community.\n-> Demonstrates through extensive experiments that ControlSpeech achieves state-of-the-art performance in several metrics.\n\nMajor strengths:\n\n1. Novelty: The integration of zero-shot speaker cloning with zero-shot style control addresses a significant gap in current TTS systems.\n2. Technical Depth: The use of a pre-trained disentangled codec space and the SMSD module shows a deep understanding of the challenges in TTS.\n3. Comprehensive Evaluation: The experiments cover a wide range of metrics and scenarios, including out-of-domain tests and ablation studies.\n4. Resource Contribution: Providing the VccmDataset and ControlToolkit enhances reproducibility and aids future research."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces ControlSpeech, a novel text-to-speech (TTS) system that achieves simultaneous zero-shot speaker cloning and zero-shot language style control. Unlike previous zero-shot TTS models that can clone a speaker's voice but lack style control, and controllable TTS models that can adjust speaking styles but cannot perform speaker-specific voice generation, ControlSpeech integrates both capabilities. It takes a speech prompt, a content prompt, and a style prompt as inputs, and employs bidirectional attention and mask-based parallel decoding to capture codec representations corresponding to timbre, content, and style within a discrete decoupling codec space.\n\nThe authors identify a many-to-many problem in textual style control, where different textual descriptions can correspond to the same audio style and vice versa. To address this, they propose the Style Mixture Semantic Density (SMSD) module based on Gaussian mixture density networks. The SMSD module enhances fine-grained partitioning and sampling of style semantic information, enabling more diverse and accurate speech style generation.\n\nTo evaluate ControlSpeech, the authors create a new dataset called VccmDataset and develop a toolkit named ControlToolkit, which includes source code, the dataset, and replicated baseline models for fair comparison. Experimental results demonstrate that ControlSpeech achieves comparable or state-of-the-art performance in terms of style controllability, timbre similarity, audio quality, robustness, and generalizability. Ablation studies confirm the necessity of each component in the system."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper compares ControlSpeech with several baselines but could include more recent models, especially in multilingual settings or other languages beyond English.\n2. While the VccmDataset is a valuable contribution, it may still be limited in scale compared to the datasets used in large-scale TTS systems. This might affect the generalizability of the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"Yes, Privacy, security and safety"
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. The core innovation of the paper is to use speech prompt and style prompt to control the timbre and style of synthesized speech simultaneously. For timbre cloning, the success of zero-shot TTS has made timbre cloning reach a high degree of similarity. For style control, some text style-guided style control methods have also been proposed. It is not difficult to combine the two, so the core is how to make the synthesized audio not have a style similar to the prompt audio, but have a style that is strongly related to the style prompt. The examples provided in the \"One timber with multiple styles\" section of the Demo page are particularly critical. However, the demo reflects a certain diversity of styles but does not reflect the obvious controllability of the style, that is, the correlation between the style and the style prompt. Samples 1-3 reflect the change of style, while samples 4-6 cannot hear the difference in style, and the audio style of 7-9 is not very related to the style described in the text. Therefore, is the style control, the core innovation of the paper, still not well resolved?\n2. Can you add experimental explanations on the correlation between the style representation of the synthesized audio and the style representation of the prompt audio, as well as the similarity between the style representation of the synthesized audio and the style representation obtained by sampling the style prompt through the SMSD module? This can help to prove that the style of the synthesized audio is more controlled by the style prompt rather than leaked by the prompt speech. For example, using different styles of speech of the same speaker as the prompt, and then using the same style prompt. Then test whether the style of the synthesized speech is consistent with the style prompt.\n3. The decoupling of content and style is achieved through FACodec in Naturalspeech 3 The codec used is Ycodec = concat(Ys, Yc), where Ys=concat(Yp, Ya). 
How are Yp, Ya, and Yc arranged? The information used to predict the i-th layer token is the cross-attention fusion of the previous i-1 layer tokens, the text and the global style feature, and the unmasked tokens of the i-th layer. Does the order of Yp, Ya, and Yc affect the prediction effect of the token? Will the unmasked Ys in the speech prompt cause leakage and affect the predicted Ys? Thus causing the style of the generated speech to be related to the prompt audio and reduce the correlation with the style prompt?\n4. This paper proposes a module called SMSD to solve the one-to-many problem of style control. The Ground Truth used by the training target of SMSD is Ys, which is extracted by the style extractor. Is the style extractor here trainable? What is the specific structure? Extracting the style of a long speech as a global style representation, is it not enough to represent more complex and style-changing prompts? Such as \"a woman starts to speak softly in a low-energy voice, and then becomes more and more emotional. Her voice gradually becomes higher and higher, and her speaking speed becomes faster and faster.\" Sampling from different gauss distributions and noise perturbation modules increases the diversity of styles. But will the consistency of the style of the synthesized speech with the style prompt be negatively affected? If we use the same style prompt to sample multiple times and synthesize speech. How about the similarity of the global style representation extracted from this synthesized speech?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Simultaneous Control: ControlSpeech's ability to simultaneously clone a speaker's voice and control style is an advancement in the TTS field.\n2. Zero-Shot Capabilities: It demonstrates competitive zero-shot voice cloning and style control, which are valuable for applications where training data for specific speakers or styles is limited.\n3. Disentangled Representation: By disentangling timbre, content, and style, ControlSpeech allows for more flexible and independent control over speech attributes.\n4. SMSD Module: The novel SMSD module effectively addresses the many-to-many problem in style control, enhancing both style accuracy and diversity."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "ControlSpeech achieves both zero-shot speaker cloning and zero-shot style control. Unlike previous TTS models that either mimic a speaker's voice without style control or control style without speaker-specific voice generation, ControlSpeech can independently control timbre, content, and style. It leverages a few seconds of audio prompt and a simple textual style description to fully clone a speaker's voice and adjust their speaking style. The system employs bidirectional attention, mask-based parallel decoding, and a Style Mixture Semantic Density (SMSD) module to address the many-to-many problem in textual style control."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Insufficient innovation: There are numerous related works on zero-shot TTS and style controllable TTS. ControlSpeech combines the two tasks. The architecture used is also based on zero-shot TTS. The innovation of the method and architecture is average.\n2. There is no sufficient analysis or proof of the decoupling effect of style control with or without speech prompt. As there is style information in the speech prompt. In other words, does zero-shot TTS have the ability of style control itself? Is there any test of the style control ability of zero-shot TTS? At the same time, ControlSpeech accepts speech prompt and style control at the same time. The final control effect of the two types of information on the style is not analyzed. Will there be information leakage (that is, the style in the speech prompt will also affect the final style)? In addition, in the timbre cloning experiment, the timbre similarity of ControlSpeech is not optimal, and in the style controllability experiment comparison, the pitch control effect of ControlSpeech is not optimal. Is this also because style and timbre are not completely decoupled and affect each other?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Does the disentanglement primarily stem from the codec generator, the codec itself, or another module?\n2. Is the proposed method capable of controlling other attributes, such as age and gender, similar to PromptTTS? I do not notice any evaluation metrics based on these attributes. \n3. How does ControlSpeech handle cases where there is a contradiction between the style text prompt and speaker timbre prompt?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. This paper tackles a critical issue in style control by enabling independent modification of each attribute without affecting the others.\n2. The introduction of the Style Mixture Semantic Density Sampling method, along with an analysis of noise perturbation, effectively addresses the many-to-many challenge in controllable speech generation.\n3. The paper is well-organized, easy to follow, and includes clear and informative figures."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel model, ControlSpeech, which enables independent and simultaneous control over timbre, content, and style attributes, demonstrating strong zero-shot voice cloning and style control capabilities. Additionally, the authors introduce a Style Mixture Semantic Density Sampling (SMSDS) method to address the many-to-many challenge in controllable speech generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The disentanglement mechanism seems to be largely based on the FACodec. It would be beneficial to provide more detailed comparisons with NaturalSpeech3 to clarify the distinct contributions of this work.\n2, A dedicated Related Work section would help contextualize this work by providing a clearer comparison with previous approaches, such as PromptTTS2, rather than relying solely on the brief introduction in Section 3. Please do not move the related work section into appendix. \n3. Equations should be formatted using LaTeX for better readability and precision, rather than presented as plain text.\n4. Additional experiments are needed to verify the independent modification of each attribute. For instance, modifying only the speech rate while evaluating speaker similarity could help determine if timbre remains unaffected."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. Can FACode ensure complete decoupling between content, style, and timbre?\n2. The authors should add AudioBox to the baseline systems."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. ControlSpeech can simultaneously control speaker timbre and speaking style.\n2. The author proposes the SMSD module to solve the many-to-many mapping problem in textual style control.\n3. The paper is clearly written"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces ControlSpeech, a text-to-speech system that can clone a speaker's voice and control speaking style with just a short audio prompt and style description. ControlSpeech uses mask-based parallel decoding and proposes the SMSD module for better style control. A toolkit is provided for validation. The system shows comparable or state-of-the-art performance, and ablation studies confirm its components' necessity."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper mentions that ControlSpeech is the first model that can control timbre and style simultaneously. However, Meta's paper AudioBox in 2023 can already achieve control of these two aspects. This is an act of overclaiming.\n2. The samples from the \"One timber with multiple styles\" section on the demo page exhibit only minimal differences among different styles. This makes the style control ability claimed in the paper less persuasive."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "- How is the style exactor optimized within the SMSD module, and does L_SMSD update the style exactor?\n \n- Considering global style representations are in R^d, why use Q-K-V attention modules instead of conditional normalization layers for fusion? Additionally, can global style representations be derived from audio style prompts for testing?\n \n- How to obtain style prompts for timbre cloning tasks?\n \n- Is it possible to provide demos showcasing the system's performance on free-form style prompts rather than label-generated style prompts?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The proposed method achieves comparable or state-of-the-art performance in timbre cloning and style control tasks.\n \n2. The proposed SMSD and noise perturbation module effectively alleviate one-to-many issues in style control tasks\n \n3. The dataset and model will be open-sourced."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents an innovative zero-shot TTS system that allows for the control of speaking styles through textual prompts. Specifically, it adopts a pretrained decoupling codec to generate different speech attributes based on content prompts, timbre prompts, and style prompts in the corresponding representation space . Moreover, it introduces a style mixture semantic density (SMSD) module to mitigate the many-to-many mapping issue in style control. The experimental results demonstrate the proposed method exhibits comparable or state-of-the-art performance in timbre cloning and style control tasks. In addition, both the dataset and the model will be open-sourced."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The primary weakness is its reliance on label-generated style descriptions for both training and testing, which hinders the assessment of the system's robustness against free-form style prompts.\n\nOther weaknesses include:\n\n1. Incomplete experimental validation:\n \n - Missing gender accuracy metrics in style control tasks.\n - Considering the use of FACodec, it might be beneficial to include NaturalSpeech 3 as a baseline for voice cloning tasks.\n - The paper claims that each Gaussian distribution in the mixture density networks represents a specific speaking style (section 3.3, L271), but this lacks textual or experimental support.\n\n2. Inadequate description of experimental details (refer to the question for specifics).\n \n3. Presentation issues:\n \n - Related work should be presented in the main text, not the appendix.\n - In section 3.1, L158 states \"the dashed box represents frame-level features,\" but the text encoder's output should be phoneme-level, not frame-level.\n - In section 3.2.2, L208 mentions \"the aligned text representations,\" which is ambiguous regarding whether \"text\" refers to content prompts or style prompts.\n - The paper overclaims in certain instances, such as stating \"ControlSpeech is the first TTS model capable of simultaneously performing zero-shot timbre cloning and style control.\" In fact, AudioBox supports textual style prompts , and NaturalSpeech 3 supports audio style prompts.\n - In Figure 1, the input of the content encoder and style encoder within FACodec should be audios rather than texts.\n - In Figure 2(a), the SMSD Module is labeled both as frozen and trainable, which can lead to confusion.\n\n4. Demo issues:\n \n - Using ground-truth wavs as voice prompts in \"Style control for unseen speakers\" and \"Control of unseen styles\" risks leaking style information."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024controlspeech,\ntitle={ControlSpeech: Towards Simultaneous Zero-shot Speaker Cloning and Zero-shot Language Style Control},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zAogQOIphH},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we present ControlSpeech, a text-to-speech (TTS) system capable of fully cloning the speaker's voice and enabling arbitrary control and adjustment of speaking style, merely based on a few seconds of audio prompt and a simple textual style description prompt. Prior zero-shot TTS models only mimic the speaker's voice without further control and adjustment capabilities while prior controllable TTS models cannot perform speaker-specific voice generation. Therefore, ControlSpeech focuses on a more challenging task—a TTS system with controllable timbre, content, and style at the same time. ControlSpeech takes speech prompts, content prompts, and style prompts as inputs and utilizes bidirectional attention and mask-based parallel decoding to capture codec representations corresponding to timbre, content, and style in a discrete decoupling codec space. Moreover, we analyze the many-to-many issue in textual style control and propose the Style Mixture Semantic Density (SMSD) module, which is based on Gaussian mixture density networks, to resolve this problem. The SMSD module enhances the fine-grained partitioning and sampling capabilities of style semantic information and enables speech generation with more diverse styles. To facilitate empirical validations, we make available a controllable model toolkit called ControlToolkit, which includes all source code, a new style controllable dataset VccmDataset, and our replicated competitive baseline models. Our experimental results demonstrate that ControlSpeech exhibits comparable or state-of-the-art (SOTA) performance in terms of controllability, timbre similarity, audio quality, robustness, and generalizability. Ablation studies further validate the necessity of each component in ControlSpeech. Audio samples are available at https://controlspeech.github.io/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"text-to-speech",
"style control",
"discrete codec model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/43347e6f60776ee6ebb35d912475fac888ec9103.pdf"
},
"presentation": null,
"primary_area": {
"value": "applications to computer vision, audio, language, and other modalities"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/5cae84da88e136eb28f5fe0375e0279d5bce9769.zip"
},
"title": {
"value": "ControlSpeech: Towards Simultaneous Zero-shot Speaker Cloning and Zero-shot Language Style Control"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zAyS5aRKV8 | EgoSim: Egocentric Exploration in Virtual Worlds with Multi-modal Conditioning | main | Active | Controllable video generation;Egocentric video prediction;World model | generative models | 5;6;6 | 4;3;3 | 2;4;3 | 2;3;3 | 2;3;1 | 5.666667 | 3.333333 | 3 | 2.666667 | 2 | -1 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1.1 What is the difference between EgoSim (SVD) and EgoSim in Table 1? \n\n1.2 For Epic-Field experiments, what are the input conditions (text, image, or both) in Table 1?\n\n1.3 Regarding LoRA usage (Line 437): Why was an additional LoRA necessary for Epic-Field when the model was already fine-tuned? This seems redundant and needs justification. Did the authors use LoRA to fine-tune the pre-trained model?\n\n1.4 How were camera poses obtained/annotated for RealEstate and Epic-Field datasets?\n\n1.5 How is the training and testing dataset split? What are the respective dataset sizes?\n\n1.6 No training detail is provided. Please mention your setup such as earning rates, optimization parameters, batch sizes, number of training iterations, hardware specifications, and training time.\n\n2. How is K* and V* calculated in practice?\n\n3.1 L265 \"attachted\"\" revise my review for this paper.\n\nThese clarifications would significantly improve the paper's reproducibility and technical clarity."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The authors integrate multiple conditioning types in a coherent framework, demonstrating a novel adaptation of epipolar attention from 3D generation to video generation. The introduction of the SEAL and CI2V-adapter shows thoughtful consideration of the challenges in multi-modal video generation. The evaluation demonstrated on both static (RealEstate) and dynamic (Epic-Field) datasets, supported by both qualitative and quantitative improvements over existing methods. The extensive ablation studies further strengthen the technical contributions."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper presents a novel video diffusion architecture capable of handling multiple conditioning inputs, including image, text, and camera poses, in a unified framework. The work makes meaningful technical contributions to controllable video generation, though there are areas where clarity could be improved."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The paper suffers from several clarity issues that should be addressed. The experimental setup and results sections (3.2, 3.3) lack clear organization and the data preparation is not detailed throughout the paper, making it difficult to fully understand the implementation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses section. Overall I think the task addressed is important and interesting. Most of the simple cases look fine. I suggest the authors add more comparisons with more recent methods, i.e., viewcrafter."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "+ The multi-modality control of video generation is an interesting topic. The proposed method fills in the gap in precise camera-conditioned image-to-video generation. \n+ Most of the past camera-conditioned video generations trained on static 3D scene datasets, i.e. Realestate, DL3DV. The proposed method provides an effective practice to repurpose video understanding benchmarks for generation and to some extent shows a way to resolve the data scarcity of dynamic scene data with camera pose annotations.\n+ Balancing different control signals is an intuitive challenge in multi-modal guided video generation. The proposed CI2V adapter is a simple and effective strategy to handle it."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper tackles video generation with multi-modal condition signals: text, image, and camera pose. It introduces several model designs including employing epipolar attention to the spacetime domain for precise camera motion control and a CI2V adaptor that balances text and vision guidance based on camera information. Further, it repurposes the EPIC Fields dataset as the new dynamic scene dataset with camera annotations. Extensive experiments show the effectiveness of each proposed module."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- Given that Realestate is a large-scale dataset with 100 times the number of scenes compared to Epic-Field, how do you prevent overfitting your generations to static scenes?\n- It would be interesting to compare the proposed method with Viewcrafter [M1] in terms of the preciseness of camera controls in a static 3D scene. \n- The camera trajectories in the results are quite simple and mostly object-centric, it would be better to infer with longer, more complex trajectories in open scenes. \n- [Minor] The examples in 'Interacting with the World' contain too many noticeable artifacts, e.g., hand disappearance, hand merging into objects, etc. \n\n[M1] Yu, Wangbo, et al. \"Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis.\" arXiv 2024."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "There are 3 major concerns (details above):\n- Several values are missing in Tab.1 which makes it difficult to understand the trends. It'd be helpful to provide details on these missing values and EPIC-Fields experiment setting.\n- The text mentions efficiency benefits in 2 places and disentangling text & image features. It'd be useful to verify if this is indeed the case.\n- Several aspects of the text need further clarifications."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- This work focuses on improving the controllability of video diffusion models with multiple conditioning factors: combinations of camera ego-motion, text, and image, which is important from a practical use perspective.\n- The idea of incorporating epipolar constraints into the attention mechanism is simple and intuitive.\n- Experiments on RealState (Tab.1, Fig.4) and EPIC-Fields (Tab.1, Fig.5) datasets show the effectiveness of the proposed approach over existing methods.\n- Ablations in Tab.2 and visualizations on the website provide a better understanding of the capabilities of the proposed approach."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work focuses on improving the controllability of video diffusion models for egocentric exploration with camera ego-motion + text + image as conditioning. The proposed framework incorporates epipolar constraints to better focus on the relevant parts in the attention mechanism, referred to as Spacetime Epipolar Attention (SEAL). It is then combined with existing text-to-video and image-to-video diffusion models. Experiments on RealState and EPIC-Fields datasets show the effectiveness of the proposed approach over existing methods."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- In Tab.1, several values are missing, which makes it difficult to compare different models. There is some justification in the text (L426-429). It'd be helpful to have more details:\n - Why can't MotionCtrl and EgoSim (SVD) be evaluated on EPIC-Fields? Both are I2V methods and EPIC-Fields contains both frames and camera egomotion.\n - For CameraCtrl + SparseCtrl, why can't TransErr and RotErr be computed? Since ground truth is available, the camera trajectory from the video diffusion output needs to be computed. Is it because COLMAP optimization does not converge on the outputs? Is there some other reason?\n - It'd also be useful to have T2V and I2V settings on EPIC-Fields to better understand the trends across different datasets. Since text description is available for EPIC-Fields (Fig.5), is there any reason to not use these settings?\n- There are 2 mentions of efficiency benefits in the text. It'd be helpful to verify these benefits quantitatively, in terms of memory usage and train/inference time.\n - L205-206: The use of epipolar attention introduces additional sparsity, enabling us to utilize memory-efficient operations.\n - L236-238: employ pixel unshuffle Shi et al. (2016) to adjust the size while preserving as much fine-grained positional information as possible. This approach is sufficient and also helps to save computational resources. \n- L245-246 mentions: 'a particular patch in a specific frame should be explained either by text or by the context frame, but not both'. It'd be interesting to see if this is indeed the case. One way to do this is to check the values of $\\mu$ in Eq.4, which should be close to 0 or 1.\n- Some experimental details are missing:\n - L288-290: details on how EPIC-Fields is processed.\n - L323: what does 'more difficult random trajectories' mean? 
how are they sampled?\n - Are the baselines re-trained in the same setting or used in a zero-shot manner?\n- It'd be helpful to clarify these aspects:\n - L19-20: while textual information compensates for necessary spatiotemporal structures, it often intrudes into already observed parts of the scene\n - L25: issue of textual intrusion into observed areas\n - L109-110: override the interference of other features in video generation\n - L112-114: leverages camera movement information to further assist in clearly defining the boundaries between the text and visual elements\n - L189-191: need a control method that operates relatively independently of these features"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": {
"value": "@inproceedings{\nanonymous2024egosim,\ntitle={EgoSim: Egocentric Exploration in Virtual Worlds with Multi-modal Conditioning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zAyS5aRKV8},\nnote={under review}\n}"
},
"abstract": {
"value": "Recent advancements in video diffusion models have established a strong foundation for developing world models with practical applications. The next challenge lies in exploring how an agent can leverage these foundation models to understand, interact with, and plan within observed environments. This requires adding more controllability to the model, transforming it into a versatile game engine capable of dynamic manipulation and control. To address this, we investigated three key conditioning factors: camera, context frame, and text, identifying limitations in current model designs. Specifically, the fusion of camera embeddings with video features leads to camera control being influenced by those features. Additionally, while textual information compensates for necessary spatiotemporal structures, it often intrudes into already observed parts of the scene. To tackle these issues, we designed the Spacetime Epipolar Attention Layer, which ensures that egomotion generated by the model strictly aligns with the camera’s movement through rigid constraints. Moreover, we propose the CI2V-adapter, which uses camera information to better determine whether to prioritize textual or visual embeddings, thereby alleviating the issue of textual intrusion into observed areas. Through extensive experiments, we demonstrate that our new model EgoSim achieves excellent results on both the RealEstate and newly repurposed Epic-Field datasets. For more results, please refer to https://egosim.github.io/EgoSim/."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Controllable video generation",
"Egocentric video prediction",
"World model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/2d91836691f0f29e8a2654c5499430971ab6d9a9.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "EgoSim: Egocentric Exploration in Virtual Worlds with Multi-modal Conditioning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zAzzMOaisF | LLMs for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements | main | Active | Large Language Models;Language-conditioned policy;Offline policy learning;Decison Making Agent;Goals generalization;Domain generalization | foundation or frontier models, including LLMs | 3;3;5;6 | 4;3;4;3 | 2;2;3;3 | 1;2;3;2 | 2;3;4;3 | 4.25 | 3.5 | 2.5 | 2 | 3 | -0.19245 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1. Following Weakness 2, is it possible for this method to exhibit performance generalization across different datasets?\n1. It appears that Figure 1 is not referenced in the main text.\n1. Line 1220, the citation is not displayed correctly."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- The paper uses a substantial amount of symbolic notation to make the descriptions clearer.\n- The method proposed in this paper addresses the need for real-time interaction feedback or large amounts of offline data.\n- The paper demonstrates the potential performance of the method when provided with greater computational power."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a policy learning method under minimal data requirements, enabling LLMs to act as both cheap data enhancers and flexible generalizers. This approach addresses the traditional RL methods' dependency on large amounts of offline data or real-time interactions. Experimental results confirm the effectiveness of the method and demonstrate a scaling phenomenon between the method and computational resources."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The pipeline proposed in this paper requires multiple stages, which imposes certain demands on computational resources.\n1. Even though the paper provides an explanation for selecting BabyAI as the sole benchmark, relying on a single benchmark to validate the method's effectiveness is insufficient. Given the claimed general applicability of the method, more robust and diverse experimental results should be presented to strengthen the evidence."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1) While the method shows generalization capabilities, the extent to which it can handle various novel goals and states in practice remains unclear. More thorough testing in diverse scenarios is needed.\n2) Performance on small models may be poor."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1) TEDUO effectively leverages LLMs as both data enhancers and generalizers, thus reducing the need for expensive labeled data.\n2) Focus on unlabeled, pre-collected data sets with fewer assumptions about their quality or origin.\n3) Experiments show that TEDUO proves its ability to improve and generalize compared to the baseline and achieves better performance in zero-shot Settings, making it suitable for new scenarios."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents a novel approach, TEDUO, for training autonomous agents. TEDUO addresses the challenge of training agents that can generalize to new goals and states while requiring minimal labeled data. The approach leverages large language models (LLMs) as data enhancers and policy generalizers, which allows the use of easy-to-obtain, unlabeled datasets. The approach uses the knowledge of pre-trained large language models to achieve generalization ability, and experiments show that the method has better results and generalization ability than the baseline. However, there are some limitations: First, the method relies on the knowledge of pre-trained large language models, and if the capability of large language models is not good enough, the effect of the method may be significantly decreased. In addition, the experimental scene is relatively simple. Although the experiment proves that the method can generalize, it may not have enough generalization ability for the complex scene in reality."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) The paper assumes that environment states can be adequately represented textually. This may restrict the approach's applicability to more complex, real-world scenarios where high-dimensional or continuous representations are required.\n2) The success of the TEDUO framework heavily relies on the LLMs' pre-trained knowledge. If the domain is not well-represented in the LLM's training data, performance may suffer, limiting generalizability to specialized fields.\n3) The focus on simpler environments like BabyAI may not reflect the challenges of real-world applications, where environments can be more dynamic and less predictable.\n4) The approach introduces significant computational demands, especially for LLM-based state abstraction and generalization."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please see questions above."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "- The authors discuss a very interesting problem in reinforcement learning: generalization with minimal training data. This is also a general concern for RL.\n\n- Using LLM to augment trajectory data is novel and effective. It has the potential to propose many diverse training data without setting up different environments.\n\n- The performance of TEDUO is effective on BabyAI (table 1) and the authors also explore generalizability to compositional tasks."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper introduces TEDUO, a training pipeline aimed at enhancing language-conditioned policy learning in autonomous agents while minimizing data requirements. \n\nTEDUO leverages large language models (LLMs) to address these challenges by employing a three-step process:\n\n**Data Enhancement**: Using LLMs to perform state abstraction and hindsight labeling on an unlabeled dataset of state-action transitions, allowing for the creation of labeled datasets.\n\n**Policy Learning:** Applying offline RL algorithms to learn optimal policies based on the enhanced datasets for a finite set of training goals.\n\n**Generalization**: Fine-tuning a base LLM to distill knowledge about environment dynamics and optimal actions, enabling the model to generalize to previously unseen states and language commands."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The pipeline for data enhancement may rely on the symbolic nature of the tasks. For example, BabyAI is a easy to define symbolic task where changing the shape or color of objects and doors would result in new tasks. However, this would be more difficult to enhance data for more complex environments, e.g. VirtualHome and Alfred. Could the authors provide elaboration on how the pipeline could be generalized to a more complex environment. \n\n2. LLMs are inherently good at simple generalizations (changing only names or objects), however the generalization of reasoning and complex planning are often more challenging. Could the authors benchmark the performance following Table 1 settings on BabyAI more advanced task levels (babyai provides a detailed task levels where some tasks are much more challenging) ? It would be helpful to emphasize what kind of generalization ability does TEDUO possesses. \n\n3. Step 2 is not clearly written. How is an \"abstract\" MDP solved? For example, for solving a abstract task \"picking up the object\", what is the solution to the MDP. Also, how to determine if two tasks are the same type of abstract MDP or if they should be labelled as different ?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Isn't there any benchmark applicable other than BabyAI? I felt like the entire method is highly correlated to the domain knowledge of the BabyAI benchmark. Even for the BabyAI benchmark, there might be more updated solutions (I am not sure about this. I apologize if not) to be compared. Finally, it is a marginal point, but it would be better to introduce your target task or domain at the beginning of your paper. You keep to using terms such as 'autonomous agent', 'state space', and 'reinforcement learning' and it sounds like your target task is very highly abstracted MDP space. However, you also mention 'natural language-written goals' and discuss how to map them into the neural representation. Maybe introducing your domain and task thoroughly at the beginning of the introduction can enhance its readability."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "Almost every step of the end-to-end process is automatic. For example, a pre-trained LLM works as a state labeler, so a vast amount of unlabeled data is now usable. Analogously, a state space compressing program is also written through an LLM, which minimizes the human expert's effort. Above all, goal-generalization ability is quite notable. Their experimental results show relatively less performance drop during the test phase compared to the baseline. The ablation study also strongly indicates the capability of hierarchical RL; it learns against a variety of sub-tasks and combines the solutions to reach the ultimate goal."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "To train a goal-conditioned autonomous agent, the authors propose four existing main challenges: unlabeled data, limited exploration, unknown data collection policy, and goal generalization. They solve these problems mainly through LLM. In particular, they exploit an LLM to label the unsupervised RL dataset and compress it through an LLM-written program. Once every trajectory is compressed into the latent space, they learn the policy and distillate their information into a pre-trained LLM. The presented method outperforms the baseline, and adequate ablation studies support their intuition."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The first drawback is its scalability. Their proposed method can be summarized into three main steps: data labeling and compressing, policy training through offline RL, and knowledge distillation. They generate the Python program from the LLM and use that program to extract the minimal information from the original state space. This step totally relies on the characteristics of the environment. In fact, the LLM could not successfully compress the state space unless it was a discrete grid world. I can not imagine how to scale up the LLM-based state compression approach to much more complex environments. In the offline RL step, they used tabular Q-learning. Tabular iteration is one of the most powerful methods to optimize an MDP policy, but its memory consumption hardly limits its scalability. In addition, more strong baselines are required to verify the capability of their method. They mainly compared the vanilla LLM and finetuned LLM, and actually, it is trivial that finetuned LLM significantly outperforms the vanilla LLM. There is also another baseline that works without LLM, but that was published with the benchmark itself in 2018, which is too outdated these days."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We train LLM agents as language-conditioned policies without requiring expensive labeled data or online experimentation. The framework leverages LLMs to enable the use of unlabeled datasets and improve generalization to unseen goals and states."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024llms,\ntitle={{LLM}s for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zAzzMOaisF},\nnote={under review}\n}"
},
"abstract": {
"value": "To develop autonomous agents capable of executing complex, multi-step decision-making tasks as specified by humans in natural language, existing reinforcement learning approaches typically require expensive labeled datasets or access to real-time experimentation. Moreover, conventional methods often face difficulties in generalizing to unseen goals and states, thereby limiting their practical applicability. This paper presents TEDUO, a novel training pipeline for offline language-conditioned policy learning. TEDUO operates on easy-to-obtain, unlabeled datasets and is suited for the so-called in-the-wild evaluation, wherein the agent encounters previously unseen goals and states. To address the challenges posed by such data and evaluation settings, our method leverages the prior knowledge and instruction-following capabilities of large language models (LLMs) to enhance the fidelity of pre-collected offline data and enable flexible generalization to new goals and states. Empirical results demonstrate that the dual role of LLMs in our framework—as data enhancers and generalizers—facilitates both effective and data-efficient learning of generalizable language-conditioned policies."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Large Language Models",
"Language-conditioned policy",
"Offline policy learning",
"Decison Making Agent",
"Goals generalization",
"Domain generalization"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/b0b15ac4db414cfe8cf60c95ce8dba6e12509037.pdf"
},
"presentation": null,
"primary_area": {
"value": "foundation or frontier models, including LLMs"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/25f8ffc0129b05a6fc8cad67e00e9394e8b693c5.zip"
},
"title": {
"value": "LLMs for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zB6uMznFuZ | TimeAutoDiff: Generation of Heterogeneous Time Series Data via Latent Diffusion Model | main | Active | Time series data;Tabular data;Heterogeneous;Diffusion model;VAE;Generative model | generative models | 1;3;3;5 | 1;4;2;3 | 1;3;3;3 | 1;3;2;3 | 1;4;3;3 | 3 | 2.5 | 2.5 | 2.25 | 2.75 | 0.632456 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": ">The sampling time column ranks models by their speed, with lower numbers indicating\nfaster sampling\n\nPlease add details.\n\n\n> We set the dimensions of output features in this way as we used\nthe mean-squared (MSE), binary cross entropy (BCE), and cross-entropy (CE) in the Pytorch package.\n\nIt looks disconnected from the main text. Could you clarify?\n\n>L: 249 More details are deferred in the Appendix J.\n\nAppendix J does not provide additional details on the selection of a decoder architecture.\n\n> LL: 171-173\n\nDoes the current model generate data for textual metadata? If so, it would be great to provide more experiments/demonstrations. If not – it may be worth removing this example."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- **Writing** The paper is well-written and well-structured.\n- **Methodological contribution** The paper presents a clear method similar to diffusion methods in other domains. The technical details are discussed in detail. It seems straightforward to reproduce the results.\n- **Research problem** The presented problem is essential and timely. In particular, synthetic time series generation is critical for foundation model development, as the authors mention in the discussion."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes a novel method, TimeAutoDiff, to generate time series. It combines a variation autoencoder and DDPM. The method empirically outperforms prior methods on several datasets and allows for conditional time series generation."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "See Questions for detailed comments and suggestions.\n\n- **Comparison to prior methods** It is worth discussing more the key differences to other diffusion-based models such as Diffusion-ts (https://github.com/Y-debug-sys/Diffusion-TS), and TimeDiff (https://openreview.net/pdf?id=ESSqkWnApz). Additionally, it is worth expanding metrics, such as consistency and diversity, as discussed in the TSGM framework (https://github.com/AlexanderVNikitin/tsgm).\n\n- **Limitations and fairness**. Limitations and fairness are not discussed enough. In particular, can generated data be biased with respect to conditional metadata? \n\n- **Design choices**. It would be great to have more details on the design choices of the decoder and $\\epsilon_\\theta$. Do other architectures significantly affect the performance of the method?"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 1
},
"contribution": {
"value": 1
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Why are most experiments on regression datasets? How can these validate heterogeneous properties of “tabular” time series data?\n\nThe experiments lack convincing evidence. Please include additional summary statistics of the generated data and compare with baselines. For instance, stock data (nasdaq100) are known for key summary statistics like volatility and moving averages. Demonstrating that the proposed method closely matches these statistics would provide stronger support for its efficacy. Can you provide a generated example of your trained nasdaq100 model?\n\nWhy is Section 4.2 titled “Utility Guarantees” when there is no theoretical analysis provided?\n\nThese weaknesses are not meant to be exhaustive. I believe they are sufficient to show that this paper is not ready for publication."
},
"rating": {
"value": 1
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 1
},
"strengths": {
"value": "NA"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes to use latent diffusion models to generate synthetic time series tabular data.\nThey propose to combine VAE with DDPM and term the proposed method TimeAutoDiff.\nThey claim many advantages of this proposal and provide some experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "**Originality.** DDPM or VAE for time series generation is not new. Most techniques used are also from prior works.\n**Soundness.** It’s well-known that VAE is considered 1-step DDPM. I find the motivation and reasoning in `line 144-154` very unconvincing. It’s still unclear why DDPM is good at handling time series data, why VAE, as a special case is DDPM, can bring more to the table. Also, I don’t think citing unpublished and prior venue’s rejections as SOTA method is convincing, e.g., TSGM in `line 156`. \n**Clarity.** The captions of Fig 2,3 provide zero information. Many notations are introduced without definition, e.g., x_cont in `line 192`. Overall clarity can be further improved.\n**Significance.** The significance is hard to parse due to limited clarity. At best, I find it lacking due to its assembly nature. Additionally, I question the correctness of the experimental evaluation."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. I would like to inquire about the rationale behind employing RNN as the encoder for time-series data and capturing temporal dependencies between features across various timestamps. The temporal dependencies between features can also be captured by the denoising network of the diffusion model. Therefore, is it necessary to utilize an RNN to capture temporal dependencies when encoding the data?\n\n2. I would encourage the author to provide a more comprehensive discussion and emphasize the values or practical significance of the tasks the model is evaluated on, specifically unconditional generation and time-variant meta-data conditioned generation."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The work has the following notable strengths:\n1. The works did comprehensive evaluation on unconditional and conditional generation tasks, taking both lower-order and higher-order statistics into account. The proposed approach shows superior performance over existing approaches on both tasks across the datasets considered in the work. In addition the work also did comprehensive over different hyper-parameters variations and ablations to reveal the impact of different components to the model.\n2. The presentation quality of the work is good and the proposed method of the work is easy to follow."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The proposed work presents a framework of latent diffusion models for heterogeneous time series data. The framework utilizes an encoder within a variational autoencoder (VAE) to project both discrete and continuous features of time series data into a continuous latent space. Subsequently, a diffusion model is employed to model the distribution of the latent codes of the time series data. The encoder employed in the VAE is an RNN, while the diffusion model is DDPM[1]. Notably, the denoising network within DDPM employs cyclic encoding to incorporate time-stamp information into the time series and adopts a bi-directional RNN architecture. The proposed latent diffusion model can also be conditioned on meta-data for incorporating conditional information. Evaluation results on unconditional generation demonstrate that the proposed approach generates high-quality synthetic data and outperforms existing approaches.\n\nReferences\n\n[1] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising diffusion probabilistic models.” Advances in neural information processing systems 33 (2020): 6840-6851."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "The work exhibits several significant weaknesses:\n1. The work lacks innovative methodology. Similar ideas for tabular data, which arguably encompass time-series data, have been explored in existing works [1]. Despite some crucial technical differences between the two works, including the design of the denoising network $\\epsilon_\\theta$ and the approach to the latent space, the high-level framework of both works is both a latent diffusion model that projects structured data to a latent space and uses a diffusion model to model the distribution of latent codes.\n2. The majority of the experiments are limited to unconditional generation of time-series data, and it is unclear how the proposed model can be readily adapted to tasks with more practical applications, such as forecasting and imputation.\n\nReferences\n\n[1] Zhang, Hengrui, et al. “Mixed-type tabular data synthesis with score-based diffusion in latent space.” arXiv preprint arXiv:2310.09656 (2023)."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "1, How can the model be made more robust to high-dimensional feature spaces without relying heavily on latent space reduction?\n\n2, What modifications or extensions to the framework would be necessary to effectively handle heterogeneous datasets that include both continuous and categorical features?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper is easy to read and the experiments are comprehensive."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper introduces TimeAutoDiff, a time series tabular data synthesizer that combines the Variational Auto-Encoder (VAE) and Denoising Diffusion Probabilistic Model (DDPM). It effectively manages heterogeneous features and enhances data generation fidelity and temporal dependencies. The model integrates a latent diffusion framework with a specialized VAE, improving performance in high-dimensional settings. It supports applications such as missing data imputation, privacy, and interpretability, and demonstrates superior results compared to existing models in generating synthetic time series data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One concern is its current inability to generate interpretable results. In many practical applications, especially in high-stakes fields like finance and healthcare, stakeholders must clearly understand how models make decisions. The lack of interpretability can lead to skepticism and reluctance to adopt the model, as users may hesitate to trust a system whose inner workings are opaque. This limitation could hinder the model's applicability in scenarios where understanding the rationale behind generated data is crucial for decision-making; Another point is the pure focus on continuous data: While the method demonstrates good performance with continuous data, its methods are primarily tailored for this data type. This focus raises concerns about the model's effectiveness when dealing with heterogeneous datasets that include categorical features. The inability to seamlessly integrate and process diverse data types could restrict the model's usability in real-world applications where such heterogeneity is common; another point is the dependence on latent space reduction: The paper notes a performance drop when the feature sizes increase, which necessitated a reduction in the latent space dimension. This reliance on dimensionality reduction to maintain performance raises concerns about the model's scalability and robustness. If the model's effectiveness is sensitive to the dimensionality of the latent space, it may struggle to perform well in high-dimensional settings without careful tuning."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We develop a time series tabular synthesizer, combining VAE and diffusion model."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024timeautodiff,\ntitle={TimeAutoDiff: Generation of Heterogeneous Time Series Data via Latent Diffusion Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zB6uMznFuZ},\nnote={under review}\n}"
},
"abstract": {
"value": "In this paper, we leverage the power of latent diffusion models to generate synthetic time series tabular data.\nAlong with the temporal and feature correlations, the heterogeneous nature of the feature in the table has been one of the main obstacles in time series tabular data modeling. \nWe tackle this problem by combining the ideas of the variational auto-encoder (VAE) and the denoising diffusion probabilistic model (DDPM).\nOur model named as \\texttt{TimeAutoDiff} has several key advantages including \n(1) \\textit{\\textbf{Generality}}: the ability to handle the broad spectrum of time series tabular data with heterogeneous, continuous only, or categorical only features; \n(2) \\textit{\\textbf{Fast sampling speed}}: entire time series data generation as opposed to the sequential data sampling schemes implemented in the existing diffusion-based models, eventually leading to significant improvements in sampling speed, \n(3) \\textit{\\textbf{Time varying metadata conditional generation}}: the implementation of time series tabular data generation of heterogeneous outputs conditioned on heterogenous, time varying features, enabling scenario exploration across multiple scientific and engineering domains.\n(4) \\textit{\\textbf{Good fidelity and utility guarantees}}: numerical experiments on eight publicly available datasets demonstrating significant improvements over state-of-the-art models in generating time series tabular data, across four metrics measuring fidelity and utility; \nCodes for model implementations are available at the supplementary materials."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"Time series data",
"Tabular data",
"Heterogeneous",
"Diffusion model",
"VAE",
"Generative model"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/70355c99c3fa9eaad9d10d5cfce39e22357b4ffb.pdf"
},
"presentation": null,
"primary_area": {
"value": "generative models"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": {
"value": "/attachment/439960b643b51dc01bc9c24539492a56f6daab17.zip"
},
"title": {
"value": "TimeAutoDiff: Generation of Heterogeneous Time Series Data via Latent Diffusion Model"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zBbZ2vdLzH | Joint Graph Rewiring and Feature Denoising via Spectral Resonance | main | Active | GNNs;Rewiring;Denoising;Spectral Resonance;cSBM | learning on graphs and other geometries & topologies | 5;5;6;6;8 | 3;2;3;3;3 | 3;3;3;3;4 | 2;2;3;2;4 | 2;3;3;3;4 | 6 | 2.8 | 3.2 | 2.6 | 3 | 0.456435 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 4
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 4
},
"primary_area": null,
"questions": {
"value": "could timing comparisons of the proposed methods, more explicitly and beyond just complexity comparison, be provided to observe computational benefits/limitations?"
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 4
},
"strengths": {
"value": "The primary novelty of JDR lies in its combined optimization of graph structure and node feature alignment, enhancing data quality by maximizing alignment between the spectral components of the graph and feature matrices. This unified approach addresses both structural and feature-level noise simultaneously, which is rare among existing methods that typically target these types of noise separately. A key concept introduced is “spectral resonance,” where optimal alignment between the graph’s leading eigenvectors and the feature matrix’s singular vectors is achieved, providing a measurable target for denoising and boosting node classification performance. To manage the challenging non-convex optimization, the paper presents an iterative heuristic based on alternating optimization, which simplifies alignment maximization and enables efficient processing of large, real-world graph datasets with multiple classes. Another advantage of JDR is that it outputs a modified graph in the preprocessing stage, enhancing interpretability and reusability for subsequent GNN applications—a contrast to end-to-end methods that alter the graph only during training. Lastly, JDR’s design allows it to adapt effectively to both homophilic and heterophilic graphs, expanding its applicability beyond previous methods, which often work best with specific types of graph structures, such as those with high homophily.\n\nThe paper is well-organized, systematically guiding the reader through the challenges, methodology, and outcomes of the proposed Joint Denoising and Rewiring (JDR) algorithm. It begins with an introduction that effectively frames the problem of noisy graph structures and features, establishing the need for an approach that addresses both in unison. The methodology section details the concept of spectral resonance and the iterative optimization heuristic that drives JDR, using clear mathematical definitions and visual aids to support understanding. 
Following this, a comprehensive experimental section validates the algorithm's effectiveness across synthetic and real-world datasets, offering detailed comparisons with state-of-the-art methods and highlighting the algorithm’s robustness across homophilic and heterophilic graph types. Finally, the paper provides an insightful discussion on related works, situating JDR within the broader landscape of graph preprocessing techniques, before concluding with a summary of contributions and potential directions for future research. Overall, the structure flows logically, with each section building on the previous one to reinforce the practical relevance and theoretical underpinnings of JDR.\n\nAddressing the limitations of the proposed approach is appreciated\n\nThe illustrations provided in the work, both in the body and abstract, are informative and well done as well as qualitatively informative.\n\n The use of the appendix is also well done in providing explicit discussion of the proposed algorithm \n\nThe paper has a robust experimental section with compelling results working in favor of the approaches proposed"
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents Joint Denoising and Rewiring (JDR), a novel algorithm designed to simultaneously address noisy graph structures and node features, thereby enhancing node classification performance in graph neural networks (GNNs). The work proposed aims to address the issue of graph datasets that suffer from noise due to missing or spurious edges and misaligned node features. JDR counters this by iteratively refining both the graph structure and features to maximize the alignment between their leading eigenspaces, achieving what the authors call “spectral resonance,” which improves data representation for downstream GNN performance. The algorithm operates in three iterative steps: first, it decomposes the graph’s adjacency matrix and node feature matrix into eigen and singular vectors; then, it aligns the leading eigenvectors of the graph with the primary singular vectors of the feature matrix, creating a more cohesive representation; and finally, it synthesizes the updated graph for subsequent GNN tasks. The authors argue this iterative approach effectively enhances graph quality and feature clarity, especially in datasets exhibiting mixed homophily and heterophily, which are often challenging for GNNs. The proposed work provides extensive experiments show that JDR consistently outperforms existing rewiring methods on synthetic and real-world datasets, yielding better alignment and improved node classification accuracy. Additionally, JDR’s preprocessing method produces an enhanced graph that supports interpretability and reusability, unlike other methods that modify graphs solely during training. The paper highlights JDR’s scalability and versatility across different graph types, underscoring its potential in handling noisy real-world data and establishing it as a valuable advancement in preprocessing techniques for GNNs."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "JDR depends on the availability of informative node features for effective rewiring and denoising, which restricts its applicability to settings with substantial node feature information; this reliance could limit its effectiveness in networks that primarily encode structural data. The algorithm’s design is also tailored to node-level tasks, making it less suited for graph-level tasks like graph classification, where global structure matters more than node-specific features"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 2
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "My main concern with this work is that although the proposed method looks to be sound, there is really no theorectical ground on its effectiveness in general when applied to a wider range of datasets. Also, the datasets used are very small and I'm not sure how useful and how scalable the method is in practice. It would significantly strengthen the work if the authors can either theoretically show how much improvement can be obtained using their method or empirically demonstration the effectiveness on a wide range of datasets, not just on common datasets that we have already seen too many papers claiming their effectiveness over others. I believe the contributions of such work are rather limited."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The proposed method is general and can be applied to a wide range of GNNs for downstream classification tasks. \n\n2. The proposed method jointly considers graph rewiring and feature denoising.\n\n3. The proposed method improves the performance of a numer of GNNs on a number of popular datasets in the experiments."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper proposes to jointly consider graph rewiring and feature denoising to improve the performance of GNNs for downstream classification tasks. The proposed method improves the performance of a numer of GNNs on a number of popular datasets in the experiments."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The solution is somewhat incremental and its novelty is low, although it appears to be sound.\n\n2. There is no theoretical guarantee on the degree of improvement using JDR.\n\n3. The experiments were conducted on a small number of datasets that cannot be considered as an evidence that the proposed method is really effective."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "- As the computation overhead of the proposed method seems large, can it be applied to large graphs, e.g. ogbn-products?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "- This paper uses cSBMs as a key framework to build intuition about the graph rewiring and denoising problem, providing the theoretical foundation for the alignment target.\n- The empirical verification using synthetic data is clear.\n- The method is evaluated on both homophilic graphs and heterophilic graphs, showing its generalizability."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper considers the graph rewiring problem and feature denoising problem to improve the graph node classification task.\nThis paper utilizes cSBM modeling to optimize both graph structures and graph features to make them have better alignment, then proposes the JDR method to effectively optimize the alignment function."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proposed method involves graph structure matrix and graph feature matrix decomposition, which can be computationally challenging on extremely large real-world graph data, limiting the practicability of the proposed method.\n- As the SVD decomposition can have time complexity of $O(N^3)$, it may be not accurate to say the proposed JDR has time complexity of $O(N)$"
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1. In Figure 2, why do the two subgraphs compare w.r.t. amplitude and normalized amplitude, respectively?\n2. Regarding section 2.2, can you explain more about how the structure is revealed parametrically? In line 137, $|\\lambda|$ is considered the signal-to-noise ratio, when it can also be considered the degree of homophility?\n3. The remark below definition 1, ''the information about the labels is indeed contained in the leading vectors of V and U'', is unclear.Because usually the decomposition of a matrix gives a support space, the weights of each support, and the coefficient of reconstructing the matrix using the support, which leads to the insufficiency of the leading support vectors to be representative.\nThis should be relevant for W2."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The overall structure is well organized and easy to understand, especially the diagram demonstrations are very helpful.\n2.The problem formulation is clearly presented.\n3.Experiments are comprehensive and convincing."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This work presents a structural rewiring algorithm, with consideration of structural alignment with node features, supported by theoretical investigation on cSBM model."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The theoretical framework seems convincing to me, while how the real data sets fit the parametric model needs further investigation, otherwise the heuristic of denoising features might not be applicable.\n2. More explanation is needed on the insights of where the rationale of denoising comes from, e.g. lines 159-190.\n3. Since the eigendecomposition is applied on adjacency, the comparison of training and inference time costs between the vanilla GNN and with the add-on of the proposal is needed."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "See above."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. The method is well inspired and can achieve better performance than several other graph rewiring methods.\n2. An impressive amount of experiments were implemented."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The work proposes a method (JDR) to jointly perform graph rewiring and feature denoising. The method is derived from cSBM to maximize alignment between the eigenspaces of features and the graph. The authors tested the performance of the methods in various datasets and showed the advantages of the proposed JDR method."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The paper should better discuss previous works on GNN versus MLP, and the connection between heterophily and graph noise. [1-4] investigated the phenomena that MLP sometimes perform better than GNN (especially GCN) for heterophilic graphs. In particular, [4] proposed a metric that well correlates with empirical GNN performance, and also discussed the connection between heterophily and graph noise. A better discussion (acknowledgement?) of [4] is needed due to its high relevance to this paper.\n\n2. The comparison between JDR and graph rewiring methods seems not perfectly fair as the authors also mention themselves. Moreover, if JDR indeed achieves optimal denoising, then the graph may no longer be needed in later training. This (JDR(X) + MLP) seems uncovered by the ablation settings. \n\n3. The authors should check if the rewired graph structures degenerate, which may be a natural consequence of combining matrix factorization scheme and thresholding. \n\n4. The hyperparameter tuning was not performed for the GNN backbone. This is questionable as the optimal backbone hyperparameter is anticipated to vary across different rewired graphs (and potentially different denoised features). This may matter because the performance gap between JDR and other methods seems small in most cases.\n\n5. Some paragraphs are unclear and not readable. In particular, it is unclear what “findings” in the sentence “The Gaussian adjacency equivalence conjecture (Shi et al., 2024) suggests a generalization of the findings to the true binary adjacency case.” is referring to. The whole section (as well as the proof) needs to be polished.\n\n[1] Ma, Yao, et al. \"Is Homophily a Necessity for Graph Neural Networks?.\" International Conference on Learning Representations.\n\n[2] Gomes, Diana, et al. \"When Are Graph Neural Networks Better Than Structure-Agnostic Methods?.\" I Can't Believe It's Not Better Workshop: Understanding Deep Learning Through Empirical Falsification. 
2022.\n\n[3] Luan, Sitao, et al. \"Revisiting heterophily for graph neural networks.\" Advances in neural information processing systems35 (2022): 1362-1375.\n\n[4] Dong, Mingze, and Yuval Kluger. \"Towards understanding and reducing graph structural noise for GNNs.\" International Conference on Machine Learning. PMLR, 2023."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We introduce joint denoising and rewiring (JDR)—an algorithm to jointly rewire the graph and denoise the features, which improves the performance of downstream node classification GNNs."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024joint,\ntitle={Joint Graph Rewiring and Feature Denoising via Spectral Resonance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zBbZ2vdLzH},\nnote={under review}\n}"
},
"abstract": {
"value": "In graph learning the graph and the node features both contain noisy information about the node labels. In this paper we propose joint denoising and rewiring (JDR)—an algorithm to jointly rewire the graph and denoise the features, which improves the performance of downstream node classification graph neural nets (GNNs). JDR improves the alignment between the leading eigenspaces of graph and feature matrices. To approximately solve the associated non-convex optimization problem we propose a heuristic that efficiently handles real-world graph datasets with multiple classes and different levels of homophily or heterophily. We theoretically justify JDR in a stylized setting and verify the effectiveness of our approach through extensive experiments on synthetic and real-world graph datasets. The results show that JDR consistently outperforms existing rewiring methods on node classification using GNNs as downstream models."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"GNNs",
"Rewiring",
"Denoising",
"Spectral Resonance",
"cSBM"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/774642a17a5f5392d306d859611200881b8071f0.pdf"
},
"presentation": null,
"primary_area": {
"value": "learning on graphs and other geometries & topologies"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Joint Graph Rewiring and Feature Denoising via Spectral Resonance"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zBgiCWCxJB | SSOLE: Rethinking Orthogonal Low-rank Embedding for Self-Supervised Learning | main | Active | self-supervised learning;orthogonal low-rank embedding | unsupervised, self-supervised, semi-supervised, and supervised representation learning | 5;6;6;8 | 5;3;3;4 | 2;3;3;3 | 2;3;3;3 | 3;3;3;3 | 6.25 | 3.75 | 2.75 | 2.75 | 3 | -0.207514 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 5
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.What is the relationship between the three limitations, the two challenges, and the proposed method?\n\n2.How does Eq. (2) decouple the low-rank enforcement and high-rank enforcement?\n\n3.The title of the paper does not mention “multi-view learning”, but why do the authors discuss “multi-view learning” throughout the paper (especially in experiments)? Besides, since there are several multi-view self-supervised learning methods, why not compare the proposed with these works as well?"
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1.The paper is with a good clarity by providing a deep analysis on the problem when applied OLE in SSL. The two challenges to be solved are well discussed.\n2.The authors provided a detailed theoretical analysis to illustrate the research problem as well as the developed method.\n3.Sufficient experiments are performed, and the results demonstrate the work’s effectiveness."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper presents orthogonal low-rank embedding for self-supervised learning (SSOLE) by decoupling low/high-rank enforcement on positive/negative pairs and low-rank enforcement via deviation matrices."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The main idea of the paper of employing low/high-rank enforcement to adjust the distance between contrastive sample pairs has been proposed and discussed in several previous works of supervised learning (like LDA). As a result, the paper makes incremental contribution by extending this idea to self-supervised learning, which makes its novelty somehow limited. Besides, some related works are not discussed in this paper.\n2.The relationship between the three limitations, the two challenges, and the proposed method could be further discussed. \n3.The authors consider decoupling the low-rank enforcement and high-rank enforcement with Eq. (2), which needs more explanation to analyze how Eq. (2) achieves this aim.\n4.It is mentioned that achieves competitive performance across SSL benchmarks without relying on large batch sizes, memory banks, or dual-encoder architectures, which lacks detailed verification or discussion."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "1.In the case of weakly supervised datasets, such as when the dataset contains noisy labels, does this method have adaptability?\n2.Regarding the description in the appendix A.4 that the product of diagonal matrix P and its transpose is the identity matrix, why are all the diagonal elements of the resulting matrix either 1 or -1? Could you provide a more detailed explanation?"
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.The paper offers comprehensive experimentation, strengthening the validity of the presented approach.\n2.The method is described in detail, enhancing its reproducibility and understanding.\n3.In terms of experiments, this paper evaluate the adaptability and robustness of the SSOLE framework through transfer\nlearning to various linear classification tasks and demonstrates its superior performances."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper primarily focuses on applying OLE to SSL and propose a novel method that integrates Orthogonal Low-rank Embedding into the Self-Supervised Learning paradigm. The authors mainly addresses two key challenges in applying OLE to SSL: the enforcement of orthogonality with an infinite number of classes and the limitations of the nuclear norm in distinguishing between positive and negative correlations. By decoupling low-rank and high-rank enforcement and applying constraints on feature deviations, SSOLE adapts OLE for self-supervised tasks. The paper demonstrates how SSOLE adapts OLE for self-supervised tasks and showcases its superior performance in various learning scenarios while maintaining computational efficiency."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.The writing of this paper needs to be improved.\n2.Some experiments are not sufficiently thorough, such as when evaluating the performance of the method in semi-supervised learning, the datasets used are somewhat limited, and the experimental results are not particularly striking. If additional experiments on other datasets could be conducted to demonstrate the method's effectiveness, it would be more convincing."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See \"Weaknesses\"."
},
"rating": {
"value": 6
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1. Methodological Innovation: The paper introduces a low-rank bias matrix and a decoupled constraint mechanism based on the integration of OLE and SSL, addressing the issue of representation collapse that traditional methods struggle to resolve in unsupervised scenarios.\n\n2. Theoretical and Experimental Support: The effectiveness of the SSOLE framework is supported by both theoretical analysis and experimental validation, demonstrating strong performance across various datasets, particularly in settings with limited computational resources.\n\n3. Broad Applicability: In image classification tasks, SSOLE shows good generalization ability, adapting to different datasets and further enhancing the robustness of feature representation."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper points out that traditional Orthogonal Low-Rank Embedding (OLE) methods face significant challenges in self-supervised learning (SSL), mainly due to representational collapse caused by an excessively large number of classes, and the difficulty of distinguishing between positively and negatively correlated features under low-rank constraints. To address these issues, SSOLE decouples the low-rank and high-rank constraints and applies low-rank constraints to the deviation matrix of features. This approach effectively prevents representational collapse and enhances the ability to differentiate between positive and negative sample pairs. Experimental results demonstrate that SSOLE achieves excellent performance across various SSL benchmarks, showing good scalability and efficiency, especially without requiring large batch sizes and complex architectures."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. Some proofs are omitted and not specific enough, such as\n\n i) Why is the probability of $|cos(\\theta_{ij})|>1/d$ is larger than $1/d$? By what, Chebyshev inequality?\n\n ii) Some statements are as follows: \"Since the rows of $\\tilde{V}$ are nearly orthogonal, the nuclear norm is dominated by the sum of the row norms.\" Why can it hold? The readers need more detailed explanations.\n\n2. Some proofs are unnecessary. THEOREM 3.3 states that the nuclear norm is unitarily invariant, a property that is very common in linear algebra textbooks.\n\n3. What does $\\approx$ mean? Does it mean that the equation holds with high probability, or that the values are close? If so, how close are the values? The paper needs to give a clear definition."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "See weaknesses."
},
"rating": {
"value": 8
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "1.\tThis paper is well-written with clear motivations.\n2.\tIt is technically sound with comprehensive theoretical analysis.\n3.\tExperimental results demonstrate the effectiveness of the method."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper proposes a self-supervised orthogonal low-rank embedding (SSOLE), which integrates OLE into the SSL paradigm. It addresses two challenges in applying OLE to SSL: the difficulty of enforcing orthogonality in the presence of an infinite number of classes, and the nuclear norm’s inability to distinguish between positive and negative correlations. By decoupling low-rank and high-rank enforcement and applying low-rank constraints to feature deviations, SSOL adapts OLE for self-supervised and other tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1.\tThe parameter $\\lambda$ controls the balance between intra-class compactness and inter-class separability enforcement. It will be better to analyze its influence to the final performance.\n\n2.\tThe authors enforce intra-class low-rank property via deviation matrix instead of original feature matrix, it is also suggested to investigate its effectiveness by ablation study."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We address key challenges of applying orthogonal low-rank embedding to self-supervised learning."
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024ssole,\ntitle={{SSOLE}: Rethinking Orthogonal Low-rank Embedding for Self-Supervised Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zBgiCWCxJB},\nnote={under review}\n}"
},
"abstract": {
"value": "Self-supervised learning (SSL) aims to learn meaningful representations from unlabeled data. Orthogonal Low-rank Embedding (OLE) shows promise for SSL by enhancing intra-class similarity in a low-rank subspace and promoting inter-class dissimilarity in a high-rank subspace, making it particularly suitable for multi-view learning tasks. However, directly applying OLE to SSL poses significant challenges: (1) the virtually infinite number of \"classes\" in SSL makes achieving the OLE objective impractical, leading to representational collapse; and (2) low-rank constraints may fail to distinguish between positively and negatively correlated features, further undermining learning. To address these issues, we propose SSOLE (Self-Supervised Orthogonal Low-rank Embedding), a novel framework that integrates OLE principles into SSL by (1) decoupling the low-rank and high-rank enforcement to align with SSL objectives; and (2) applying low-rank constraints to feature deviations from their mean, ensuring better alignment of positive pairs by accounting for the signs of cosine similarities. Our theoretical analysis and empirical results demonstrate that these adaptations are crucial to SSOLE’s effectiveness. Moreover, SSOLE achieves competitive performance across SSL benchmarks without relying on large batch sizes, memory banks, or dual-encoder architectures, making it an efficient and scalable solution for self-supervised tasks."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"self-supervised learning",
"orthogonal low-rank embedding"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/5b60614063053904e11e9d88f974b63f61fb216e.pdf"
},
"presentation": null,
"primary_area": {
"value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "SSOLE: Rethinking Orthogonal Low-rank Embedding for Self-Supervised Learning"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |
|||||||
zBrjRswpkg | Foundation of Scalable Constraint Learning from Human Feedback | main | Active | RLHF;RL;Constraint Learning;Theoretical Analysis | reinforcement learning | 3;3;5;5 | 4;3;3;4 | 2;3;3;2 | 2;3;3;2 | 1;1;2;3 | 4 | 3.5 | 2.5 | 2.5 | 1.75 | 0 | [
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 3
},
"primary_area": null,
"questions": {
"value": "Please refer to the weaknesses part for the questions. Here are a few more questions:\n\n1. There are other methods in the literature that have been provided for solving C-CMDPs (e.g., Hao et al, Chow et al -- risk constrained RL with percentile risk criteria), so why have the authors introduced a new approach and \n\n2.In 4.1, why are the weights \"w\" the same for RHS and LHS."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1. The paper is well written and mostly easy to understand. \n2. The results about where constraint functions can be learnt or not is quite interesting. \n3. The experimental results are quite detailed."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper aims to provide theoretical results in the context of Constraint Learning from Human Feedback. There are a few impossibility results that have been provided on which type of constraints (expected cost constraint, chance constraint) can be learnt in the context of which models (E-CMDP, C-CMDP)."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1. The notion of impossibility in learning is not explained. Does it imply you can never find a cost function that is safe? \n2. There are other types of CMDPs that have not been referenced. Only expected cost constraint and value at risk cost constraint have been provided. There is hard constraint and Conditional Value at Risk constraint that have previously been introduced by Hao et al. [Reward Penalties on Augmented States for Solving Richly Constrained RL Effectively]. \n3. Why is learning of cost function important to safety? Can't systems be made safer without explicitly learning a cost function\n4. How are the experimental domains decided? Only a few environments from Safety Gym are considered. Frozen lake is a simpler problem setting. Why were other environments not considered. \n5. In the second paragraph of introduction, it is mentioned that CLHF has significant works. However, in the experiments there are not too many baselines provided. Can I please check why that is the case?\n6. There needs to be more intuition provided before utilising formal description. Otherwise, it gets difficult to keep track of all the symbols."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "Is assumption 1 actually mild? My interpretation of $\\nu(\\tau) = 0$ is\nthat no constraints are violated. Assumption 1 says any such policy has\nthe same preference distribution, which seems bizarre. Consider for\nexample a CartPole swingup domain where the constraint function is\npositive when the velocity of the cart is above a threshold, and zero\notherwise. Then if the system is initially stationary (pole in the\ndownward position), there is no preference between an agent that does\nnothing and one that swings up the pole successfully.\n\nWhat is $\\xi$ supposed to represent on line 722? Is this the constraint\nviolation probability written as $\\delta$ in the main text?\n\nIn MDP in the proof of proposition 3, is $H=1$?\n\nIn propositions 3 and 4, I think you want probabilities instead of\nindicators.\n\nI'm having difficulty following the proof of Proposition 3. How did you\ncome up with that upper bound on $\\hat{c_1}(s_1, a_2, s_2)$? If you're\njust going for existence, can't you find much simpler settings of\n$\\hat{c}$?\n\nIn Proposition 4, what is the purpose of the indicator? The clause\ninside the indicator is deterministic.\n\nIf Lemma 1 is not identical to Russo and Van Roy's Lemma 4, it should\nhave a proof. What is actually the difference between these two? It\nlooks to me like you're using $\\psi_k$ to be the cumulant generating\nfunction of $U_k$ instead of $Z_k$, is that it? Anyway, I think there\nshould be a proof here.\n\nRegarding Figure 5, is it known that existing ICRL methods struggle in\nthis environment? Are there no baselines that can solve this\nenvironment? That sounds suspicious. It also very much weakens the claim\nthat your method achieves superior performance—I suspect some strong\nbaselines are missing."
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The problem setting studied in this paper is interesting and relevant.\nThe notion of characterizing which forms of human feedback enable which\ntypes of safe RL is very neat, and so are the theoretical results\ntowards this end."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper critically analyzes two forms of human feedback for\nconstraint inference; namely *trajectory feedback* wherein humans\nclassify whether given trajectories are safe, and *policy feedback*,\nwherein humans classify whether decision policies are safe. Notably, the\npaper also considers two forms of constrained MDPs (chance-constrained\nMDPs and expected constrained MDPs), and analyzes which forms of human\nfeedback can induce constraint inference enabling solutions to each type\nof constrained MDP. Finally, the paper presents a new policy gradient\nmethod for joint constraint inference and control, and demonstrates its\neffectivess at safe policy optimization in some common benchmark tasks."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "One major weakness of the paper is that it is not well-written; examples\nare given below and in the Questions section. As a consequence, I found\nit fairly difficult to follow the paper—I had to write my own document\nof notes and math to fill in several missing or confusing pieces. The\nclarify of the paper could have been improved tremendously, and I\nencourage the authors to do so.\n\nThe part of the proof of Proposition 4 that you omitted should not have\nbeen omitted, and this proof can be explained better. The claims on ine\n816-820 are confusing. Particularly, it says \"policy feedback does not\nprovide no information on $c_1$\". Firstly, it should say \"policy\nfeedback does not provide any information…\". More importantly, I don't\nnecessarily find this accurate. What the equation on line 818 implies is\nthat for any policy feedback dataset will have the form\n$\\{(\\pi_k, 0)\\}$, so there is no signal for identifying a safe policy.\nThis property is directly a consequence of the stucture of $c_1$, though\n(i.e., there are certainly alternative cost functions that will not have\nthis property here). Specifically, it is saying that the costs are\nsufficiently close to $0$ (or \"symmetric\" enough) that no policy is\ndangerous. This does give information about $c_1$. However, it does not\nprevent us from inferring that $\\hat{c}_1\\equiv 0$, so that the\nconstraint function induced from policy feedback does not restrict the\nclass of safe policies, which can result in the deployment of a\nC-CMDP-unsafe policy (by being a strict superset of the set on line\n809). I think this example / narrative would be very helpful.\n\nAgain, Proposition 2 should have a more complete proof, and perhaps\nshould be motivated more. What is the oracle here, and why should we\nassume access to one? I'm assuming what is really meant here is that\nwith unbounded preferences, we can learn safe policies. 
I think there is\na substantial amount missing in the proof here:\n\n1. The policy feedback case is not immediately obvious to me. Maybe\n it's obvious to you as the authors that have been thinking a lot\n about this setting, but as a reader, I would like verification that\n my thought process is actually what you had in mind; at the very\n least, this would help me understand the claim of the proposition.\n Am I correct in my interpretation that, in the policy feedback case,\n you can theoretically enumerate policies, query the oracle until you\n get feedback $0$, and then this certifies a safe policy by\n definition (so you can return it)?\n2. For the trajectory feedback case, is my interpretation correct that\n you again enumerate policies like in the policy feedback case,\n enumerate trajectories under each policy (given knowledge of\n $P_1, P$), and return a policy once you find one that the oracle\n only returns $1$ on at most $p|\\mathcal{T}|$ trajectories? And why\n do you need knowledge of $r$?\n\nAnother issue with the paper is what I believe to be a lack of baselines\nwith respect to the empirical analysis. Notably, the authors had no\nresults for any baselines except for in the tabular domain. As such,\ngiven the prevalence of the SafetyGym domain, I am skeptical that no\nother method is able to solve the tasks attempted in this paper.\n\n## Minor Issues\n\nOn line 56, \"We denote sets by curly alphabets\" – I don't think\n\"alphabets\" is the right word, maybe \"braces\"?\n\nIn Definition 1, \"finte\" should be \"finite\".\n\nThe notation for the return in Definition 2 is not quite right. In\nparticular, you're simultaneously defining the return as a function on\n$\\mathcal{S}$ and on $\\Delta(\\mathcal{S})$.\n\nThe notation $v^\\pi_1$ in Definition 3 is technically not defined. 
I'm\nassuming this implicitly represents $v^\\pi_{r, h}$ where $r$ is the\nreward function in the MDP.\n\nIn theorem 1, you have defined $K := |\\mathcal{D}|$, so you can (and\nshould) replace the $\\forall (d, K)\\in\\mathcal{H}\\times\\mathbb{N}$ with\n$\\forall d\\in\\mathcal{H}$."
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 3
},
"contribution": {
"value": 3
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 2
},
"primary_area": null,
"questions": {
"value": "Concerning Propositions 1 & 2: \n\nIn proposition 3 and the subsequent paragraph (l. 738) it is stated that an estimated cost function can perfectly reconstruct a trajectory feedback dataset in the sense of Eq. 4) but it is not clear to me why we evalute the estimated cost function $\\hat{c}$ through the additional indicator function. In this example MDP, from what I can see a reasonable estimator $\\hat{c}_1$ should be able to converge to $\\hat{c}_1(s_1,a_2,s_2)=1$ and $0$ for any other possible trajectory. Would this estimate not enable one to recover safe policies? \n\nConcering the impossibility results with policy feedback: \n\nI fail to see the difference between the trajectory feedback setting and a policy feedback setting where annotators observe one trajectory sampled from policy $\\pi_k$. And consequently how could one obtain an impossibility result for one, if both feedback forms can be equal?\n\nI believe the overall contribution to be valuable to the literature and am willing to raise my score if the above points are addressed."
},
"rating": {
"value": 5
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 3
},
"strengths": {
"value": "The paper addresses a highly relevant problem, namely the inference of constrained objectives from coarsely-labelled data and its theoretical foundations: \n- The authors provide a thorough theoretical analysis of this learning paradigm in the case of trajectory feedback and policy feedback and show (to my knowledge) novel insights regarding the feasibility of constraint learning depending on the type of feedback. \n- The main theoretical finding establishes a risk bound that quantifies the learning complexity of learning constraints in terms of (among others) trajectory count, model class and permitted constraint violation probability. \n\nThese results are relevant to the field, as they lay a sound motivation for practical algorithms. The empirical results are furthermore reasonably aligned with the theoretical analysis."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "This paper provides a theoretical analysis on the feasibility of constraint learning from two types of human feedback and provides results quantifying the learning complexity of multiple constraint learning from trajectory-based feedback. The authors furthermore introduce a policy-gradient formulation based on the lagrangian objective corresponding to the latter theoretical result and develop a practical algorithm, labelled CCPG, that jointly learns constraints and a policy from trajectory-labelled data."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "- The proof of proposition 1 leaves some questions open to me (see questions).\n- I believe, a natural addition that many readers would appreciate in this context is if the analysis extended to preference-based feedback.\n- In addition to the point above, it would be interesting to see an experimental confirmation of the contents of proposition 1 and 2 (i.e. the learnability of constraints from diffferen types of feedback).\n\n\nGenerally speaking, the paper is written in concise and clear language, however there are a number of typos and sentences/notation I would consider rephrasing:\n- line 39, \"comprehensive\" is too strong in my view\n- line 40, \"fills\"\n- line 51, \"a more general constraints\"\n- line 53, \"empirical validated\"\n- line 57, I don't know the term \"curly alphabets\"\n- line 65, \"finte\"\n- line 79, the shorthand for the value function taking a probability distribution as an input should be mentioned\n- line 110, \"a constrained RL\""
},
"withdrawal_confirmation": null
},
{
"TLDR": null,
"_bibtex": null,
"abstract": null,
"anonymous_url": null,
"authorids": null,
"authors": null,
"code_of_conduct": {
"value": "Yes"
},
"code_of_ethics": null,
"comment": null,
"confidence": {
"value": 4
},
"contribution": {
"value": 2
},
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": {
"value": [
"No ethics review needed."
]
},
"keywords": null,
"large_language_models": null,
"no_acknowledgement_section": null,
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": null,
"presentation": {
"value": 1
},
"primary_area": null,
"questions": {
"value": "1) What do $q$ and $P$ represent in Algorithm 1?\n2) Could the authors discuss the reasons behind the constraint violations observed in CCPG during the experiments, as well as the differences in performance between CCPG w/PC and CCPG w/TC across different environments?"
},
"rating": {
"value": 3
},
"reciprocal_reviewing": null,
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": {
"value": 2
},
"strengths": {
"value": "1) CLHF is a valuable research direction and it is important for both reinforcement learning applications and the security of large language models (LLMs).\n2) The authors propose a new decision function based on trajectory feedback and provide the theoretical analysis for the proposed decision function."
},
"student_author": null,
"submission_guidelines": null,
"summary": {
"value": "The paper focuses on constraint learning from human feedback (CLHF) and proposes a new decision function based on trajectory feedback for CLHF. It demonstrates that cost functions learned from feedback may not accurately represent the true constraints, which can result in policies that appear safe but are not actually safe in the true environment. The authors also provide some theoretical analysis to support their claims."
},
"supplementary_material": null,
"title": null,
"venue": null,
"venueid": null,
"weaknesses": {
"value": "1) There is no clear representation of the motivation and contributions of the paper.\n2) The theoretical analysis is limited, aspects such as convergence analysis and constraint violations related to constraint learning are not addressed.\n3) The overall presentation of the paper might benefit from improvements, as it does not clearly convey its main claims and contains some expression errors.\n4) The experimental results do not seem to adequately support the theoretical analysis. For example, Figure 3 shows significant constraint violations in CCPG w/PC.\n5) The paper lacks additional necessary experiments, including comparison experiments, ablation studies, and hyperparameter analysis, etc.\n6) The baselines and experimental environments in the paper are too few to illustrate the validity of the method.\n7) There is no code or detailed implementation description provided to support the reproducibility of the results."
},
"withdrawal_confirmation": null
},
{
"TLDR": {
"value": "We investigate constraint learning from theoretical perspective and provide a scalable algorithm"
},
"_bibtex": {
"value": "@inproceedings{\nanonymous2024foundation,\ntitle={Foundation of Scalable Constraint Learning from Human Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=zBrjRswpkg},\nnote={under review}\n}"
},
"abstract": {
"value": "Constraint learning from human feedback (CLHF) has garnered significant interest in the domain of safe reinforcement learning (RL) due to the challenges associated with designing constraints that elicit desired behaviors. However, a comprehensive theoretical analysis of CLHF is still missing. This paper addresses this gap by establishing a theoretical foundation. Concretely, trajectory-wise feedback, which is the most natural form of feedback, is shown to be helpful only for learning chance constraints. Building on this insight, we propose and theoretically analyze algorithms for CLHF and for solving chance constrained RL problems. Our algorithm is empirically shown to outperform an existing algorithm."
},
"anonymous_url": {
"value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity."
},
"authorids": null,
"authors": null,
"code_of_conduct": null,
"code_of_ethics": {
"value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
},
"comment": null,
"confidence": null,
"contribution": null,
"desk_reject_comments": null,
"details_of_ethics_concerns": null,
"flag_for_ethics_review": null,
"keywords": {
"value": [
"RLHF",
"RL",
"Constraint Learning",
"Theoretical Analysis"
]
},
"large_language_models": null,
"no_acknowledgement_section": {
"value": "I certify that there is no acknowledgement section in this submission for double blind review."
},
"other_comments_on_LLMs": null,
"paperhash": null,
"pdf": {
"value": "/pdf/89322f47b029a2433f8ec66e2e7c3dcae702a722.pdf"
},
"presentation": null,
"primary_area": {
"value": "reinforcement learning"
},
"questions": null,
"rating": null,
"reciprocal_reviewing": {
"value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6."
},
"resubmission": null,
"revert_desk_rejection_confirmation": null,
"revert_withdrawal_confirmation": null,
"soundness": null,
"strengths": null,
"student_author": null,
"submission_guidelines": {
"value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
},
"summary": null,
"supplementary_material": null,
"title": {
"value": "Foundation of Scalable Constraint Learning from Human Feedback"
},
"venue": {
"value": "ICLR 2025 Conference Submission"
},
"venueid": {
"value": "ICLR.cc/2025/Conference/Submission"
},
"weaknesses": null,
"withdrawal_confirmation": null
}
] |