| Column | Dtype | Stats |
|---|---|---|
| id | string | lengths 10–10 |
| title | string | lengths 3–179 |
| track | string | 1 value |
| status | string | 3 values |
| keywords | string | lengths 2–2.39k |
| primary_area | string | 21 values |
| author | string | 501 values |
| authorids | string | 501 values |
| aff | string | 1 value |
| aff_domain | string | 1 value |
| position | string | 1 value |
| rating | string | 355 values |
| confidence | string | lengths 0–19 |
| soundness | string | 642 values |
| contribution | string | 596 values |
| presentation | string | 782 values |
| rating_avg | float64 | 0–9 |
| confidence_avg | float64 | 0–5 |
| soundness_avg | float64 | 0–4 |
| contribution_avg | float64 | 0–4 |
| presentation_avg | float64 | 0–4 |
| corr_rating_confidence | float64 | -1–1 |
| project | string | 1 value |
| github | string | 1 value |
| Review | list | lengths 2–10 |
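Each record in this dump pairs a submission's metadata with its per-review scores (stored as semicolon-separated strings) and the full review objects (the `Review` list). The snippet below is a minimal sketch of how such a dump could be loaded and inspected with the Hugging Face `datasets` library; the dataset path is a placeholder, not the actual identifier of this dump.

```python
from datasets import load_dataset

# Placeholder path -- substitute the real identifier of this OpenReview dump.
ds = load_dataset("your-org/iclr2025-openreview", split="train")

print(ds.features)          # column names and dtypes, matching the schema above
row = ds[0]                 # one submission as a plain Python dict
print(row["id"], row["title"])
print(row["rating"])        # e.g. "3;3;3;5;5" -- one score per review
print(len(row["Review"]))   # number of review/metadata objects (2-10 per the schema)
```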
id: xUHL8mtSUL
title: Scalable Gaussian Process via Hilbert-Schmidt Singular Value Decomposition
track: main
status: Active
keywords: Scalability;Gaussian process regression;Hilbert Schmidt singular value decomposition;compact Mat\'ern
primary_area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
rating: 3;3;3;5;5
confidence: 4;3;4;4;5
soundness: 2;2;2;2;2
contribution: 2;2;1;1;2
presentation: 2;2;2;3;2
rating_avg: 3.8
confidence_avg: 4
soundness_avg: 2
contribution_avg: 1.6
presentation_avg: 2.2
corr_rating_confidence: 0.645497
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Some questions and details that came to my mind while making the revision of the manuscript are:\n\nQ1: Some comments are added in the review of SOTA methods and background about pre-processing and how problematic this is. What do pre-processing steps mean for the authors? and which method is it referring to?\n\nQ2: Paragraph in L212: What are the real implications of the use of compact Matern kernel for RBFs. Are we limited on the type of covariance functions used, right?\n\nQ3: The nugget for numerical stability… but what is actually the dimension/order of this one?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "It is always nice to see contributions to the scalability of GPs and particularly GPR. Despite the fact that I do not share some of the thoughts/ideas around the comments on the SOTA methods, I think the paper has a valuable point in certain directions. The Mercer's Theorem is well-known in the GP community and has been considered since the beginning of the interaction between GPs and ML. Perhaps, the main strength is the combination of the HS-SVD together with the pseudo-inverse taken from Pozrikidis (2014) and later the Sylvester determinant theorem. Even if there are some details or deeper analyses missing in such directions, I think that putting these methods together is a powerful message to look back to in these times when scalability matters a lot." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for Gaussian process regression (GPR) that reduces the classical problem of cubic and quadratic complexity in GPs. In particular, the work exploits the well-known Mercer decomposition of kernels, such that the covariance function can be expressed as an infinite sum of eigenfunctions. Under this condition, it is possible to use a special type of SVD to obtain an usable decomposition of the covariance matrix. Under the assumption of compact Matern kernels and the truncation of eigenvalues, the performance is compared on 4 different simulations with synthetic data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My gut feeling with the submission is that it has yet limitation and details that are not clear enough to validate the quality of the contribution. Some thoughts:\n\n[W1] - Mercer's theorem is a well-known method, as well as the inverse approximation Sherman–Morrison–Woodbury formula which is basically a special case of the Woodbury matrix identity. The use of Sylvester's determinant is also common in the community, as far as I remember. 
In this regard, the HD-SVD is the only method that is kind of new to me here, but I am not yet sure if the combination of these together makes a super strong contribution.\n\n[W2] - The criticism + comments around the whole literature of GPs are somehow vague, in the sense that so much detail and effort has been put on them, and are kind of discarded due to high-level opinions and not much precision on their issues and limitations. In this sense, GPR and its scalability is super-explored since 20y ago.. Then, what concerns me is that near-zero analysis is derived around the quality of the inverse for example, the effect of assuming the compact Matern kernel or the first paragraph in section 3.2 around the noise term and how reasonable is that. To me, this last detail is the worst one so far of the paper, in the sense that it is not super scientific (what is the order of the noise term, everyone can add to any ML model some noise to the data and observe nicer properties and it is easily modeled by an isotropic Gaussian right?)\n\n[W3] - I do not buy the technique of truncating the eigenvalues in the HS-SVD and the justification that it stabilizes everything. How many of them are truncated? Little details on this point are added in my opinion.\n\n[W4] - The fact that only synthetic-data simulations are used in the submission tends to be a bad sign in the ML/GP community, even more if the main motivation is scalability and large-scale data.." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Along with little experimental support, the paper also doesn't provide any theoretical support in terms of bounding the mean squared error for using the HS-SVD of the compact Matern instead of the full kernel. Can anything be said about the value of the approximate MSE due to using HS-SVD with respect to the true MSE if the full kernel were used?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The generalization of the 1D compact Matern kernel to higher dimensions is interesting. It will be useful to fully investigate and understand the properties of this kernel. In the experiments provided, the use of this kernel shows considerable promise. However, there are quite a few weaknesses of paper as pointed out below." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new technique for Gaussian process regression based on the Hilbert-Schmidt singular value decomposition (HS-SVD) of the compact Matern kernel. The compact Matern kernel proposed in the paper is a generalization of the 1-D compact Matern proposed in Cavoretto et al., 2015 to higher dimensions. 
The eigenvalues and eignevectors of the compact Matern kernel can be computed easily via simple closed-form expressions and thus, it is possible to compute the Mercer decomposition of this kernel easily, unlike commonly used kernels like Radial Basis Function (RBF), Matern, and exponential kernels. Thus, one can find the HS-SVD of the compact Matern kernel by keeping only the first m eigenvalues and corresponding eigenvectors. Instead of computing and storing the complete $n \\times n$ Kernel matrix, the storage cost is reduced to $O(nm)$. Operations like computing the inverse and log determinants of the kernel also become cheaper and the time complexity is reduced to $O(nm^2)$ from $O(n^3)$. Finally, some experiments using synthetic datasets are performed to show the effectiveness of the proposed approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Insufficient experiments/evidence: The main value proposition of the paper seems to be the introduction of the generalized compact Matern kernel whose HS-SVD can be computed effectively. Some experiments are performed for nonlinear function approximation but much more evidence is required to accurately judge how well this method is truly outperforming. There is also no theoretical support showing how well the proposed approach is performing (see questions).\n\nApart from the points mentioned above, the paper doesn't clearly mention how to choose the parameter m (the dimension of the SVD) which is crucial to the method. In general, is there an empirical rule of thumb or some known theoretical bound that could guide the choice of m? The same point holds for the other parameters of the compact Matern kernel $\\rho, \\alpha, \\beta$.\n\n2) Missing related work: There have been works that use SKI and take time almost linear in m where m is the number of “inducing points” and also doesn't scale exponentially with the input dimension r. See for example https://arxiv.org/pdf/2305.14451. Specifically, in the caption under Table 3, the authors note that methods like SKI cannot be sued since the the number of grid points grows exponentially with r. However, it seems that the above paper tackles this very problem.\n\n3) The messaging of the paper seems to be confusing in some places. Most of the ideas presented in the paper like using an approximate Mercer decomposition of kernels have been widely known and studied. The main contribution of the paper seems to be the construction of the generalized compact Matern kernel whose HS-SVD can be computed effectively. But, for example, in the discussion section 5, the authors claim that they introduced the HS-SVD method and constructed the compact Matern kernel as an illustrative example. However, as pointed out above, HS-SVD (which basically follows from the Mercer decomposition) has been pretty well known. So, to claim that they introduced the HS-SVD method itself is a bit misleading I feel. Especially since they don't provide any algorithm for obtaining the HS-SVD of any other kernels.\n\nOverall, I feel that this paper needs to be improved before it can be considered for publication at ICLR. Hence, I'm recommending a reject." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In your approximation of page 4, you simply do truncation of the representation. Do you have a theory for the approximation? Or small m can make the approximation not trustworthy.\n2. You spend a great extent discussing the smoothness of the compact Matern kernel, is the result new?\n3. Could you add some real-world datasets in the experiment parts?\n4. Have you compared your method with the method in 'Sample and Computationally Efficient Stochastic Kriging in High Dimensions'? As far as I know, this work also states that they achieve the SOTA. You can include a comparison with this method in experiments, or explain why such a comparison was not included." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea is novel, using HS-SVD to help reduce computational cost." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the computational issue in the GP area, which is widely known as a bottleneck in large-scale GP applications. THe paper proposes a method based on the Hilbert-Schmidt singular value decomposition that obtains a low-rank decomposition“for free”, reducing both time complexity and space complexity." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Writing: many sections use \"fast\" as the title. It may lead to confusion about what the \"fast\" really means. You should add something related to complexity in it to make it clearer.\n2. Experiment: I can not see the experiment on the real dataset. Can you explain why you do not do experiments on the real dataset?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In addition to the weaknesses, the following questions need to be addressed:\n\n**Q1.** In Definition 2.4, the authors state that they use a truncated form of the HS-SVD for computation. However, reference [1] suggests that truncating the series may result in deviations from the standard kernel space span. 
How do the authors determine the truncation order, and what measures are taken to ensure that the truncated form remains within the kernel space span?\n\n**Q2.** Following Q1, the authors also mentioned that \"the truncation strikes a balance between computational efficiency and the accuracy of the low-rank approximation\". Can the authors provide a quantification or analysis of this trade-off? The authors may refer to reference [2] for a similar analysis.\n\n**Q3.** In Section 3.1, the authors introduce the compact Matérn kernel. How does this kernel differ from the iterated Brownian bridge kernel discussed in reference [1]? In addition, [1] also mentioned that it is possible to extend their kernel to high-dimensional case by using tensor products, how does this compare to the proposed kernel in Definition 3.2?\n\n**Q4.** In Section 3.2, the authors claimed that their proposed method can reduce the computational complexity of the matrix inversion from $O(n^3)$ to $O(m^3)$. However, this appears similar to existing low-rank approximation methods, such as the Nyström approximation, which also achieves the same complexity. Can the authors clarify how their approach is advantageous compared to these established methods? It is suggested to add a table comparing the proposed method with existing methods, in terms of the time and space complexities.\n\n**Q5.** In Section 3.3, the authors claimed that truncating the small eigenvalues improves stability by removing ill-conditioned parts of the kernel matrix. Can the authors provide theoretical justification for this claim? The reviewer also did not find any experiments supporting this claim.\n\n**Q6.** The authors use runtime (in seconds) and RAM/VRAM usage as metrics for performance evaluation; however, these are hardware-dependent metrics. Could the authors also provide hardware-independent metrics such as FLOPS and arithmetic intensity?\n\n**Q7.** In Section 4, the authors claimed that \"by integrating the strength of dimension reduction and variational inference methods with HS-SVD’s low-rank decomposition, we could achieve the best of both worlds—enhancing stability while maintaining fast runtime and low memory costs.” Can the authors perform an experiment/ablation study to support this claim?\n\n### References\n[1] Cavoretto, R., Fasshauer, G.E. and McCourt, M., 2015. An introduction to the Hilbert-Schmidt SVD using iterated Brownian bridge kernels. *Numerical Algorithms*, 68, pp.393-422.\n\n[2] Griebel, M., Rieger, C. and Zwicknagl, B., 2015. Multiscale approximation and reproducing kernel Hilbert space methods. *SIAM Journal on Numerical Analysis*, 53(2), pp.852-873." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of the paper are as follows:\n\n**S1.** Proposes a method for obtaining low-rank approximations of kernel matrices based on HS-SVD, that requires minimal tuning or preprocessing and does not depend on GPU resources.\n\n**S2.** Discusses the numerical advantages of the proposed method, particularly in reducing both computational and memory cost.\n\n**S3.** Empirically demonstrates that the proposed method is superior to the baselines on simulated large-scale datasets." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the scalability issue of Gaussian process regression (GPR), which suffers from cubic time complexity and quadratic space complexity when dealing with large datasets. The authors propose a scalable framework based on the Hilbert-Schmidt singular value decomposition (HS-SVD) to achieve a more efficient low-rank approximation of the kernel matrix, thereby reducing the time complexity to $O(nm^2)$ and space complexity to $O(nm)$. Empirical results demonstrate that the proposed framework outperforms existing methods in terms of runtime and memory usage on simulated large-scale datasets, while requiring minimal preprocessing and being accessible without GPU resources." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are several weaknesses of the paper:\n\n**W1.** The novelty of the proposed method is questionable, as the HS-SVD has already been introduced and thoroughly studied in [1]. Many definitions, lemmas, and theorems in this work are derived from [1]. For instance, Definitions 2.4 and 3.4 are adapted from [1] without proper citations.\n\n**Suggested action:** The authors should clarify their contributions in relation to [1], and ensure proper attribution for all borrowed sections, including definitions and theorems.\n\n**W2.** The paper lacks adequate theoretical analysis and justification regarding several key aspects:\n- There is no clarification on how the authors determine the truncation order (see Q1), nor is there any quantification or analysis of the corresponding trade-off (see Q2). Specifically, it is unclear how the authors chose the truncation order and whether they have tested other values to obtain the results presented in Tables 1-4.\n- There is no justification for how truncation enhances stability by eliminating ill-conditioned parts of the kernel matrix (see Q6).\n- The authors do not explain how the proposed method can be integrated with existing dimension reduction and variational inference methods, nor how this integration could result in faster runtime and lower memory costs (see Q8).\n\n**Suggested action:** The authors should consider conducting an ablation study to analyze the effect of different truncation orders, and/or provide a theoretical bound on the approximation error.\n\n**W3.** Most of the baseline methods used in experiments are relatively outdated, with many published within 5-10 years ago. The authors should consider including more recent state-of-the-art methods to evaluate their method. Some relevant works are as follows:\n- Wu, K., Wenger, J., Jones, H.T., Pleiss, G. and Gardner, J., 2024, April. Large-scale Gaussian processes via alternating projection. In *International Conference on Artificial Intelligence and Statistics* (pp. 2620-2628). PMLR.\n- Allison, R., Stephenson, A. and Pyzer-Knapp, E.O., 2024. Leveraging locality and robustness to achieve massively scalable Gaussian process regression. *Advances in Neural Information Processing Systems*, 36.\n- Li, K., Balakirsky, M. and Mak, S., 2024, April. Trigonometric Quadrature Fourier Features for Scalable Gaussian Process Regression. In *International Conference on Artificial Intelligence and Statistics* (pp. 3484-3492). PMLR.\n- Noack, M.M., Krishnan, H., Risser, M.D. and Reyes, K.G., 2023. Exact Gaussian processes for massive datasets via non-stationary sparsity-discovering kernels. 
*Scientific reports*, 13(1), p.3155.\n\n**Suggested action:** The authors should consider incorporating more recent works as baselines (e.g., the ones listed above) and discuss how their method compares, both theoretically and empirically, to the updated baselines.\n\n**W4.** The experimental validation is insufficient. While the proposed method shows superior results on simulated datasets, it lacks evaluation on real-world datasets. Additionally, most simulations only use $n = 10,000$ samples, which is not convincing enough to support the claim that their method is suitable for large-scale settings. It is recommended that the authors include experiments on larger real-world datasets to strengthen their results. Please refer to the experiments in the references mentioned against W3.\n\n**Suggested action:** The authors should conduct additional experiments using real-world datasets, such as those from the [UCI Machine Learning Repository](https://archive.ics.uci.edu), which contain millions to tens of millions of samples, to better evaluate the scalability of their method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Suggestions: \n- Describe exact approaches to scale GPs in the literature review.\n- Explain better why your approach is novel, given that applying decomposition of the kernel is not.\n- The Simulations should not focus on a particular synthetic function in low dimensions. It should be the MLE, RMSE, CRPS (...) of the approximation for a variety of machine-learning-relevant datasets. One of them could be a synthetic function. In the simulations, you might also compare to the Vecchia approximation (which should be presented in the intro). The Vecchia approximation is the state-of-the-art in statistics. \n\nQuestions:\n- How would the method change for non-stationary kernel designs? It is my assessment that recent applied studies increasingly try to take advantage of non-stationarity. \n\n- Are you confident about the correctness of your MSE scores? They seem large for the given function." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The manuscript is clearly written. \n- While not novel in itself (in my opinion), the content might very well be of interest to readers. \n- Scalable GPs are a hot topic.\n- The method is easy to implement, so it has a good chance to be applied." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript proposes a methodology to scale up Gaussian processes, focusing on Gaussian process regression. The core idea is to decompose the covariance matrix via Hilbert-Schmidt singular value decomposition that, in principle, can obtain a low-rank representation for free. The method is explained and a pseudocode is offered. 
The method is then tested in 5 different simulations. \n\nOverall, I enjoyed reading this paper. It is well-written and easy to read. However, there are some significant shortcomings regarding novelty, the literature survey, and the test results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(A) Major:\n(1) Novelty: The idea of singular value decomposition for GPs is not new and was even described in the GP cookbook (Williams and Rasmussen). The principle has been applied in various papers, including [Sivaram Ambikasaran, Daniel Foreman-Mackey, 2015] and [Drineas, P. and Mahoney, M. W. (2005)]. I fail to recognize the novelty of the proposed approach, since, in general, decomposing the kernel into eigenvalues and eigenfunctions is well-known (as is even stated in the manuscript). Please clarify what part of the \"Method\" section is in fact novel. One piece of potential novelty might come from the kernel which I address in the next comment. \n\n(2) Methods for scaling Gaussian processes have recently moved away from the limitations to a particularly (often stationary) kernel design. The community has come to the revelation that flexible kernel designs are what makes GPs powerful function approximation tools. Dictating one or the other kernel design is therefore counter-productive. Please describe how the method could be extended to non-stationary kernels. \n\n(3) Simulation Experiments. The simulations all use one particular function (in 1d or 2d) with points on a grid or randomly distributed. At the very least the method has to be tested on some real higher-dimensional datasets. There are so many to choose from and really any will do, but one particular synthetic function is not sufficient to demonstrate performance. Also, I am concerned about the MSE error scores (MSE is not optimal but more on that later). The MSE seems very high for all methods. The function is approximately bounded by [-1,1] in 1d and [-.2,.2] in 2d; an MSE of 0.1 seems high, especially for large datasets. I might have missed something here but this was a concern. Next, assuming gridded data is pretty unrealistic and those tests don't have too much value. My suggestions for improvements are (a) new test datasets from various fields (topography, weather data, Housing data (available through scikit-learn)), robot-datasets, and so on, (b) add the CRPS and the Negative log predictive density scores to the results. \n\n(4) Literature overview: The introduction focuses on prior work in approximate GPs. Recent work has resulted in methods to scale up exact GPs. \nExact Gaussian Processes on a Million Data Points\nKe Alexander Wang, Geoff Pleiss, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger, Andrew Gordon Wilson\n\nExact Gaussian processes for massive datasets via non-stationary sparsity-discovering kernels\nMM Noack, H Krishnan, MD Risser, KG Reyes \n\nIn addition, it seems that the Vecchia approximation is neither discussed nor compared to. \nPlease add a short discussion to the manuscript and a reason why those methodologies were not compared. \n\n(B) Minor:\n(1) The MSE alone is not a good score to judge the performance of a GP. It should be augmented by the CRPS. \n(2) Deep Kernel Learning is mixed in the intro with methods for scalability. That seems to be coming a little bit from left field, since the main purpose DKL is a flexible way to achieve non-stationarity, not scalability. \n(3) The comments on ill-conditioning surprised me. 
With a realistic noise model and a PSD kernel, ill-conditioning should never be an issue." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024scalable,\ntitle={Scalable Gaussian Process via Hilbert-Schmidt Singular Value Decomposition},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xUHL8mtSUL},\nnote={under review}\n}" }, "abstract": { "value": "Gaussian process regression is widely used for its flexible mean predictions and inherent uncertainty quantification. However, its scalability is limited by cubic time complexity, $O(n^3)$, and quadratic space complexity, $O(n^2)$, making it infeasible for large-scale datasets. Although recent advances have introduced approximate methods with time complexity $O(nm^2)$, where $m\\ll n$ is a tuning parameter, these methods each have their own bottlenecks, such as requiring a relatively large $m$ or involving expensive preprocessing steps. Moreover, for extremely large datasets with millions of samples, the space complexity $O(n^2)$ becomes another significant bottleneck. In this paper, we present a novel method based on the Hilbert-Schmidt singular value decomposition that obtains a low-rank decomposition ``for free\", reducing both time complexity to $O(nm^2)$ and space complexity to $O(nm)$, with no preprocessing overhead. We used simulated large-scale datasets to demonstrate the performance of our method compared to state-of-the-art approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Scalability", "Gaussian process regression", "Hilbert Schmidt singular value decomposition", "compact Mat\\'ern" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/da401bb55ca4ae8c76df31c5c3cdda3541cd6e7d.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Scalable Gaussian Process via Hilbert-Schmidt Singular Value Decomposition" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: xUMI52rrW7
title: Structural-Entropy-Based Sample Selection for Efficient and Effective Learning
track: main
status: Active
keywords: Sample selection;graph;structural entropy;blue noise sampling
primary_area: other topics in machine learning (i.e., none of the above)
rating: 3;5;5;8
confidence: 3;4;4;4
soundness: 2;2;2;3
contribution: 2;2;2;3
presentation: 3;3;3;3
rating_avg: 5.25
confidence_avg: 3.75
soundness_avg: 2.25
contribution_avg: 2.25
presentation_avg: 3
corr_rating_confidence: 0.727607
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Are there anything you can prove about the node-level structural entropy values?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, the proposed algorithm is intriguing due to its general empirical performance and some of the novel ideas it introduces. The concept of node-level structural entropy, which quantifies the contribution of an individual node to the global structural entropy of a graph, could be of independent interest. The experiments indicate that the proposed method outperforms existing selection methods in many common learning tasks. Furthermore, ablation studies validate the contribution of each module (node-level structural entropy and importance-biased blue noise sampling) to the overall effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies sample selection which aims to extract a small, representative subset from a larger dataset. The authors introduce a novel sample selection scheme, termed Structural-Entropy-based Sample Selection (SES), which uses and extends the concept of \"structural entropy\" (Li and Pan, 2016), which assesses how nodes and edges within a graph are hierarchically organized to form multi-level communities. The proposed scheme seeks to address the limitations of existing selection methods, which often prioritize local information and neglect the broader, global context of the samples.\n\nThe algorithm begins by constructing a k-NN graph G for the dataset based on similarity. It then calculates the structural entropy value for each node in G (referred to as node-level structural entropy). While structural entropy was originally defined for an entire graph rather than individual nodes, the authors extend this concept using the Shapley value method. Roughly speaking, node-level structural entropy calculates the average increase in structural entropy when a node is added to all potential subgraphs of G. After that, each node is assigned an importance value, which is the product of its structural entropy and training difficulty. An importance-biased blue noise sampling method is then used to select samples. instead of sampling solely based on importance scores, this sampling process prevents the selection of overly similar samples, thus maintaining diversity within the selected subset.\n\nThe effectiveness of the SES scheme is demonstrated through experimental studies and compared to other selection methods in various tasks such as supervised learning, active learning, and continual learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe the results could benefit from stronger theoretical justification. 
While the high-level ideas are reasonable, the mathematical properties are not mentioned in this paper. A deeper discussion on these aspects should be provided. Especially I am interested in what are the mathematical properties of the node-level structural entropy." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1, The construction method of the Encoding tree is challenging. The author mentioned the Huffman tree construction; how does this affect the conclusions of this article?\n\n2, Equation (5) in the paper calculates the overall importance score of the node. Is this importance score node-wise or point-wise sampling? A data point may appear in multiple nodes, does this affect the proposed sampling method?\n\n3, Have you considered other ways of combining global structural entropy and local training difficulty indicators, rather than through the multiplication method (S(u) = Se(u) * St(u))?\n\n4, Can you provide more theoretical explanations or proofs to demonstrate why structural entropy, as a global indicator, is helpful for considered tasks (e.g. supervised, active, and continual learning)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1, The method utilizing the encoding tree effectively measures local structural information across different scales, providing valuable insights. Various experiments on different learning tasks demonstrate SES’s effectiveness.\n\n2, Structural entropy is decomposed from the overall graph level to individual nodes, resulting in a node-level definition of structural entropy. By leveraging properties of the Shapley value, authors show that the Shapley value of a node can be computed in linear time relative to the number of edges." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a Structural-Entropy-based sampling (SES) method for data selection. This approach integrates global structural information, quantified using structural entropy, and local information (training difficulty) to choose informative and representative samples for training. The authors show that incorporating global structural information can improve the quality of sample selection. Traditional methods often focus on local data properties, such as training difficulty, while ignoring broader connectivity patterns. SES constructs a sample graph using k-nearest neighbor (kNN) relationships to model sample similarity and applies structural entropy at a node level to evaluate each sample’s significance in preserving the overall data structure." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1, The overall idea is relatively simple and incremental, mainly based on the concept from Li and Pan 2016. 
Also, the proposed method is very heuristic without solid theoretical validation. For example, in line 212-213, \"Given our emphasis on ..., we only use....\". Can you give more detailed explanation? In my opinion, this is not a serious claim for a research article. In line 352, you say \"10X speedup\" because you use only 10\\% of dataset. Can you provide the realistic experimental time for this claim? Do you count the construction time of your sample? \n\n2, While the Shapley value of a node can be computed in linear time relative to the number of edges, the edges in a kNN graph is O(k|X|). It is still time consuming to calculate the Shapley values for all nodes of the encoding tree, especially for large dataset.\n\n3, The author does not clearly explain why preserving the global structure of the dataset is beneficial or provide theoretical guarantees regarding its impact on performance in tasks such as supervised, active, and continual learning. Similarly, the rationale for prioritizing samples with high local information (i.e., training difficulty) is insufficiently justified.\n\n4, The blue noise sampling method effectively promotes diversity, yet the balance between selecting challenging samples and maintaining sample diversity could be further clarified. A comparative analysis with methods that explicitly optimize for diversity, such as clustering-based selections, would help to clarify the advantages of SES.\n\n5, The method relies on the quality of the embedding used to construct the kNN graph. If the embedding representation is poor, structural entropy may not accurately capture the global structure of the data.\n\n6, Some experimental parts are not sufficient. For example, in the continual learning part, they only consider three baselines and two memory sizes (100, 200). Moreover, do you consider both Class-Incremental Learning and Task-Free Continual Learning?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.The writing of this paper is good and easy to follow.\n\n2.This paper has a certain degree of innovation, introducing a structural entropy as a global metric for sample selection." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel sample selection method that can capture both local and global information of data simultaneously. Specially, first, they use a $k$-NN to construct the sample graph of original data. Second, they employ structural entropy to measure global information and use training difficulty to capture the local information of the graph. At lats, they utilize the importance-biased blue noise sampling to select a set of diverse and representative samples. 
\n\nThe main contribution of this paper is to propose a new global metric for sample selection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Hypergraphs [1] can capture more complex relationships, and I believe that if the authors had used hypergraphs instead, the results would have been even better. I would be delighted to see the authors add new experiments to verify my hypothesis.\n\n2. How would the authors define informative and representative samples? They mention these concepts multiple times, but do not provide a detailed explanation.\n\n3. Taking sample as a node is not very reasonable, as it treats the sample as a whole and ignores the local information contained in the sample itself. Perhaps using a Hyper-class representation [2] would yield better results. I would be delighted to see the authors conduct new experiments to verify my hypothesis.\n\n4. The experimental results do not show the variance, so I hope the authors can make up the variance.\n\n5. Just a little confused, why most of the comparison algorithm's experimental results are not as effective as random selection? So what's the meaning of those comparison algorithms? Or are those comparison algorithm chosen by the author appropriate? Are the parameters of the those algorithms not optimally tuned?\n\n6. The effect of blue noise sampling (BNS) is not as good as message passing (MP) when the sampling rate is greater than or equal to 20%, why not directly use MP? Simply because BNS does not require hyperparameter tuning, it sounds far-fetched.\n\n7. As can be seen from Figure 4, (b) has better boundary division, while (c) has several categories of samples grouped together. It seems that (b) is better. How can we understand this phenomenon?\n\n[1] Learning with hypergraphs: Clustering, classification, and embedding.\n\n[2] Hyper-class representation of data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea is interesting and reasonable.\n2. The experiments are sufficient and the experimental results are good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a sample selection method with structural entropy. It first introduces the structural entropy of a graph into the sample selection task, and then applies the Shapley value to decompose the structural entropy of a graph to the node-level structural entropy. At last, it designs a score for sample selection with the node-level structural entropy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The main techniques used in this paper are all existing methods, e.g. the structural entropy and Shapley value. The paper seems an application of these techniques in the sample selection task. The paper should clarify its novelty and technical contributions.\n\n2. The paper derives Eq.(3) from the Shapley value, and then removes the second term, leading to Eq.(4), which is used in the method. However, the motivation is unclear. Why should we remove the second term? What is the advantage of removing this term? The ablation study of this term should be conducted in the experiments. The paper only claims that they want to design a global metric and the second term is a local metric. I do not think it's convincing enough. What is the advantage of only considering global information? Moreover, in Eq.(5), the method combines the global score S_e and the local score S_t. It seems a little contradictory. In Eq.(4), they do not want the local term but in Eq.(5) they need the local term. It seems strange and not well-motivated.\n\n3. Eq.(5) needs more justification. Why should we multiply S_e and S_t? What is the advantage? Why not just sum S_e and S_t, or combine them with other forms? I think more ablation study is needed to justify the effectiveness of Eq.(5).\n\n4. In the experiments, the experimental setup should be introduced in more detail. For example, what is the difference between the supervised setting and the active learning setting? They both need to select some samples to train the model and then is used to predict the test data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024structuralentropybased,\ntitle={Structural-Entropy-Based Sample Selection for Efficient and Effective Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xUMI52rrW7},\nnote={under review}\n}" }, "abstract": { "value": "Sample selection improves the efficiency and effectiveness of machine learning models by providing informative and representative samples. Typically, samples can be modeled as a sample graph, where nodes are samples and edges represent their similarities. Most existing methods are based on local information, such as the training difficulty of samples, thereby overlooking global information, such as connectivity patterns. This oversight can result in suboptimal selection because global information is crucial for ensuring that the selected samples well represent the structural properties of the graph. To address this issue, we employ structural entropy to quantify global information and losslessly decompose it from the whole graph to individual nodes using the Shapley value. Based on the decomposition, we present $\\textbf{S}$tructural-$\\textbf{E}$ntropy-based sample $\\textbf{S}$election ($\\textbf{SES}$), a method that integrates both global and local information to select informative and representative samples. SES begins by constructing a $k$NN-graph among samples based on their similarities. It then measures sample importance by combining structural entropy (global metric) with training difficulty (local metric). Finally, SES applies importance-biased blue noise sampling to select a set of diverse and representative samples. Comprehensive experiments on three learning scenarios --- supervised learning, active learning, and continual learning --- clearly demonstrate the effectiveness of our method." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sample selection", "graph", "structural entropy", "blue noise sampling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0ecccdff385bd6e7217224ca1dd20f835c9d5c3f.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Structural-Entropy-Based Sample Selection for Efficient and Effective Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
id: xVOMtecrAS
title: See Further When Clear: Adaptive Generative Modeling with Curriculum Consistency Model
track: main
status: Active
keywords: adaptive curriculum learning;noise schedule;flow matching;consistency models
primary_area: generative models
rating: 3;3;5;5
confidence: 5;4;4;3
soundness: 2;2;2;4
contribution: 2;2;3;2
presentation: 2;1;2;3
rating_avg: 4
confidence_avg: 4
soundness_avg: 2.5
contribution_avg: 2.25
presentation_avg: 2
corr_rating_confidence: -0.707107
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strategy is rather simple and can easily be applied to diffusion models and flow-based models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the Curriculum Consistency Model (CCM), which stabilizes and balances the learning complexity across timesteps. It defines the distillation process as a curriculum and introduces Peak Signal-to-Noise Ratio (PSNR) as a metric to quantify the difficulty of each step in this curriculum. Extensive empirical studies are performed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper writing is disastrous. There are too many expression and grammar errors. In particular, the paper tile emphasizes consistency models but the flow-based models are highlighted in the main context. \n\n- The multi-step iterative generation method has been explored by the literature, see [1]. The GAN trick is not motivated. \n\n- The paper should compare to more recent consistency models like improved CD, multi-step CM, etc. The arguments on the issues of CMs in this paper are partially problematic. \n\n[1] SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In CIFAR-10 experiment, CCM is only applied to FM models. Was there a reason for not applying it to diffusion-based models?\n- Prior works have reported that training of CM is difficult in terms of both stability and time complexity. Can you report those aspect of CCM as well?\n- Have you tried *fine-tuning* a pretrained DM or FM with the proposed training procedure? According to ECT it reduces the required training time and increases stability of training. \n- Does the proposed value of $T_\\text{SNR}$ also generally work well with other models / datasets? If not, \"our method is not very sensitive to PSNR\" might not be a valid claim." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper presents concrete motivation for each component of the proposed method. \n- The proposed method shows some promising empirical results.\n- The paper conducted various ablation studies to justify the importance of each proposed component." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel training procedure for consistency models, introducing: 1) a dynamically adjusted training objective based on learning difficulty, measured by PSNR, and 2) a distillation target acquired through multi-step generation. The resulting model, named the Curriculum Consistency Model (CCM), demonstrates performance that is comparable to, if not better than, existing baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The dynamical adjustment, which is one of the major contribution of this paper, resembles **continuous-time training schedule** of ECT [1]. \n- Multi-step iterative generations will increase the time complexity of training. \n- Performance on CIFAR-10 is very promising, however, on large scale datasets such as ImageNet 64x64 or CoCo2017, the gain is either marginal or none. If the proposed method do increase time complexity of training, this result is not promising enough.\n\n\n(Minor remarks)\n- Personally, I believe Chapter 2 is not so kind for readers especially if they are unfamiliar with flow models.\n- $G$ was not defined on line 151.\n- The (0, 1) or (0, T) convention of FM and diffusion models seem to be opposite of one another, making it hard to read.\n- Assigning $x_\\text{target}$ is missing in Algorithm 1.\n\n\n[1] Geng et al. \"Consistency Models Made Easy\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Q0 |** Please address the concerns and questions in Weaknesses.\n\n**Q1 |** PSNR is computed between the student and teacher outputs. If I understand correctly, and the teacher output corresponds to the $u$->1 mapping, then PSNR and the CM loss are quite similar. Why not simply use the CM loss as is?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "**S1 |** The paper addresses an important problem of improving the training of consistency models.\n\n**S2 |** The proposed learning strategy is reasonable and well-motivated. \n\n**S3 |** Compared with standard CD, CCM accelerates convergence and improves overall performance." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel training method for CMs that adaptively selects consequentive timesteps to ensure that the difficulty of the learning targets is maintained throughout the training process across different samples. The method is validated on CIFAR10, ImageNet64 and T2I generation, demonstrating strong performance when combined with adversatial training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**W1 |** Recent works [1,2] have also addressed the training complexity and instabilities in CMs and proposed various related training techniques to improve convergence. However, these works are neither discussed nor compared with.\n\nI believe the presence of [1,2] largely limits the main contribution and that CCM needs to be carefully compared against the techniques from these works, omitting the effect of the GAN loss.\n\n**W2 |** CCM has been applied only to FM models on CIFAR10 and ImageNet64. It makes comparisons with the original CD, CTM, iCT[1], and ECM[2] difficult. Could the authors evaluate CCM on the corresponding diffusion models with and without GAN loss?\n\n**W3 |** For T2I generation, the improvements seem negligible. FID, especially at 5K, is an unreliable metric, as noted in [3, 4]. Therefore, I believe the demonstrated FID gains may be unrepresentative or marginal. Could the authors conduct a human preference study or consider adding FID30K and alternative metrics, e.g., CMMD[3], ImageReward[5], and PickScore[6]? \n\n**W4 |** Inconsistent terminology and notation regarding the student, teacher and target models/outputs. Figure 1 denotes a teacher model that maps $u$ to 1, yet in Section 2, the teacher maps $t$ to $u$.\n\nL210-211: $x_{est}$ - teacher output, $x_{target}$ - student output. L252: $x_{target}$ - teacher output. Algorithm 1: $x_{est}$ is a student output and $x_{target}$ is missing.\n\n**W5 |** The related work section can be largely extended. I recommend citing and discussing CM-related works [1,2,7], as well as other distillation methods [8,9,10,11,12,13]. A comparison with these works would be highly beneficial.\n\n**W6 |** The experiment in Figure 7 needs more details.\n\n---\n[1] Song et al. Improved Techniques for Training Consistency Models, 2023\n\n[2] Geng et al. ECT: Consistency Models Made Easy, 2024\n\n[3] Jayasumana et al. Rethinking FID: Towards a Better Evaluation Metric for Image Generation, 2024\n\n[4] Podell et al. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis, 2023\n\n[5] Kirstain et al. Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation, 2023\n\n[6] Xu et al. ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation, 2023\n\n[7] Salimans et al., Multistep Distillation of Diffusion Models via Moment Matching, 2024\n\n[8] Berthelot et al. TRACT: Denoising Diffusion Models with Transitive Closure Time-Distillation, 2023\n\n[9] Luo et al. Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models, 2023\n\n[10] Yin et al. One-step Diffusion with Distribution Matching Distillation 2023\n\n[11] Yin et al. Improved Distribution Matching Distillation for Fast Image Synthesis, 2024\n\n[12] Zhou et al. Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation, 2024\n\n[13] Kim et al. 
PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher, 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Given the potential for other similarity metrics, such as SSIM or LPIPS, to assess similarity in pixel space and thus learning complexity, have the authors explored these options?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1 - The method is simple yet effective, demonstrating significant performance improvements over the base model in CIFAR and ImageNet settings.\n\nS2 - The experiments are thorough, covering both Diffusion and Flow models and spanning unconditional, class-conditional, and text-conditional generation tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a straightforward and effective approach for optimizing the training time grid for discrete consistency models, resulting in competitive performance across various image generation benchmarks. By defining a notion of \"learning complexity\" using PSNR, the work provides insights on the training dynamics of consistency models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1 - The concept of \"learning complexity\" lacks theoretical justification. In Sec 3, the authors propose that high complexity leads to confusion, while low complexity reduces learning efficiency, but these observations appear mostly empirical. Including more theoretically motivated explanations, as seen in [1] regarding sampling time grid optimization, especially their discuss on the relation between time grid and the KL upper bound, would add depth to the discussion.\n\nW2 - The authors should cite [2] and discuss its model, as it is closely related and also focuses on improving training efficiency in consistency models. Additionally, experiment results with NFE = 2 should be included for a more comprehensive comparison, as it's almost standard practice in most consistency model papers.\n\nW3 - The notation $G_\\theta$ is not clearly defined and is inconsistently used between Section 2.1 and Algorithm 1. In Section 2.1 the last variable in $G_\\theta$ seems to be the target time step, while in Algorithm 1. it is replaced with condition $c$. \n\n[1] Sabour, Amirmojtaba, Sanja Fidler, and Karsten Kreis. \"Align your steps: Optimizing sampling schedules in diffusion models.\" arXiv preprint arXiv:2404.14507 (2024).\n[2] Geng, Zhengyang, Ashwini Pokle, William Luo, Justin Lin, and J. Zico Kolter. \"Consistency Models Made Easy.\" arXiv preprint arXiv:2406.14548 (2024)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024see,\ntitle={See Further When Clear: Adaptive Generative Modeling with Curriculum Consistency Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xVOMtecrAS},\nnote={under review}\n}" }, "abstract": { "value": "Significant advances have been made in the sampling efficiency of diffusion models, driven by Consistency Distillation (CD), which trains a student model to mimic the output of a teacher model at an earlier timestep. However, we found that the learning complexity of the student model varies significantly across different timesteps, leading to suboptimal performance in consistency models.\nTo address this issue, we propose the Curriculum Consistency Model (CCM), which stabilizes and balances the learning complexity across timesteps. We define the distillation process as a curriculum and introduce Peak Signal-to-Noise Ratio (PSNR) as a metric to quantify the difficulty of each step in this curriculum.\nBy incorporating adversarial losses, our method achieves competitive single-step sampling Fréchet Inception Distance (FID) scores of 1.64 on CIFAR-10 and 2.18 on ImageNet 64x64.\nMoreover, our approach generalizes well to both Flow Matching models and diffusion models. We have extended our method to large-scale text-to-image models, including Stable Diffusion XL and Stable Diffusion 3." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "adaptive curriculum learning", "noise schedule", "flow matching", "consistency models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/af3a26bd58d23df6e2c815987a84d1eb38cb0273.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/3f839082fdfa91e0d0d5e40e4b79d1aff0b579d8.pdf" }, "title": { "value": "See Further When Clear: Adaptive Generative Modeling with Curriculum Consistency Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xVU6rY37X9
Partial Channel Dependence with Channel Masks for Time Series Foundation Models
main
Active
Time Series;Foundation Model;Channel Dependence;Transformer
learning on time series and dynamical systems
3;3;5;5;6
5;3;4;4;3
2;1;2;3;2
2;2;2;2;3
2;2;3;3;3
4.4
3.8
2
2.2
2.6
-0.356348
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weaknesses. Some other (related) questions are as follows.\n1. In the experiments involving TimeSiam, which encoder was used? The TimeSiam paper proposes both PatchTST (CI properties) and iTransformer (CD properties) as encoders. Since they have different characteristics, specifying the encoder used is essential. Additionally, since TimeSiam is utilized for classification tasks in your paper, it should be categorized appropriately, and the performance of CM + TimeSiam in classification tasks should also be evaluated.\n2. The CI framework does not attend between channels but attends with timestamps or patches in each channel. Then, how is the formulation of A in equation (1) justified as the identity matrix in equation (1) indicates the channel relationship? For example, how does PatchTST match to this formulation?\n3. Is the CD ratio consistent across different models for the same dataset? To establish the CD ratio as a reliable metric for measuring dataset CD, it should be verified whether CD ratios computed using different models (e.g., iTransformer vs. UniTS) yield consistent results.\n4. Since the CM has been validated on datasets with significant CD, it would be beneficial to test its performance on synthetic datasets with uncorrelated channels to verify that CM yields a low CD ratio and does not introduce unnecessary dependencies.\n5. Time series data often exhibit non-stationarity, which can affect correlation measures. How does your method handle non-stationary data, and does the CM adjust for changes in channel dependencies over time?\n6. Have you considered other measures that can capture nonlinear or more complex dependencies between channels, such as mutual information? This could potentially enhance the CM's ability to model complex inter-channel relationships." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper critiques the limitations of using correlation coefficients for measuring inter-channel relationships and proposes a new way to measure channel dependence through the CD ratio. The CM introduces very few parameters (α and β from domain parameters) yet effectively learns the implicit inter-channel relationships, leading to performance improvements in time series models.\n\n2. The authors validate their approach through various experiments, including few-shot and zero-shot settings, and provide thorough ablation studies and analyses to demonstrate the effectiveness and suitability of the CM structure and its relationship with CD." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the limitation that time series models often consider only explicit heterogeneity among datasets, such as varying sequence lengths and numbers of channels, while overlooking implicit heterogeneity like channel dependencies within the data. The authors propose a module called the Channel Mask (CM) to enable models to reflect channel dependence (CD) by adjusting the degree of CD based on dataset characteristics, achieving Partial Channel Dependence (PCD). By integrating CM into existing time series models, they demonstrate improved performance across various time series tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks theoretical grounding or in-depth analysis to explain why the proposed method leads to performance improvements. A deeper understanding of the underlying mechanisms would clarify and strengthen the contribution.\n1.1. For example, how can (or how should) we define the concept of “implicit heterogeneity”? Why do we need this concept? While there are studies on channel dependence in time series machine learning, is the concept in this work different from existing studies? Furthermore, how is the rigorous definition of implicit heterogeneity connected to the proposed CM method? What is the rationale behind the use of correlation information between different real-world time series datasets in different domains to achieve better performance?\n1.2. For another example, while the use of correlation matrices primarily captures linear relationships between channels, the application of a sigmoid function in the CM introduces nonlinearity to the model. The authors mention that static correlations (global CD) are reflected through the CM, while dynamic, local correlations are captured by the attention mechanism. This design may help address some aspects of nonlinearity and temporal variation in channel dependencies. However, it remains a question whether this approach is fully sufficient to reflect the complex and changing characteristics inherent in time series data. A clear explanation of the rationale behind how the CM models these dynamic changes, or further investigation into its effectiveness in this regard, would strengthen our understanding. These considerations might be related to the rigorous definition of implicit heterogeneity in time series.\n1.3. Meanwhile, the authors state that “However, most previous works have focused on the model architecture to either capture or disregard CD, often overlooking the potential differences in CD across datasets.” However, they do not show specific theoretical analysis results on why and how existing studies on the model architecture are limited in capturing the differences in CD across datasets.\n\n2. The technical contribution is also limited.\n2.1. Following the above comment, the claim that existing CD models overlook differences in CD between datasets may not be fully substantiated. Since models like Crossformer, iTransformer, and TimeSiam learn attention patterns specific to each dataset, it's unclear whether they truly neglect dataset-specific CD differences.\n2.2. While the authors introduce PCD as an intermediary concept between channel independence (CI) and channel dependence (CD), the structure of CM, which uses a channel correlation matrix, cannot be applied to CI models. 
The study applies CM to CD models (iTransformer, UniTS) and shows performance improvements but does not verify whether applying CM to CI models can enhance performance. Thus, the proposed PCD cannot be extended to CI settings, limiting its utility in models that assume channel independence.\n2.3. The CM seems to be applicable only to Transformer models that apply attention along the channel axis and cannot be applied to models that apply attention along the time axis. For instance, existing studies like ST-MEM [1] and UniTST [2], which learn CD by applying attention along the time axis after channel flattening, cannot utilize CM. This limitation reduces the general applicability of the proposed method to other architectures.\n- [1] Na, Y., Park, M., Tae, Y., & Joo, S. (2024). Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram. arXiv preprint arXiv:2402.09450.\n- [2] Liu, J., Liu, C., Woo, G., Wang, Y., Hooi, B., Xiong, C., & Sahoo, D. (2024). UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting. arXiv preprint arXiv:2406.04975.\n\n2.4. The CD ratio proposed in the study is calculated after combining CM with the time series model and training, and its value may vary depending on the model used. The paper does not provide sufficient evidence to demonstrate that the CD ratio is consistent across different models for the same dataset, raising concerns about its adequacy as a metric for measuring channel dependence.\n2.5. Meanwhile, although the authors started their argument from the emergence of time series foundation models (TSFMs), they do not provide sufficient validation of this work for different TSFMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can you provide comparisons of your method against the same architecture with full channel dependence and full channel independence?\n- What theoretical guarantees or analysis can you provide to show when partial dependence would outperform the extreme cases?\n- How does your method handle scenarios where dataset boundaries are ambiguous or when data comes from multiple unknown sources?\n- Can you quantify the computational overhead and memory requirements of your approach compared to full dependence and independence cases?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Introduces a novel methodology for handling channel relationships specifically designed for pretraining foundation models in time series\n- Proposes a systematic framework for incorporating dataset-specific channel dependencies into the pretraining process\n- Demonstrates superior performance in challenging scenarios (few-shot and zero-shot learning)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel concept called Partial Channel Dependence (PCD) to address implicit heterogeneity in time series data, specifically focusing on varying dependencies between channels. The authors propose a channel mask mechanism that combines correlation matrices (for relative dependencies) with learned domain parameters (for absolute dependencies). The approach is evaluated across multiple time series tasks (forecasting, classification, imputation, and anomaly detection) in both few-shot and zero-shot settings, demonstrating its versatility across different foundation models and single-task models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Dataset Identification Requirement**: The method requires knowing exactly which samples belong to which dataset during pretraining. This is a strong assumption that may not hold in real-world applications where data sources might be mixed or unclear.\n- **Lack of Fixed vs. Variable Dependency Analysis**: The paper assumes variable channel dependencies are necessary but doesn't justify why a simpler fixed dependency structure wouldn't work equally well. Without this comparison, the added complexity of variable dependencies might be unnecessary.\n\n- **Missing Critical Baseline Comparison**\n\n - No comparison against the same architecture with full channel dependence\n - No comparison against the same architecture with full channel independence" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tCould the authors apply a similar masking approach to CI models like PatchTST or PITS, and compare their performance across different datasets? If possible, please provide performance comparisons for the original settings w CM and w/o CM." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The method is straightforward, and the motivation is clearly articulated.\n* Extensive experiments across various scenarios validate that PCD enhances the performance of Transformer-based multivariate time series models." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Partial Channel Dependence (PCD), a method designed to capture the varying dependencies between channels across different datasets. PCD achieves this by applying channel-wise attention multiplied by the corresponding dataset mask. Experimental results demonstrate that PCD yields an average performance improvement across various time series tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The results for iTransformer and PatchTST presented in Table 3 differ significantly from those reported in the original papers. Additionally, the ETTh2, ETTm1, and ETTm2 datasets are well-known multivariate time series datasets. Could you please provide a comprehensive comparison of the same baselines on these datasets across all prediction lengths?\n* While PCD does enhance the performance of Transformer-based models for multivariate time series forecasting, I recommend including recent MLP-based and CNN-based models in the baselines, such as RLinear and ModernTCN. Additionally, GNN-based models like CrossGNN are also adept at capturing multivariate relationships. Including these would strengthen the performance comparisons.\n* The length of the input time series is an important factor influencing experimental results. Therefore, I would like to see the impact of varying sequence lengths on the results w CM and w/o CM.\n* There are several literature on channel dependency modeling, such as [1-2]. Detailed discussion is needed.\n\n[1] Zhao et al., Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators, ICLR 2024\n\n[2] Qi et al., Enhancing Multivariate Time Series Forecasting with Mutual Information-driven Cross-Variable and Temporal Modeling" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This work focuses on a highly valuable research direction and highlights the importance of Partial Channel Dependence in Time Series analysis.\n2. This work proposes a concise approach to adjust the correlation matrix obtained from prior knowledge." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces the concept of partial channel dependence (PCD) to address implicit heterogeneity in time series (TS) data. By utilizing a channel mask that incorporates a correlation matrix to encode relative dependencies between channels and domain parameters to learn dataset-specific absolute dependencies, the authors refine the correlation matrix for better channel dependency adjustments. 
The effectiveness of PCD is validated across four TS tasks—forecasting, classification, imputation, and anomaly detection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The concept of Partial Channel Dependence (PCD) is already discussed in the CCM[1] where the authors employ a clustering approach to capture latent PCD. The current content lacks a detailed comparison and discussion with this work.\n2. It is important to note that not all foundation models are based on attention mechanisms (TTM [2]), and not all methods that utilize attention mechanisms effectively capture the attention between channels like UniTS (TimesFM[3], Timer[4], MOIRAI[5], MOMENT[6]). As a plugin for foundation models, the generality of the CM method is insufficient.\n3. The paper does not specify how the correlation matrix mentioned was constructed, and references [7] and [8] primarily focus on capturing correlation relationships with lag properties. In contrast, this work does not explore lag properties but instead utilizes the complete sequences. The construction method of the correlation matrix needs to be explained in detail.\n4. Domain parameters highly correlated with the construction method of the correlation matrix, If use other methods, they may not be effective. The effectiveness of domain parameters in the experiments lacks further validation.\n5. In the experiments, there is a lack of comparison with other plugins, such as LIFT[8], and each task only validates the plugin's improvement on a few models, making its generality difficult to confirm. More importantly, a large number of foundation models have not been considered in the experiments, such as [2-6]. The existence of these issues raises serious doubts about the effectiveness and generality of the plugin. \n\n[1] From Similarity to Superiority: Channel Clustering for Time Series Forecasting.\n\n[2] Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series.\n\n[3] A decoder-only foundation model for time-series forecasting.\n\n[4] Timer: Transformers for Time Series Analysis at Scale.\n\n[5] Unified Training of Universal Time Series Forecasting Transformers.\n\n[6] MOMENT: A Family of Open Time-series Foundation Models.\n\n[7] Vcformer: Variable correlation transformer with in-herent lagged correlation for multivariate time series forecasting.\n\n[8] Rethinking channel dependence for multivariate time series forecasting:Learning from leading indicators." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In conclusion, while the paper presents an extensive array of experimental evidence, the motivation for addressing channel dependency heterogeneity is not entirely convincing in terms of its necessity for building a foundational model. 
Although channel dependencies may be relevant to specific tasks in the time-series domain (e.g., forecasting), the underlying rationale and its influence on design choices are not fully explained in the manuscript. Additionally, certain key experimental results appear inconsistent, which is problematic given the paper's emphasis on empirical performance gains over theoretical justification. Nevertheless, the proposed architecture is straightforward and appears tailored to address varying channel dependencies, and it has the potential to be impactful if the dependencies are better elucidated and some of the reported results are clarified.\n\nIf the authors address these points and resolve potential experimental issues in the main results table, I would be inclined to raise my rating." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper presents a straightforward algorithm with clear explanations.\n- The manuscript covers a broad range of experiments across various time-series tasks (anomaly detection, forecasting, imputation, etc.) in different settings (supervised, few-shot/zero-shot domain/task) with extensive analyses (e.g., robustness to missing values) that support its claimed advantages.\n- The newly introduced experiment, \"masked channel prediction,\" is particularly novel and promising, though it is only briefly discussed in the manuscript." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a time-series foundation model designed to capture varying channel dependencies across different datasets. By using a correlation matrix as a prior, the model incorporates a learnable domain parameter to construct a domain-specific channel mask, effectively capturing global channel dependencies. In conjunction with local channel dependency masks (as introduced in prior works, such as iTransformer), the proposed model accommodates diverse channel dependencies across datasets, leading to performance improvements in tasks such as forecasting, imputation, and anomaly detection (in full-shot, few-shot, and zero-shot settings)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Despite the demonstrated performance gains across multiple tasks and settings, some of the motivation and design choices remain insufficiently addressed in the manuscript. Additionally, certain reported experimental results differ from what appears in the paper, and key experimental details are not fully explained.\n\nFirst, I would appreciate clarification from the authors regarding the principles underlying channel dependency in the proposed time-series foundation model:\n\n- Why is it crucial to account for varying channel dependencies? To what extent is this heterogeneity in channel dependency necessary for constructing a robust foundation model for time-series data? Prior research has highlighted its importance for forecasting tasks both empirically [1][2] and theoretically [3], yet the rationale for its role in the foundation model is not fully studied. What is the anticipated impact of the time-series foundation model, and how does it address channel dependencies? 
(Simply stating \"inherent heterogeneity\" may be insufficient in this context.)\n- The global mask encapsulates correlation across various domains during training, which is then reused during testing without further adjustment. Is it sufficient to rely on the global correlation matrix alone? Since local correlations may vary over time, the correlation matrix at test time might differ significantly from that during training. How do you ensure its stability under these conditions?\n- Is correlation an appropriate metric for capturing channel dependencies? Some studies in forecasting suggest a \"causal relationship\" between channels, while others tackle spurious correlations (where channels appear correlated but are not causally related).\n\nFurthermore, several design choices would benefit from clarification:\n\n- What motivated the choice to use domain-specific global attention while sharing local attention across multiple domains? What specific roles do global and local attention play in the model's functioning?\n- Is the \"pair-wise dot product\" between global and local attention sufficient to achieve the intended effect? Under extreme test-time scenarios where variables v1 and v2 exhibit low global correlation (approaching zero) but high local correlation, this design might yield low attention scores, potentially failing to capture abrupt correlation increases under certain test-time conditions.\n\nIn addition, some minor experimental inconsistencies and essential experimental details require clarification:\n\n- Some scores of baselines appear lower than what has been reported in the original paper. In Table 3, the MSE/MAE score on iTransformer is way higher than it has been reported (Appendix F, Table 8 in [4]) Have these results been reproduced, and if so, what could account for the discrepancies?\n- How is the global correlation matrix defined? Does it only consider the correlation matrix over the training period?\n- In zero-shot experiments on new datasets, how is the domain parameter handled? As the domain parameter cannot be directly learned for unseen domains, is it substituted with that of a similar domain?\n\n[1] Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators (ICLR 2024) \n\n[2] Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series (arXiv 2024) \n\n[3] Time-Series Forecasting for Out-of-Distribution Generalization Using Invariant Learning (ICML 2024)\n\n[4] ITRANSFORMER: INVERTED TRANSFORMERS ARE EFFECTIVE FOR TIME SERIES FORECASTING (ICLR 2024)" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce the concept of partial channel dependence (PCD) to partially adjust the channel dependence (CD) captured by the model through the proposed channel mask (CM), which contains dataset-specific information." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024partial,\ntitle={Partial Channel Dependence with Channel Masks for Time Series Foundation Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xVU6rY37X9},\nnote={under review}\n}" }, "abstract": { "value": "Recent advancements in foundation models have been successfully extended to the time series (TS) domain, facilitated by the emergence of large-scale TS datasets. 
However, previous efforts have primarily focused on designing model architectures to address explicit heterogeneity among datasets such as various numbers of channels, while often overlooking implicit heterogeneity such as varying dependencies between channels. In this work, we introduce the concept of partial channel dependence (PCD), which enables a more sophisticated adjustment of channel dependencies based on dataset-specific information. To achieve PCD, we propose a channel mask that captures the relationships between channels within a dataset using two key components: 1) a correlation matrix that encodes relative dependencies between channels, and 2) domain parameters that learn the absolute dependencies specific to each dataset, refining the correlation matrix. We validate the effectiveness of PCD across four tasks in TS including forecasting, classification, imputation, and anomaly detection, under diverse settings, including few-shot and zero-shot scenarios with both TS foundation models and single-task models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time Series", "Foundation Model", "Channel Dependence", "Transformer" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3536f3f5f0c48d18414fd9b73b6cc97c02ffa3b9.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/147c0c2589209dfe9e6c47d6775bd84dbdacb2e8.zip" }, "title": { "value": "Partial Channel Dependence with Channel Masks for Time Series Foundation Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xVefsBbG2O
Diffusion Models are Evolutionary Algorithms
main
Active
Machine learning;evolutionary computation;Evolutionary Algorithms;Diffusion Models;Optimization
generative models
3;3;6;8
4;4;3;4
2;2;3;3
2;2;3;2
2;2;3;3
5
3.75
2.5
2.25
2.5
-0.272166
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the text is not mention what g could be or some options. It could be like the utility function used in Natural Evolutionary Strategies? because it seems to assign more weight/importance to high ranking solutions.\n\nI think it would be very interesting in a future work to explore more the latent diffusion version of the algorithm, and techniques to understand which parameters could be ignored or which parameters should be perturbed together." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "By connecting methods from different fields the work proposed an interesting new evolutionary algorithm that can be further improved using variations and techniques found in diffusion models.\n\nThe text is very clear and the math derivation from diffusion to an EA algorithm is easy to follow. Figure 1 really helps to visualize how the population of the diffusion EA spreads to the higher fitness regions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors try to connect Diffusion Models and Evolutionary computation by arguing that both processes do iterative refinements through an update rule plus some perturbation. In evolutionary algorithms the update comes in the form of natural selection, in diffusion is the denoising phase. The perturbation corresponds to mutation for evolution, and the diffusion phase for diffusion models. \n\nFrom the above connection an evolutionary algorithm that performs in its iteration a diffusion process is proposed. Instead of aiming to recover some distribution of the data, the goal is to turn the random initial population points towards a distribution centered around the optimized function optimum." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Maybe I missed it but there is no discussion on how to choose the mapping to a density function g() or which g() was used for the experiments. It seems is important for the search to work properly to have a mapping that makes it clear which points should be paid more attention by assigning to them bigger weights. Maybe there is some connection to common selection strategies in evolutionary algorithms that could be discussed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "As mentioned in the appendix, different Alpha and noise schedule settings are tested, what about the detailed experimental results? Is there any analysis about the results of different settings, which could be useful for the users?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written and easy to follow. The proposed method Diffusion Evolution is well-motivated, with clear explanation and illustration." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new evolutionary algorithm inspired by diffusion models. It views the evolution processes as the denoising process of diffusion models, and designs an evolutionary algorithm named Diffusion Evolution, which is based on the denoising framework of the famous diffusion model DDIM. During evolution, it takes each individual in the population as a solution in the denoising process, and updates it with similar updating rules in DDIM. To facilitate better performance, it further follows previous works to optimize in the latent space. Experiments on several benchmark functions and a simple cart-pole controlling problem show that compared with classic baseline methods include CMA-ES, OpenES and PEPG, the proposed Diffusion Evolution can obtain more diverse solutions with good fitness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "As there has been a variety of evolutionary algorithms with various inspirations, the most important issue when proposing a new method should be clarifying its strength compared with previous methods, from the perspectives of theoretical analysis and extensive experiments. However, the proposed Diffusion Evolution in this paper is lack of theoretical analysis. Meanwhile, the experiments are much too simple. For example, only five synthetic functions and a simple cart-pole controlling problem are included. As the diversity of solutions seems to be the strength of the proposed Diffusion Evolution, except for methods like CMA-ES, which are not specified for diversity, comparison with other kinds of evolutionary algorithms like Quality-Diversity (QD) are necessary." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "In the experiments section, the authors select three different evolutionary strategies for performance comparison. However, the rationale for choosing these specific methods is not clearly articulated. 
I would appreciate more insight into this selection process, particularly since each of these methods was introduced over five years ago." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea of regarding the diffusion model as an evolutionary algorithm is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript introduces an innovative perspective by interpreting the diffusion model as an evolutionary algorithm, highlighting the mathematical similarities between the diffusion and evolutionary processes. It proposes a Diffusion Evolution Method that employs iterative denoising to heuristically optimize solutions within the parameter space, drawing an analogy between the generative process of the diffusion model and the selection and mutation mechanisms in biological evolution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The mathematical demonstration process in this manuscript is insufficient. The authors conceptualize the evolution process as a transformation of the probability density function, deriving the diffusion evolution algorithm from the Bayesian formula. However, this derivation overlooks key complexities inherent in evolutionary dynamics, such as genetic drift and gene recombination. Additionally, the application of the Bayesian formula assumes conditional independence, which may not hold in the context of evolutionary processes due to the interactions and competition among individuals. Furthermore, the manuscript does not provide a clear explanation of how to effectively map a complex fitness function into a probability density function, which is essential for the practical implementation of the proposed model.\n2. Since the author claims that the diffusion model functions as an evolutionary algorithm, the experimental section must reflect this comparison appropriately. Specifically, in scenarios involving multiple tasks and comparisons, aspects such as speed, performance metrics (including best, worst, median, and average results), and the stability of performance should be examined. Additionally, the model's excellence should be illustrated through the mean and variance of the results. Currently, the scope of experiments conducted is insufficient and does not adequately support the claims made in the manuscript.\n3. There are several grammatical errors and unclear expressions throughout the article. For example, the definitions of certain terms lack clarity, the derivation process for the formulas is inadequately explained, and the labeling of the charts is inaccurate.\n4. The authors categorize diffusion models as evolutionary algorithms on the basis that both methods perform distribution transformation. However, this classification may be overly broad. For instance, semantic segmentation models also involve transforming distributions—from real images to pixel-level segmentation results. Should we, therefore, classify these segmentation models as evolutionary algorithms as well? This comparison seems inaccurate. A more robust argument would establish that the Markov process within a non-equilibrium thermodynamics framework (diffusion) functions as an unconstrained parameter optimization technique (evolution). 
If this argument cannot be substantiated, I recommend revising the title to better reflect the content.\n5. The manuscript presents two primary sets of experiments: multi-objective evolution and latent space diffusion evolution. However, the multi-objective evolution experiments are limited to a two-dimensional parameter space and a simplistic fitness function, which fails to demonstrate the proposed approach's efficacy in high-dimensional parameter spaces or with more complex fitness functions. Similarly, the latent space diffusion evolution experiments focus solely on the CartPole task, lacking validation across a broader range of reinforcement learning tasks. Furthermore, the experimental results do not include any statistical significance tests, making it challenging to determine whether the observed improvements in algorithm performance are statistically significant. This omission significantly undermines the credibility of the findings presented." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The work potentially contributes new insights and methods for both the diffusion model and evolutionary algorithm communities, even if the practical impact in specific real-world applications remains to be validated. Can the authors address this?\n\nCan statistical tests be added?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The core idea of treating diffusion models as evolutionary algorithms is innovative. It extends diffusion models to broader applications by framing them as tools for evolutionary tasks, potentially contributing new knowledge to both the evolutionary biology and AI communities.\n\nTechnically: The paper provides thorough mathematical grounding for the equivalence between diffusion and evolution, specifically explaining diffusion as a probabilistic denoising process analogous to evolutionary mechanisms like mutation and selection.\n\nExperiments: Comprehensive experiments benchmark Diffusion Evolution against traditional algorithms (e.g., CMA-ES, PEPG) across various fitness landscapes, demonstrating its strength in maintaining diversity and achieving multiple optima.\n\nApplications in High-Dimensional Problems: The adaptation of Diffusion Evolution to high-dimensional tasks via latent space diffusion showcases its practical viability in reinforcement learning contexts, with empirical results suggesting its potential in complex environments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the theoretical and practical parallels between diffusion models and evolutionary algorithms, proposing the \"Diffusion Evolution\" approach, which adapts diffusion models for evolutionary tasks. 
This new method, Latent Space Diffusion Evolution, enhances the ability to find diverse and optimal solutions in high-dimensional parameter spaces, particularly within reinforcement learning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Complexity of Explanation: The theoretical connections between diffusion and evolution, while compelling, are highly complex, and the paper occasionally sacrifices clarity for depth. This might limit accessibility to a broader audience.\n\nAlthough promising, the methodology may have limitations in scenarios requiring open-ended evolution, a challenge acknowledged briefly. More thorough discussion could help set realistic expectations for potential users.\n\nWhile the Diffusion Evolution algorithm demonstrates superiority in finding diverse solutions, certain comparisons (e.g., high-fitness solutions) against traditional evolutionary algorithms lack statistical significance or detailed analysis of computational cost versus benefit, particularly in high-dimensional settings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024diffusion,\ntitle={Diffusion Models are Evolutionary Algorithms},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xVefsBbG2O},\nnote={under review}\n}" }, "abstract": { "value": "In a convergence of machine learning and biology, we reveal that diffusion models are evolutionary algorithms. By considering evolution as a denoising process and reversed evolution as diffusion, we mathematically demonstrate that diffusion models inherently perform evolutionary algorithms, naturally encompassing selection, mutation, and reproductive isolation. Building on this equivalence, we propose the Diffusion Evolution method: an evolutionary algorithm utilizing iterative denoising -- as originally introduced in the context of diffusion models -- to heuristically refine solutions in parameter spaces. Unlike traditional approaches, Diffusion Evolution efficiently identifies multiple optimal solutions and outperforms prominent mainstream evolutionary algorithms. Furthermore, leveraging advanced concepts from diffusion models, namely latent space diffusion and accelerated sampling, we introduce Latent Space Diffusion Evolution, which finds solutions for evolutionary tasks in high-dimensional complex parameter space while significantly reducing computational steps. This parallel between diffusion and evolution not only bridges two different fields but also opens new avenues for mutual enhancement, raising questions about open-ended evolution and potentially utilizing non-Gaussian or discrete diffusion models in the context of Diffusion Evolution." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Machine learning", "evolutionary computation", "Evolutionary Algorithms", "Diffusion Models", "Optimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/51cbb736f950b3c3bd50484d1ceeefb5f1521ccc.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Diffusion Models are Evolutionary Algorithms" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xVw8YNEtH3
Reset Method based on the Theory of Manifold Optimization on Real Manifolds
main
Active
Manifold Optimization;Real Manifolds;Method;Deep Learning.
optimization
1;3;5
4;5;3
1;2;3
1;2;2
1;1;2
3
4
2
1.666667
1.333333
-0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "N/A" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper seems to have been significantly altered in its formatting or generated by an automated program. Specifically:\n1. Figures 1, 2 have captions with a notably smaller font size. On the other hand, there is a lot of extra spacing in the paper that could have been utilized: Figures 1, 2 use only 1/3 of the width, as do Tables 1-9; Lines 83-90 are empty sections. Table 1 is not even referenced in the text.\n2. The citations are weirdly repeated numerous times: in Lines 50-75, notably, Hu et al 2020 has been cited 5 times. This happens multiple times throughout the paper: at lines 137-141, 145-148, 185-194, to name a few.\n3. Even if one ignores all these unusual things, the numbers reported in the tables look indistinguishable from the alternatives." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "See summary" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Minor issues: \n1. Line 152, manifold $M$ not $\mathcal{M}$.\n2. $J$ is the loss function, $f$ is the objective function.\n3. Line 218 $x$ vs $x_{i-1}$, which comes from lazy typing that puts all of $grad$, $f$ and $x_{i-1}$ inside the command mathrm. \n4. Again, inconsistent notations: retraction map $R$ and $\mathcal{R}$\n5. $\phi^{\lambda_i}$: $\lambda_i$ is the power???\n6. No definition of $C_{i+1}$; it seems that the reader needs to obtain it from the similar formula for $C_{i-1}$.\n7. The Barzilai-Borwein method is introduced with the notation $\omega_{i-1}$, $\zeta_{i_1}$ and $\zeta_{i_2}$ without explanation. \n8. Algorithm 1: Compute $\zeta_i$ according to equation (7), but equation (7) is an inequality.\n9. Theorem 4.2, no definition of $x^{\star}_{i+1}$.\n10. Line 249: $C_{i-1}$ is a convex combination of $C_{i-1}$ and $f(x_{i-1})$?"
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper proposes a method for optimizing on real manifolds. \nSome theory is provided, with support from empirical results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper provides an analysis of optimization on manifolds. It proposes a Reset method to improve convergence and model stability when compared with other mentioned methods. More particularly, Section 4 recalls important tools for Riemannian manifolds and gives a summary of some gradient descent methods. Section 4.2 presents the Reset method, which contains three update steps, going from $x_{i-1}$ to $x_{i+2}$. There are two adjusting steps using the gradient direction and the function $B_{x_i}$. Theorem 4.1 proves that the accumulation points are also stationary points. Theorem 4.2 gives upper bounds on the difference between consecutive gradient updates using SGD, Adam and AdamW. Experiments are carried out on CIFAR-10 and CIFAR-100 for an image generation task and a cluster contrast task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The insight/motivation of each mentioned/proposed method presented throughout the paper is not clear. The first instance is the derivations in inequalities (6) and (7). There is no picture or figure to clearly illustrate the advantage of the method. I could only imagine that the first-order Taylor approximation is not good enough, so a better one could be obtained via Armijo search when certain conditions are satisfied. That goes through an interpolation between some bounds, expecting that the interpolation will help to achieve a better bound. The work also mentions the Barzilai-Borwein method, but there is no explanation of the reason for computing the correlation between two vectors. \n\nExperiment results: For the image generation task, in Table 2, when adding your method with different types of manifolds (in fact it is not clear to me why we have different types of manifolds here), the proposed method performs best with \"spv\", but does not work better with \"e\", \"fr\", \"o\", etc. In Table 3, there are mixed performances among those methods when comparing with each other, except for \"spv\". Is there any explanation for the performance of each method?\n\nFor the image generation task, only numbers are reported and no pictures are shown. Since the average precision is already high, is the improvement noticeable in the pictures?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "+ What is the precise definition of $B_{x_i}(x_i)$, including its inputs, outputs, its purpose, and the motivation for introducing this function?\n\n+ What is the usage of the $\zeta$'s?
Seems like they have never been used in the pseudocode or the algorithm in Eq. (5).\n\n**References:**\n\n[1] Brendan O’donoghue and Emmanuel Candes. Adaptive restart for accelerated gradient schemes. Foundations of computational mathematics, 15:715–732, 2015.\n\n[2] Bonnabel, S. (2013). Stochastic Gradient Descent on Riemannian Manifolds. IEEE Transactions on Automatic Control, 58(9), 2217-2229.\n\n[3] Yun, J., & Yang, E. (2023). Riemannian SAM: Sharpness-Aware Minimization on Riemannian Manifolds. In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The restart method was first introduced by O'donoghue & Candes [1] in the Euclidean settings. This paper suggests an approach to extend this method to the Riemannian setting. If revised properly, this could be an interesting perspective for improving generic optimizers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel Riemannian optimizer that incorporates an additional \"reset\" step to adjust the learning rate." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Presentation**: Firstly, several serious representation issues make the paper incoherent and difficult to follow, obscuring the key messages and contributions. Key issues include:\n\n+ *Related Works and Preliminary Sections*: The related works and preliminary sections are placed in the appendix. To improve clarity, many sections could be condensed or moved to the appendix, creating space for these key areas—for example, explanations of SGD, Adam, AdamW, and most experimental results in the tables.\n\n+ *Main Algorithm Presentation*: The algorithm presentation has gaps in clarity and completeness. For instance, variables such as $ \\zeta_{i_1} $ and $ \\zeta_{i_2} $ appear in the pseudocode but are unused in subsequent steps. In Eq. (5), the process of obtaining $ \\alpha_{i+1} $ is unclear. Additionally, the line \"Set $ R_{x_{i+1}}(-\\alpha_{i+1} \\nabla f(x_{i+1})) \\leftarrow B_{x_i}(x_i) $\" is confusing, as we cannot \"set the retraction\" to a specific value.\n\n+ *Function $ B_{x_i}(x_i) $*: The definition and role of $ B_{x_i}(x_i) $ are unclear and potentially redundant. What are its inputs and outputs? Although it’s described as a \"step size correction function,\" suggesting it outputs the step size, in Eq. (5), it produces $ x_{i+1} $, implying it represents the next model. Furthermore, it is stated that $ B_{x_i} $ is selected from SGD, Adam, or AdamW—all Euclidean optimizers—suggesting $ x_{i+1} $ might not lie on the manifold, making it impossible to compute $ x_{i+2} $ directly from $ x_{i+1} $. Additionally, the use of $ B_{x_i}(x_i) $ as notation is confusing since it appears to take only a single input $ x_i $. The reasoning for adding a correction step with operator $B $, which is supposed to be a primary contribution, is also not discussed.\n\n+ *Experimental Results*: The experimental results were provided on many real manifolds. However, it is unclear if the improvements are attributed to the Reset method or the manifolds. In particular, it would be more fair to compare the proposed method with other Riemannian optimizers, such as RSGD. 
\n\n+ *Minor Formatting Issues*: Multiple minor issues are present, such as inconsistent font sizes in table captions, unusual word choices (e.g., “contenting” in the pseudocode), inconsistent font styles (e.g., $x_i$ in the text but not italicized in Eq. (5)), table formatting inconsistencies, and citation format errors (e.g., \\cite{} vs. \\citep{}).\n\n**Soundness:** In both theory and experiments, it is critical to compare the Reset method with other Riemannian optimizers, such as RSGD [2] or Riemannian-SAM [3], in addition to Euclidean baselines like SGD, Adam, and AdamW.\n\nThe theoretical contributions are relatively limited. Theorem 4.2 only demonstrates the gradient deviation of the Reset method when compared with SGD, Adam, and AdamW, and it includes an unavoidable term $\\epsilon_0^2 $, indicating a potentially unfavorable trait of the Reset method. Additionally, comparing the Reset method, a Riemannian optimizer, to Euclidean methods may be unfair from a theoretical perspective.\n\nThe experimental results are also not particularly insightful. Specifically, all results on the CIFAR-10, CIFAR-100, STL-10, and SVHN datasets achieve accuracies of at least 98%, making it challenging to discern performance differences. On the Market-1501 and DukeMTMC-reID datasets, variations of the Reset method applied to the same base optimizer yield considerable performance differences, suggesting that the method may be sensitive to variant choices, making tuning more challenging.\n\n**Contribution:** To my understanding, the restart method was proposed by O'donoghue & Candes [1]. This work seems to be a trivial extension of this work to the Riemannian setting, which makes its contributions limited. Moreover, the motivation and intuition of the proposed algorithm are unclear, mostly due to the incohesive presentation." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Based on the theory of real surface optimization, we propose a new optimization method, named the Reset Method." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024reset,\ntitle={Reset Method based on the Theory of Manifold Optimization on Real Manifolds},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xVw8YNEtH3},\nnote={under review}\n}" }, "abstract": { "value": "Manifold optimization is prominent in the fields of applied mathematics, statistics, machine learning, and in particular, deep learning. By leveraging the intrinsic geometric properties of manifolds, constrained optimization problems can be transformed into unconstrained optimization problems on certain manifolds. An innovative method, Reset Method, is introduced that combines manifold optimization and standard methods (SGD, Adam and AdamW), aiming to enhance the improvement of precision. The efficacy of our proposed method is corroborated by extensive deep learning experiments, providing visible higher precision." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Manifold Optimization", "Real Manifolds", "Method", "Deep Learning." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/915ece546feb0284b8d4c61dd77d3e759323fd5a.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Reset Method based on the Theory of Manifold Optimization on Real Manifolds" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xW4J2QlqRx
Context Matters: Leveraging Contextual Features for Time Series Forecasting
main
Active
Time series forecasting;Contextual features;Predictive modeling
learning on time series and dynamical systems
3;5;5;5
4;3;5;4
2;3;3;3
2;2;1;1
2;3;3;2
4.5
4
2.75
1.5
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The writing is easy to follow" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a work on aggregating contextual information into time series forecasting. Specifically, the authors propose to use a universal context encoder to encode the contextual information as an embedding and boost the time series forecaster with this encoding. Experimental results suggest that the contextual information boosts the performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The contribution is limited. This paper mainly discusses contextual information in time series forecasting. However, it is well known that contextual information is helpful for time series. What really matters is how to make use of it. In this paper, the authors list three reasons why it is hard to make use of the information, such as multi-modality and non-uniformity across domains. However, when it comes to actually making use of the context, the authors simply concatenate the categorical variables with the continuous variables. This design is unrelated to the challenges mentioned. What if some out-of-domain dataset contains variables that are unseen in the training data? Also, the design is not well justified. Why not align the metadata as one modality (like text) and then convert it to embeddings with one shared encoder? Do we need to align the timely metadata (like news with timestamps) with the specific timestamp before forwarding them all into the cross attention?\n\n2. The theoretical analysis is useless but takes 1.5 pages. All the theoretical results are trivial corollaries of conclusions from introductory undergraduate-level textbooks on information theory and machine learning. I suspect that the authors are just trying to decorate their paper with some theoretical analysis.\n\n3. Experimental results are not comprehensive enough. We would prefer experiments on more datasets (like ETT) and baselines (like TimesNet).\n\n4. The authors list 4 reasons why they prefer fine-tuning in Sec 5.2, but there is a lack of empirical support. I did not see training curves that reflect the unstable training of all-from-scratch. \n\n5. The bitcoin-news dataset description is too short. This could be the most insightful part about how to collect contextual data, but there are no details. How and where did you get the data? Are the data filtered? Is the collection based on time or keywords? So many details are missing."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The presentation and problem setup are clear and easy to follow. However, I have two major concerns regarding the theoretical analysis of the benefits of adding exogenous variables and the overall technical contribution of the paper.\n1. By definition, mutual information is non-negative. Even in the worst-case scenario where A and C are completely independent, the increase in mutual information is 0. A higher mutual information value does not necessarily translate to higher accuracy or better performance. Noisy or even misleading data can also increase mutual information as long as some degree of dependency exists. Could the authors elaborate on this issue?\n2. Adding exogenous variables to time series forecasting models is not a novel concept. It is a natural extension of general time series forecasting models and is a dominant approach for work in some domains, such as financial market predictions. For example, older transformer-based models like Autoformer can be adapted to include exogenous variables and are often used as baselines in other papers. Moreover, newer transformer-based models, such as Timexer, also support this approach natively. There are also attempts to use large language models (LLMs) to incorporate exogenous market information for stock movement prediction, such as Plutos. Therefore, I believe the overall contribution of the work is somewhat weak, as it does not introduce a new concept.\n3. It is expected that the authors should at least compare the proposed framework with other models that can accept the same input. Otherwise, it is difficult to justify the performance of the proposed framework." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The presentation is clear.\n2. The problem studied is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a framework designed to incorporate exogenous variables into time series forecasting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The contribution is limited.\n2. There is a lack of comparison with SOTA models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to \"Strengths And Weaknesses\"." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The starting point of this article is good. Exogenous contextual features can indeed serve as key auxiliary information to influence time series forecasting.\n2. The paper is mostly well-presented and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article proposes ContextFormer, a novel plug-and-play framework that utilizes cross-attention blocks to integrate multimodal metadata, including categorical, continuous, time-varying contexts, and even textual information, into any existing context-agnostic base forecaster. The author selects two SOTA models, PatchTST and iTransformer, as the base forecasters and validates the ContextFormer framework on seven real-world datasets spanning energy, traffic, environmental, and financial domains. Experimental results confirm the effectiveness of the framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The introduction summarizes three challenges in incorporating metadata into forecasting models, but this article does not provide a detailed explanation of why ContextFormer can solve these problems. For example, the first challenge mentions the importance of aligning time series history and multimodal metadata. However, there is a lack of evidence on how ContextFormer ensures that the two have been aligned, the alignment effect, and the impact of the aligned representation on forecasting results. Although the effectiveness of ContextFormer is reflected in the final MSE/MAE metrics, I mainly focus on these intermediate results and suggest that the author supplement these explanations.\n\n2. In the related works part, the author lists some methods for forecasting with covariates. As far as I know, these also include methods such as TFT, TSMixer, and TimeXer, but this article does not compare with these methods. It is suggested that the author supplement these comparison experiments.\nSome references:\nTFT: https://doi.org/10.1016/j.ijforecast.2021.03.012,\nTSMixer: https://arxiv.org/pdf/2303.06053,\nTimeXer: https://arxiv.org/pdf/2402.19072.\n\n3. In the Table 2 caption, it is mentioned that \"The best results for each base model in each row are highlighted in bold.\". However, the results of the Retail dataset (horizon=96, MSE metric) do not match this statement. It is recommended to revise the wording.\n\n4. Doubts about lines 491-492: \n 1) What is the explicit motivation for comparing with Chronos, and is the comparison fair? \n 2) Are the results in Table 2 sufficient to support this conclusion? I agree with the example provided by the author (486~487), but for other datasets, I have the following questions: (a) Is it still necessary to compare the context-aware model with Chronos for the two datasets of Air quality and Electricity, as the context-agnostic model is already better than Chronos? (b) For the Retail and Bitcoin datasets, Chronos performs the best. I hope the author can provide detailed explanations to alleviate my concerns."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The authors can refer to the weakness listed above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. As a plug-in solution, ContextFormer can be integrated with various predictive backbones for a diverse array of applications. \n2. The two-stage fine-tuning methodology guarantees that the lower bound of ContextFormer is at least equivalent to that of context-agnostic approaches.\n3. The extensive experiments look hopeful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces ContextFormer, a method designed to incorporate multimodal contextual information into existing time series forecasting models. Traditional models typically rely solely on historical data, neglecting external influences such as news, weather, and market trends that can significantly impact predictions. ContextFormer effectively addresses this limitation by integrating diverse types of metadata—including categorical, continuous, time-varying, and textual information—using cross-attention mechanisms. The experiments demonstrate that ContextFormer can enhance forecasting performance by up to 30% compared to state-of-the-art models across various datasets in fields such as energy, traffic, environment, and finance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Inadequate Presentation.** The organization of the paper lacks rationality. Excessive emphasis is placed on the importance of external context, which is well-known. The methods section is overly succinct. The descriptions of Metadata Embedding and Temporal Embedding are overly concise, leaving the application dimensions and the output shape ambiguous. Furthermore, the embedded interaction in the Information Fusion component is similarly unclear, too. Its applicability to both a single embedding for one variable (iTransformer and PatchTS) and a single embedding for one timestamp (Informer and Autoformer) across two distinct architectures remains uncertain. In summary, the workflow of the algorithm is perplexing.\n\n2. **Limited Innovativeness.** Although the author presents a case in the introduction regarding the influence of news on stock prices, the treatment of this unstructured external text information is only mentioned in Appendix C.7 cursory and lacks corresponding experimental results on BITCOIN-NEWS. The models and data discussed throughout the paper primarily rely on structured auxiliary information, despite incorporating both continuous and categorical variables. 
The prior research has focused on integrating structured external information to enhance prediction accuracy, including exogenous variables [1] and timestamps [2]. The paper's core contribution remains ambiguous. Moreover, The three challenges outlined in the introduction are unpersuasive.\n\n3. **Unrigorous Experiments.** As stated in Weaknesses 2, there has been some prior work aimed at integrating structured external information to enhance prediction accuracy. This paper should be compared with these frameworks that utilize external information, rather than solely with models that lack auxiliary information.\n\n**References**\n\n[1] 2024, TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables\n\n[2] 2024, Rethinking the Power of Timestamps for Robust Time Series Forecasting: A Global-Local Fusion Perspective" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024context,\ntitle={Context Matters: Leveraging Contextual Features for Time Series Forecasting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xW4J2QlqRx},\nnote={under review}\n}" }, "abstract": { "value": "Time series forecasts are often influenced by exogenous contextual features in addition to their corresponding history. For example, in financial settings, it is hard to accurately predict a stock price without considering public sentiments and policy decisions in the form of news articles, tweets, etc. Though this is common knowledge, the current state-of-the-art (SOTA) forecasting models fail to incorporate such contextual information, owing to its heterogeneity and multimodal nature. To address this, we introduce ContextFormer, a novel plug-and-play method to surgically integrate multimodal contextual information into existing pre-trained forecasting models. ContextFormer effectively distills forecast-specific information from rich multimodal contexts, including categorical, continuous, time-varying, and even textual information, to significantly enhance the performance of existing base forecasters. ContextFormer outperforms SOTA forecasting models by up to 30% on a range of real-world datasets spanning energy, traffic, environmental, and financial domains." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Time series forecasting", "Contextual features", "Predictive modeling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/670e33ae98e00b23c0a071c911c512036a765b10.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Context Matters: Leveraging Contextual Features for Time Series Forecasting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xXTkbTBmqq
OLMoE: Open Mixture-of-Experts Language Models
main
Active
large language models;mixture-of-experts;open-source
foundation or frontier models, including LLMs
8;8;10
2;3;5
4;4;4
3;4;4
4;4;4
8.666667
3.333333
4
3.666667
4
0.944911
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See Above." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The writing in this paper is clear and easy to follow.\n- The paper advances MoE research by providing a fully open-sourced, state-of-the-art MoE architecture, which is beneficial for the research community.\n- The paper presents a thorough analysis of key design choices in MoE, offering valuable guidance on building high-performance MoE models.\n- The analysis is insightful, with discussions on phenomena such as router saturation and expert co-activation providing fresh perspectives and meaningful implications for the field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces OLMoE, a fully open, state-of-the-art language model built on a sparse Mixture-of-Experts (MoE) architecture. The authors conducted extensive experiments to validate the effectiveness of the proposed method, including evaluations after pre-training and adaptation phases. Additionally, they explored key design choices within the MoE framework, examining factors like expert granularity, routing strategies. Their analyses provided valuable insights into MoE, including router saturation, expert co-activation, and domain/vocabulary specialization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have a question regarding the experimental results: were the model parameters quoted directly from the original paper for the results shown in Table 2? For instance, in the original paper, OpenMOE’s activation parameter count is reported as 2.1B, whereas Table 2 shows an activation parameter count of 2.9B for OpenMOE. I recommend that the authors carefully verify the accuracy of these values." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The work is well presented and possible suggestions for improvements are addressed in the future work section." 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1) Strong empirical results with state-of-the-art performance for 1B active parameters.\n2) Good exploration of the MoE design space, which forms a good guide for MoE model design.\n3) Novel analysis of routing behavior in MoE models during training and inference.\n4) This is the only MoE model where the model weights, code, data and checkpoints are openly available, and thus the work is entirely reproducible." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a mixture-of-experts (MoE) LLM called OLMoE that has 1B active parameters and 7B total parameters. The OLMoE model Pareto-dominates many state-of-the-art models in the performance vs. active parameters space. The paper explores and presents insights on what is optimal in the design space of MoE parameters and presents an analysis of routing behavior in MoEs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Other state-of-the-art MoE models in related works are not in exactly the same parameter count configuration (1B/7B), so an exact comparison cannot be made to this model's performance.\n2) Most of the design choices and training choices are based on prior work, and the novelty is more in the design space exploration and analysis of routing behavior." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1) What do you think about the necessity of expert parallelism? This model used dropless MoE, so it will be unbalanced anyway when using expert parallelism during training and inference. Without expert parallelism, it is still okay when the model is small. However, if we are aiming at a very large model, which has very large experts even if we are using the \"fine-grained MoE\", would expert parallelism still be required? So how can we handle the token drop problem in this case?" }, "rating": { "value": 10 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1) There is no doubt that training MoE LLMs is challenging. This work offers a couple of important takeaways about how to train good MoE LLMs, which is very helpful to the community.\n2) The presentation is very clear. For instance, Table 1 delivers many key designs clearly early in the paper.\n3) The model performance is good as well. As shown in Tables 2 and 3, the model performs competitively with dense open models and partially open models (e.g. Qwen, Deepseek).\n4) The analysis in Section 5 is informative, which greatly helps readers and authors understand how the model works.
This can also greatly speedup the growth of the community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work is devoted to sharing the insights, data, and checkpoints of a series of MoE LLMs. The model achieved promising results on various benchmarks as a fully open model family." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Although the model has been relatively large, it is still much smaller than the SoTA MoE LLMs. I understand it is hard to get enough training resource for a fully open projects." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A state-of-the-art Mixture-of-Experts LLM with 1B active and 7B total parameters trained for 5T tokens that is 100% open-source" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024olmoe,\ntitle={{OLM}oE: Open Mixture-of-Experts Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xXTkbTBmqq},\nnote={under review}\n}" }, "abstract": { "value": "We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present novel findings on MoE training, define and analyze new routing properties showing high specialization in our model, and open-source all our work: model weights, training data, code, and logs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language models", "mixture-of-experts", "open-source" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/46823ebaa15350504c20f9133a77c76ccdca4f0b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "OLMoE: Open Mixture-of-Experts Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xYquBPHppn
A VARIATIONAL FRAMEWORK FOR GRAPH GENERATION WITH FINE-GRAINED TOPOLOGICAL CONTROL
main
Active
Controlled Graph Generation
generative models
3;3;5;6
5;4;5;2
3;1;2;3
2;2;2;3
3;2;2;3
4.25
4
2.25
2.25
2.5
-0.628539
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The method utilizes a conditional VAE that integrates information from both the adjacency matrix and attribute vectors during training, resulting in more precise graph generation.\n\n2. The scalability of the method is quite good. The method can be used to generate large-scale graphs, which is quite competitive compared to other auto-regression models. \n\n3. The paper is well-written and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces CGRAPHGEN, a novel framework for controlled graph generation that allows for fine-grained control over graph topological properties. The authors propose a conditional variational autoencoder (VAE) that, unlike previous approaches, utilizes both the graph adjacency matrix and attribute vectors during training for improved decoder tuning and relies only on attributes during inference. This enables CGRAPHGEN to generate graphs that closely match the specified structural attributes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The baselines and the datasets are quite simple. The authors are recommended to compare with more recent graph conditional generation methods. e.g. [1] [2] [3]\n\n[1] Yang, Carl, et al. \"Conditional structure generation through graph variational generative adversarial nets.\" Advances in neural information processing systems 32 (2019).\n[2] Ommi, Yassaman, et al. \"Ccgg: A deep autoregressive model for class-conditional graph generation.\" Companion Proceedings of the Web Conference 2022. 2022.\n[3] Mo, Zhanfeng, Tianze Luo, and Sinno Jialin Pan. \"Graph principal flow network for conditional graph generation.\" Proceedings of the ACM on Web Conference 2024. 2024.\n\n2. it is unclear how the hyper-parameters are defined. In Figure 5, the performance seems quite stable for different gamma, e.g. there's a drop when gamma = 0.8 on arxiv dataset. \"When γ increases and more information is drawn from the prior pθ, the generation error increases.\" is not always true.\n\n3. No theoretical analysis of how the proposed method can reduce the generation error better than other baseline methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is d(Z_c) in Eq. (6)? There is no explanation for this notation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. Mixing the attributes and the graph representation in the latent space of a VAE is somewhat new for controlled generation. \n2. The results for controlled graph generation seem good with regard to attribute alignment." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on controlled graph generation, which generates graphs satisfying specific topological attributes. It introduces a new scheduling technique, MIXTURE-SCHEDULER, to combine desired attributes with adjacency matrix representations during training for precise graph generation, and it then uses only attributes during inference. Experiments demonstrate that the generated graphs have better-aligned attributes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The biggest concern is that the paper lacks a rigorous derivation of the VAE model and learning objective. For most VAEs, we generally start from the maximization of the log likelihood and use variational inference to factorize it. However, the formulations in this paper are very heuristic. We do not know whether the mixing of attributes and graph representation is valid. Mixing the prior with the posterior also looks weird to me. What I would expect is to start from something like $P(G|c) = \int_{Z_G, Z_c} P(G|Z_G, Z_c, c)P(Z_G|\theta, c) P(Z_c|c) dZ_G dZ_c$.\n2. It seems the graph encoder/decoder can only deal with the adjacency matrix, but what about graphs with node features?\n3. The evaluation only measures the attributes, but the validity of the graph in many domains is also important (e.g. for molecules)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes a new setting for the controlled graph generation task, which is highlighted by the injection of fine-grained topological control.\n2. The proposed method seems technically sound to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a conditional variational autoencoder for graph generation with fine-grained topological control.
The proposed model incorporates a scheduling technique to integrate representations from both the adjacency matrix and attribute distribution to enable fine-grained control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The number of baseline models compared in the experiments appears to be limited.\n2. I'm not sure if it's reasonable to use only the MAD metric to evaluate the generation results based on various topological attributes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Is there any code for the proposed model?\n- Have you tried other more complex neural network architecture?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The model offers flexibility in controlling multiple structural properties (e.g., graph density, connectivity, clustering coefficient), enabling accurate graph generation across various domains.\n- The mixture-scheduler seems to be novel and it smoothly integrates prior and posterior distributions, improving the quality and stability of generated graphs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes CGRAPHGEN, a novel conditional variational autoencoder framework for generating graphs with fine-grained control over topological attributes. The framework introduces a MIXTURE-SCHEDULER, a scheduling technique to combine structural and attribute-based latent representations. Experiments on multiple datasets show that CGRAPHGEN outperforms baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed model doesn't seem to be much of an improvement compared to GraphVAE-like models. The condition architecture is very common in generative models, and feature/attribute based conditional graph generation seems to be a common trick in most methods. Therefore, I think the proposed model may lack enough novelty.\n- Lack of baselines. I have noticed this paper include the diffusion-based model (EDGE), why not other SOTA graph generative models like DruM, DIGress and so on. For graph generation, I think it is more convincing to compare these models or at least other vae-based models. As far as I know, I believe these models can also incorporate the attribute feature to achieve conditional graph generation.\n- For the mixture-scheduler part, I don't really understand the meaning of regarding the time $t$ as epoch in the training stage. 
From Figure 4(b), it seems there is no clear effect on whatever the $\\beta(t)$ is.\n- In your ablation, I find the experiments with masked only one attribute, is there any flexibility attributes choice?\n\n[1] Efficient and degree-guided graph generation via discrete diffusion modeling. \n[2] Graph generation with diffusion mixture.\n[3] Discrete denoising diffusion for graph generation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A {VARIATIONAL} {FRAMEWORK} {FOR} {GRAPH} {GENERATION} {WITH} {FINE}-{GRAINED} {TOPOLOGICAL} {CONTROL}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xYquBPHppn},\nnote={under review}\n}" }, "abstract": { "value": "Controlled graph generation is the process of generating graphs that satisfy specific topological properties (or attributes). Fine-grained control over graph properties allows for customizing generated graphs to precise specifications, which is essential for understanding and modeling complex networks. Existing approaches can only satisfy a few topological properties such as number of nodes or edges in output graphs. This paper introduces CGRAPHGEN, a novel conditional variational autoencoder that, unlike existing approaches, uses graph adjacency matrix during training, along with the desired graph properties, for improved decoder tuning and precise graph generation, while relying only on attributes during inference. In addition, CGRAPHGEN implements an effective scheduling technique to integrate representations from both adjacency matrix and attribute distributions for precise control. Experiments on five real-world datasets show the efficacy of CGRAPHGEN compared to baselines, which we attribute to its use of adjacency matrix during training and effective integration of representations, which aligns graphs and their attributes in the latent space effectively and results in better control." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Controlled Graph Generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/73a1124bb7f1385fb0c3a2a12f0bddcf2738a898.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "A VARIATIONAL FRAMEWORK FOR GRAPH GENERATION WITH FINE-GRAINED TOPOLOGICAL CONTROL" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xYzOkOGD96
Grounded Video Caption Generation
main
Withdraw
vision-language models;VLM;LLM;video grounding;automatic annotation;pseudo-labeling
datasets and benchmarks
Evangelos Kazakos;Cordelia Schmid;Josef Sivic
~Evangelos_Kazakos2;~Cordelia_Schmid1;~Josef_Sivic1
3;3;3;3;5;6
3;3;4;5;4;4
2;2;2;2;2;2
3;2;2;2;2;3
2;2;1;2;4;3
3.833333
3.833333
2
2.333333
2.333333
0.166574
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Clarification on Novelty: Needs a clearer explanation of differences from previous work, specifically the distinct contributions beyond those in “Grounded Video Description” (Zhou et al., 2019)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Novel Task, Dataset, and Model: The paper introduces a new task with a manually annotated evaluation set and a large-scale automatically generated dataset.\nDataset build up: Uses LLMs to generate consistent video-level captions from frame-level annotations, improving temporal consistency.\nModel that support vlm to output bbox for video: VideoGLaMM incorporates spatio-temporal adapters and temporal objectness, supporting consistent tracking of objects in video." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a new task (GROC), a manually annotated test dataset, and an automatically generated dataset. It introduces VideoGLaMM, a model designed to generate captions and track object bounding boxes over time. Key features include spatio-temporal adapters, a bounding box decoder, and a temporal objectness head, all aimed at bridging multimodal understanding for video and language." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited Model Comparison: It would be better for VideoGLaMM to include comparisons against additional models like InternVL and Qwen-VL, which also produce bounding boxes in text format, to strengthen its evaluation. 
Since the paper aims to establish a new task, a more comprehensive benchmark would enhance the reliability of its proposed contributions.\n\nDiscussion Scope: Additional discussion is needed on alternative methods for VLM bounding box outputs, specifically addressing why GLaMM was prioritized over other methods like [1,2]. This would help clarify its selection rationale within the broader context of existing approaches.\n\n[1] Bannur, Shruthi, et al. \"MAIRA-2: Grounded Radiology Report Generation.\" arXiv preprint arXiv:2406.04449 (2024).\n[2] Zou, Ke, et al. \"MedRG: Medical Report Grounding with Multi-modal Large Language Model.\" arXiv preprint arXiv:2404.06798 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "1. The GROC evaluation dataset is labeled by human annotators, and the authors only mention their protocols for human annotators in Appendix F. The authors may need to disclose more details regarding the labeling template, the platform they use, the salary of annotators, and other factors to ensure that the labeling process is a fully responsible practice.\n\n2. The collected and planned-to-be-released training dataset is generated fully automatically with the help of LLMs. Safety and bias issues need to be addressed during the generation process, which are ignored in this paper." }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety", "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Most of my major concerns can be seen in the weakness section. Here I leave a few more:\n1. Line 229-230 Sec. ?? -> Reference is missing.\n2. The baselines are insufficient. One simple strategy is to directly feed the frames with grounding boxes into a video LLM. This is a widely used method called visual prompting, which can be tested in this training dataset without any additional structure or model tuning.\n3. The ablation study is not convincing to me. As shown in Figure 3, I think most of the audience would like to know whether including grounded information really helps with video captioning; that is, a comparison of removing and keeping the grounding video encoder." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The released training and evaluation datasets could be useful in the field of video captioning after addressing the necessary ethics concerns." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission gives a task, dataset, and model for grounded video caption generation. 
The main contributions are:\n\n* They define the task as GROunded Video Caption Generation (GROC) and create a manually annotated test dataset specifically for this task.\n \n* They design an automatic annotation pipeline that uses an existing model for grounding and an LLM for frame-level and further video-level captioning. They apply this approach to a randomly selected subset of HowTo100M and generate the final HowToGround training dataset.\n\n* They propose a VideoGLaMM model, trained on the collected HowToGround dataset, which achieves the best performance on grounded video caption generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I would say the most important weakness of this paper is **over-claiming: it fails to discuss and compare previously published research and shows no respect for previous work**. Let's start from the beginning:\n\n1. The authors claim that they propose a new task called grounded video caption generation (GROC). However, I don’t think so at all. Please refer to paper [1] titled \"Grounded Video Description.\" I think that, in vision-and-language research, most researchers use the terms \"video description\" and \"video captioning\" interchangeably (the slight difference may be that \"video description\" sometimes includes more details). Paper [1] is the first to collect grounded video-text datasets and propose a model that uses grounding information to generate better video descriptions. They didn’t even claim that this task is a so-called new task because it is viewed as using alternative guidance to improve video description generation, not as a new task. One quick and intuitive way to compare these two is to check Figure 1 in this submission and Figure 2 in paper [1]. Thus, **I completely reject the claim that this task is newly proposed by the authors of this submission**.\n\n2. The more interesting thing is that the authors actually cited paper [1] but **did NOT mention it at all in the main pages**. Instead, they \"secretly\" placed it in the Appendix. In my opinion, a paper with so much overlap should be carefully discussed in the related work section, in section 3 where the authors claim their novelty, and also at the end of section 5, where the authors describe how they construct their datasets.\n\n3. As for the experiments, the proposed model is also lacking in novelty. I list more relevant papers here, such as [2, 3, 4], which are also not properly cited or discussed in this submission. Thus, **I also clearly reject the claim in line 055-056 that \"At the same time, producing natural language video descriptions where the described objects are spatio-temporally grounded with bounding boxes in videos has received much less attention\"**. From my understanding, the authors replace the previous LSTM networks with large language models. For sure, LLMs can get better results than LSTMs. However, these methods and models are not compared in their experiments. I don't believe that, in the era of LLMs, simply replacing the language module in previous methods with an LLM can be considered a big innovation in the overall model structure.\n\nIn summary, I clearly think this submission needs major revisions and a complete reframing in order to meet the publication standards of a conference like ICLR.\n\nReferences:\n\n[1] Grounded Video Description, Zhou et al. CVPR 2019.\n\n[2] Learning to Generate Grounded Visual Captions without Localization Supervision, Ma et al. 
ECCV 2020.\n\n[3] Attend and Interact: Higher-Order Object Interactions for Video Understanding, Ma et al, CVPR 2018.\n\n[4] Comprehensive Visual Grounding for Video Description, Jiang et al. AAAI 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* L140 mentions \"video selection where ‘interesting’ videos are selected.\" What criteria define a video as \"interesting\"? Please provide details.\n* L262 mentions \"After rejecting videos for which our proposed automatic annotation method failed…\" How were these videos rejected? Was it done manually or through an automatic process? More details are needed.\n* Although the HowToGround dataset is generated from aggregated image-level annotations, this approach may struggle with capturing interactions that require understanding the temporal dimension, such as determining whether someone is walking forwards or backwards. Is this a limitation of your dataset and the resulting model? Quantitative evaluation of actions requiring temporal interactions could strengthen the paper.\n* How does the VideoGLaMM generalize to other datasets, apart from the HowToCaption dataset?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper tackles the important task of grounded video captioning, which has numerous applications in downstream tasks such as video retrieval and open-world object tracking in videos.\n* The writing is clear.\n* The paper introduces the HowToGround dataset for grounded video captioning and provides a high-quality, manually annotated test set for the task.\n* The paper proposes an architecture for grounded video captioning, building upon a pre-trained grounded image captioning model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the task of grounded video captioning, where, given a video, a model must generate a caption and provide bounding boxes for the objects appearing in the caption. The authors introduce two datasets for this task: a small, manually annotated test set, and a larger, automatically annotated training set. Finally, they train a grounded video captioning model on this dataset and evaluate it against two baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The first paragraph of the introduction feels more like a related work section, making it hard to follow. I would suggest starting by clearly stating the task you aim to solve and providing a high-level summary of the existing work.\n* The method for automatically creating the dataset uses SVOs, which can lead to important details being omitted from captions, such as the color of the shirt in Figure 2. This omission could hinder downstream tasks. 
For instance, a model aiming to find videos of orange shirts might miss this information, which is otherwise captured in current captioning models like GlaMM.\n* The paper’s main contribution is the creation of the HowToGround dataset. However, the choices made during dataset creation are not justified, nor does the paper include any ablation study of the creation steps. Including such a study, showing alternative choices for each step, would greatly strengthen the paper. For example, how about using SAM-2 or an open-world object detector to identify object locations? How would VideoGLaMM perform if step 3 were omitted from the dataset creation process? How well does the resulting VideoGLaMM perform for different training-set sizes?\n\nTypos/Errors:\n* L230 - Section reference is broken." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- An automated grounded video annotation pipeline was designed, significantly reducing annotation costs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a new task: grounded video caption generation, which aims to generate captions for videos while also providing bounding boxes for the objects mentioned in the captions. \nTo achieve this, the authors designed an automatic grounded video annotation pipeline. Based on the dataset constructed using this pipeline, the paper trains a model called VideoGLaMM, which performs well on the grounded video caption generation task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors claim to have introduced a new task: grounded video caption generation. However, there is prior research on this task, such as PG-Video-LLaVA [1]. How do the authors justify their assertion of novelty?\n\n[1] PG-Video-LLaVA: Pixel Grounding Large Video-Language Models.\n\n- For videos that contain two or more events, how should the automated annotation pipeline be applied? For example, consider a scenario where a man picks up a knife to chop vegetables and then puts the vegetables into a pot. If the knife and pot appear simultaneously in the frame but the action hasn't progressed to the pot yet, how can we avoid generating a bounding box for the pot prematurely?\n\n- The case studies (e.g., Figure 4) presented by the authors seem overly simplistic. Can VideoGLaMM be effectively applied to videos that involve multiple events?\n\n- There are very few comparative methods included in the study. The authors should compare against more methods, such as the integration of video captioning models with object detection models, and PG-Video-LLaVA." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes an interesting and reasonable task of generating captions for videos while simultaneously grounding the corresponding objects.\n\nThe paper presents a technically sound framework to achieve GROC." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a task called grounded video caption generation. It presents a manually annotated benchmark comprising 1,100 video clips, as well as a 50k training set created using an automatic annotation pipeline. GLaMM is adapted with video encoders to achieve GROC. Similar to GLaMM, the authors evaluate VideoGLaMM using a collection of video captioning and object detection metrics. The evaluation results indicate that the proposed method outperforms comparable approaches across various downstream tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I am primarily concerned with the automatic data generation pipeline. In this process, the authors used GLaMM to generate frame-wise captions and then converted these into Subject-Verb-Object (SVO) triplets. Since there is not alignment between generated SVOs, if there are two different instances with the same label in different frames, it is possible that the LLM might treat them as a single instance, potentially leading to inaccurate conclusions and relationships between them.\n\nSimilarly, in the tracking by language process, instances are tracking with only textual description of the phrases. This cannot guarantee 2 different instances with same label are mistakenly treated as the single instances. For example, imagining a video that there are 2 different people appearing in different frames but all drinking beers, the process is likely to link them as one person by just considering its textual description. The strategy described in Supp. is not very convincing as there can be 2 different people both with white shirts and drinking.\n\nThe process appears to capture only a portion of the video content, potentially overlooking valuable concurrent events within the video clip.\n\nI am also curious about the visual encoders used in VideoGLaMM. Is there any specific reason or necessity for using two different image backbones separately for captioning and bounding box generation?\n\nthe reference in Line 230 is missing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Did the automatic annotation process take into account the ambiguities mentioned in the weaknesses section? Are there any methods in place to reject annotations for data that the model finds ambiguous?\n\n2. Why was SAM2 not considered in the model's structural design? As far as I know, this model targets video and seems more suitable for your task.\n\n3. I hope to see a comparative analysis between the current vLLM and video grounding models in your task, including their differences in performance, efficiency, and application scenarios. This would help in better understanding the strengths and weaknesses of each model and provide guidance for future research directions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Propose a new task: grounded video caption generation, which is significant as large-scale spatio-temporal grounding of natural language is a key step in advancing fields such as human-robot interaction and embodied perception.\n2. Propose a manually verified test dataset, which is likely to further drive progress in this field.\n3. The model architecture is both innovative and intuitive, with subsequent ablation studies demonstrating the effectiveness of these design choices." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel task—grounded video caption generation—along with a newly annotated dataset, GROC. The authors introduce a language-based tracking method to create pseudo-annotations for a large-scale instructional video dataset, named, HowToGround, which is then used to train a newly designed model. Finally, they compare the model's performance against still-image grounding baselines, reporting state-of-the-art results in grounded video caption generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There are concerns regarding the technical soundness of the AUTOMATIC ANNOTATION METHOD section. I understand that the grounded video caption generation task requires high precision; however, due to the richness and ambiguity of language, I am skeptical about the consistency of the language-based tracking method in object localization. For example, while the case of the woman drinking a beverage in the authors' paper is unambiguous, many instances will likely present significant ambiguity. If, in the third frame of Figure 2, the image caption still states 'holding a glass,' how can one determine whether 'beverage' and 'glass' refer to the same object?\n\n2. The writing quality is poor, containing a lot of redundancy, and some phrases read awkwardly and lack fluency. 
For instance, the phrase 'Differently than the image-based task, where bounding boxes...' could be improved. Additionally, some sentence structures need adjustment, and paragraph transitions require attention, particularly when describing the GROC construction process. There are also typos to address, such as in Section ?? on page 5. Overall, the writing and illustrations need further polishing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@misc{\nkazakos2024grounded,\ntitle={Grounded Video Caption Generation},\nauthor={Evangelos Kazakos and Cordelia Schmid and Josef Sivic},\nyear={2024},\nurl={https://openreview.net/forum?id=xYzOkOGD96}\n}" }, "abstract": { "value": "We propose a new task, dataset and model for grounded video caption generation. This task unifies captioning and object grounding in video, where the objects in the caption are grounded in the video via temporally consistent bounding boxes. We introduce the following contributions. First, we present a task definition and a manually annotated test dataset for this task, referred to as GROunded Video Caption Generation (GROC). Second, we introduce a large-scale automatic annotation method leveraging an existing model for grounded still image captioning together with an LLM for summarising frame-level captions into temporally consistent captions in video. \nFurthermore, we prompt the LLM to track by language – classifying noun phrases from the frame-level captions into noun phrases of the video-level generated caption. We apply this approach to videos from the HowTo100M dataset, which results in a new large-scale training dataset, called HowToGround, with automatically annotated captions and spatio-temporally consistent bounding boxes with coherent natural language labels. Third, we introduce a new grounded video caption generation model, called VideoGLaMM, and train the model on the new automatically annotated HowToGround dataset. Finally, results of our VideoGLaMM model set the state of the art for the new task of grounded video caption generation. We perform extensive ablations and demonstrate the importance of key technical contributions of our model." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Evangelos_Kazakos2", "~Cordelia_Schmid1", "~Josef_Sivic1" ] }, "authors": { "value": [ "Evangelos Kazakos", "Cordelia Schmid", "Josef Sivic" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "vision-language models", "VLM", "LLM", "video grounding", "automatic annotation", "pseudo-labeling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "kazakos|grounded_video_caption_generation" }, "pdf": { "value": "/pdf/7c363c0c76057bc0f14cf8ad8d057a1f6e8bc84b.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Grounded Video Caption Generation" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xZ2lTzfyFv
Improving Generalization with Flat Hilbert Bayesian Inference
main
Active
Bayesian Inference;Sharpness-aware Minimization
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;6;8;8
4;3;3;3
2;2;3;3
2;2;3;3
3;3;3;3
6.25
3.25
2.5
2.5
3
-0.916949
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The method is tested only in fine-tuning regime. Is there any reason for that? \nTesting the proposed models trained from scratch would strengthen the empirical significance of the proposed method. I am not sure which datasets would be suitable, but datasets like ImageNet and its variants (ImageNet-A [1], and ImageNet-C[2]), and scientific machine learning benchmarks might be good datasets as uncertainty prediction would be critical. \n\n[1] https://arxiv.org/abs/1907.07174\n[2] https://arxiv.org/abs/1903.12261" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work extends generalization bounds from finite-dimensional parameter spaces to functional spaces, which leads to FHBI, a theoretically grounded Bayesian inference algorithm.\n- The performance improvement by the proposed method is validated through extensive comparisons with previous works." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents an algorithm called Flat Hilbert Bayesian Inference (FHBI), which incorporates Sharpness-aware minimization (SAM) technique in Bayesian inference. Specifically, together with SAM, the authors perform Stein variational gradient descent (SVGD) as the dynamics of the model parameters, which leads the particles (models) to flat and diverse modes. FHBI is tested on VTAB-1K, a collection of various classification tasks, and achieves better performance on average among different Bayesian NN methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I don't see any particular weaknesses in this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Have you compared SVGD+SAM to SGLD+SAM [1, 2]? \n\n\n[1] Van-Anh Nguyen, Tung-Long Vuong, Hoang Phan, Thanh-Toan Do, Dinh Phung, and Trung Le. Flat seeking bayesian neural networks. Advances in Neural Information Processing Systems, 2023.\n\n[2] Yang, Xiulong, Qing Su, and Shihao Ji. \"Towards Bridging the Performance Gaps of Joint Energy-based Models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Incorporating SAM into SVGD can somehow improve the stability. The idea generally makes sense. \n2. The paper is clearly written overall, and it is easy to capture the motivation.\n3. The empirical results are superior and tested on various datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper combines sharpness aware minimization (SAM) with SVGD in RKHS as the Flat Hilbert Bayesian Inference (FHBI). It extends the proof of [1] to infinite-dimensional functional space.\nEmpirical validations show better performance of FHBI compared to previous Bayesian inference approaches on various datasets.\n\n[1] Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Even though I agree that the proposed method may be effective in some conditions, I am not convinced by the theory in this paper. Following are my concerns. \n1. Notations. \n * Empirical posterior should be $\\mathbb{P}\\_{\\theta|S}$ not $\\mathbb{P}\\_S$, which is the data distribution. And the prior distribution should be $\\mathbb{P}_{\\theta}$.\n * $p(\\theta|S)\\propto p(\\theta)p(S|\\theta) = p(\\theta)\\prod_{i=1}^n p(y_i, x_i|\\theta)$\n The exponential average loss should be related to a specific distribution. I do not know how you can obtain this form directly. \n * What is \"general loss\"? I have never heard this terminology. $\\mathcal{L}_{\\mathcal{D}}(\\theta)$ is often called population loss/ true error/ generalization error in different papers or learning theory books. \n2. For Theorem 2, what is the exact definition of $h(1/\\rho^2)$? It should be clearly presented in the main paper. In your derivation, $\\mathcal{O}(\\rho^2)$ is simply ignored, which means $\\rho$ should be very small, which in turn gives a large $h(1/\\rho^2)$.\n3. You are actually deriving an upper bound of the true risk, which is the empirical risk plus some complexity term. However, the sample complexity is not directly reflected in the bounds presented in the main paper. You claim that you can approximate the true posterior $p(\\theta|\\mathcal{D})$. This is impossible if you only have limited samples $n$. \n\n4. Experiments\n\t* How does data augmentation affect the empirical results? Have you used it in all baselines or just your method? \n\t* As shown in Figure 3, the runtime of the FHBI increases fast w.r.t the number of particles. Compared to SVGD, the margin also grows with the number of particles. Where is the computation bottleneck? Can you find any way to reduce it? \n\n### Minor: \n1. Is the template used correctly? There is no line number. \n2. Typo. In the related work section, sharness -> sharpness." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In my understanding, Theorem1 expresses the tradeoff relation between (1) worst-case empirical loss that scales with the size of the neighborhood (radius of the step size) (2) wellness of the upperbound approximation based on the empirical loss. The reviewer is making in his mind a super-rough analogy of \"local linear approximation of a complex function\", whose worst case error scales with the size of the neighborhood, and \"functional ascending step\" is analogous to choosing the direction of worst error in this approximation. \n\nIf this analogy is correct, the neighborhood radius $\\rho$ as well as the update size $\\epsilon$ would be roughly analogous to a stepsize in Euler-Maruyama simulation of the ODE, except that in the context of FBHI the system to be simulated is infinite dimensional. Would the method improve its performance by choosing $\\rho$ and $\\epsilon$ to be small, and running the iterations for greater number of times, as in ODE simulation? Also, with this intuition it feels as if $\\rho$ shall be similar to $\\epsilon$; why are they chosen quite differently in the experimental section? \n\nWhile the reviewer is not too confident of the intuition based on this super-rough analogy, the reviewer also believes that it is crucial that the paper provides some explanation (either numerical or experimental, or at the very least, the heuristic with intuitive explanation) regarding the choice of $\\rho$ and $\\epsilon$, both for the sake of the future practical user of FBHI and the for the sake of the future schemes that will possibly branch out from FBHI.\n\n- In the similar note as above, because the \"wellness\" of RKHS method generally depends on the affinity between the choice of the kernel (and the hence the nature of the continuity of functions in the space) and the dataset. In the case of this research, the choice, I believe, is indirectly related to dataset because RKHS is a space of functions on \"parameters\". Is there any ablation study regarding this choice, or at the very least, a good heuristics that would help the user to choose appropriate Kernel in the applications? \n\n- It is explained in the paper that $\\rho=0$ would correspond to SVGD, and \"in equation\" it seems so. However, because the paper emphasizes its connection to SVGD, the reviewer wishes to see why this comes about in connection to Theorem 1 and the derivation of SVGD. \n\n- Is there comparisons against SAM in the setting of section 6.2?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This research presents a scheme that generalizes both SVGD and SAM in the context of Bayesian Inference. 
\nThe idea of FHBI itself is very clearly stated and convincing, and its efficacy is validated with ample experiments. \nThe layout of the derivation is very instructive as well. \nThe claimed sharpness advantage (that FHBI is more sharpness-aware) is also validated in experiments, empirically \nshowcasing the mechanism behind the advantage of the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Flat Hilbert Bayesian Inference (FHBI), a method to perform posterior inference by integrating along a flow of distributions that converges to the target posterior. By formulating the derivative of the flow with a pushforward function that lives in a vector-valued RKHS, the framework naturally establishes the \"flow of an infinite number of particles\", which allows one to target the \"general posterior\" as opposed to the \"empirical\" posterior in the setting of Bayesian Inference. Importantly, the method differs from Stein Variational Gradient Descent in that it derives the infinitesimal pushforward function that reduces the \"worst upper bound\" of the KL divergence in a way that resembles adversarial training.\n\nThe proposed paradigm is made implementable by the computable form of $\\lambda p(\\theta | S)$, together with an upper bound on the KL divergence to the general posterior defined in terms of the KL divergence to the empirical posterior. \nThe efficacy of the algorithm is verified through experiments, and its scaling properties are investigated and compared against SVGD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the relation of FHBI to SAM and the interpretation with spatial and angular repulsive forces is insightful, the reviewer was a little confused by the introduction and by the multiple references to SAM, which sounded as if SAM were the major idea on which FHBI is developed (which, in retrospect, is not \"directly\" so?). Meanwhile, it is true in the algorithm that when m=1, the algorithm agrees with SAM in the end. In the current presentation, the connection to SAM seems to be established a posteriori.\n\nIf the sharpness-aware philosophy is indeed the \"motivation\" of FHBI (which is, unfortunately, not yet clearly conveyed to this reviewer), the reviewer would like to see a more analytical connection to SAM's derivation. \n\n- On a similar note, as the reviewer will post in the \"Questions\" section, the reviewer feels that the supposedly important connections to SVGD and SAM are not clearly explained through the objective function and derivations (**more than $\\rho=0$ and $m=1$**)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Major:\n* Something seems amiss in going from the empirical posterior $p(\\theta|\\mathcal S)$ to what you call the general posterior $p(\\theta|\\mathcal D)$. 
Let $L_n(\\theta) = \\frac{1}{n} \\sum_{i=1}^n -\\log p(y_i|x_i,\\theta)$. Then the empirical posterior is $p(\\theta|\\mathcal S) = \\exp(-n L_n(\\theta)) p(\\theta)$. The population counterpart to this is $\\exp(-n L(\\theta)) p(\\theta)$. But this is *not* your general posterior which no longer has any dependence on sample size $n$. \n\nMinor:\n* Currently, the way the loss $\\ell$ shows up in Section 3 might give the wrong impression that it is something one can freely choose. In fact the loss has to be (proportional) to the negative log likelihood. I think you need to write out explicitly what $\\ell(f_\\theta(x),y)$ is. \n* The sentence above Eqn (5), \"In turn, the solution $\\hat f^*$ that solves the maximization problem above is given by\". Which maximization problem above? There are so many approximations here, please use \\label and \\ref to refer to exact maximization problem. (I also doubt this is a true statement, (5) is not really a solution but an approximation of a solution right?)\n* Would you consider labeling the iterative procedure right below Lemma 1 and making it painfully clear how that turns into Algorithm 1?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Originality: \nI'm uncertain about the level of originality here. Variational families within an RKHS framework have, to my knowledge, been explored previously. \n* Quality: The work appears to be of high quality. The results seem technically correct, and while I haven’t meticulously checked every mathematical detail, the derivations appear consistent with expectations.\n* Clarity: Overall, the paper is well-written. I appreciated the clear, step-by-step walkthrough of the optimization problem and the detailed exposition of various bounds and relaxations necessary to implement the proposed approach.\n* Significance: The proposed method, Flat Hilbert Bayesian Inference (FHBI), is presented as a generalization of Stein Variational Gradient Descent (SVGD) and Sharpness-Aware Minimization (SAM). The experiments demonstrate promising potential for FHBI in applications like LoRA-style fine-tuning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In traditional variational inference, the goal is to approximate an intractable posterior \\( p(\\theta | \\mathcal{S}) \\) by selecting a variational distribution \\( q \\) from a family \\( \\mathcal{Q} \\) that minimizes a divergence, often the KL divergence, resulting in the optimization problem:\n\n$$\n\\arg \\min_{q \\in \\mathcal{Q}} \\, \\text{KL}(q(\\theta) \\parallel p(\\theta | \\mathcal{S})),\n$$\n\nwhere \\( p(\\theta | \\mathcal{S}) \\), termed the \"empirical posterior\" by the authors, is the target distribution.\n\nThis paper introduces a variational family within a Reproducing Kernel Hilbert Space (RKHS) framework and further reformulates the optimization problem to focus on approximating the \"general posterior,\" \\( p(\\theta | \\mathcal{D}) \\), over the dataset \\( \\mathcal{D} \\). The optimization problem thus becomes:\n\n$$\n\\arg \\min_{f \\in \\mathcal{H}^d, \\, \\|f\\| \\leq \\epsilon} \\, \\text{KL}(q_{[I + f]}(\\theta) \\parallel p(\\theta | \\mathcal{D})),\n$$\n\nwhere \\( q_{[I + f]}(\\theta) \\) represents the transformed variational distribution. 
This reformulation leads to multiple steps of approximation, relaxation, and bounding, culminating in an iterative optimization procedure (detailed below Lemma 1).\n\nThe proposed method, Flat Hilbert Bayesian Inference (FHBI), is designed to enhance generalization in Bayesian inference by leveraging the structure of RKHS." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The general posterior defined, $p(\\theta|\\mathcal D)$, does not appear to me to be the correct theoretical counterpart to the \"empirical\" posterior. See more in questions below. \n* If the paper’s originality hinges partly on targeting the \"general posterior\" $p(\\theta|\\mathcal D)$, I have reservations about its practical benefits. In addition to my concerns about the missing sample size term, targeting this posterior seems unlikely to yield practical advantages and might even be counterproductive. The authors expend considerable effort introducing approximations to recast the optimization problem in terms of the \"empirical posterior,\" which requires potentially loose bounds. The value of this detour would be enhanced by a more thorough discussion of why these approximations are justified or necessary.\n* The paper emphasizes \"improving generalization\" as a primary benefit of FHBI, yet this claim seems tenuous without more foundational support. It's unclear what is meant by \"enhancing generalization in Bayesian inference\" mathematically, as the method doesn’t inherently introduce any features that theoretically boost generalization. Rather, it appears that FHBI, when applied to fine-tuning, yielded improved generalization performance on a benchmark dataset compared to baseline methods. This distinction between observed outcomes and the initial methodological intent would be clearer if the authors clarified that FHBI’s generalization performance was more an empirical finding than a theoretically driven design choice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024improving,\ntitle={Improving Generalization with Flat Hilbert Bayesian Inference},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xZ2lTzfyFv},\nnote={under review}\n}" }, "abstract": { "value": "We introduce Flat Hilbert Bayesian Inference (FHBI), an algorithm designed to enhance generalization in Bayesian inference. Our approach involves an iterative two-step procedure with an adversarial functional perturbation step and a functional descent step within the reproducing kernel Hilbert spaces. This methodology is supported by a theoretical analysis that extends previous findings on generalization ability from finite-dimensional Euclidean spaces to infinite-dimensional functional spaces. To evaluate the effectiveness of FHBI, we conduct comprehensive comparisons against seven baseline methods on the VTAB-1K benchmark, which encompasses 19 diverse datasets across various domains with diverse semantics. Empirical results demonstrate that FHBI consistently outperforms the baselines by notable margins, highlighting its practical efficacy. Our code is available at \\url{https://anonymous.4open.science/r/Flat-Hilbert-Variational-Inference-008F/}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Bayesian Inference", "Sharpness-aware Minimization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/07ef79c33f20124245927a1a00e4c805b551a813.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Improving Generalization with Flat Hilbert Bayesian Inference" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xaXvHdH9Y4
P-BERT: Hardware-Aware Optimization of BERT Using Evolutionary Techniques
main
Active
Model Compression;Large Language Models;Computation Complexity;BERT;Hardware-Aware
applications to computer vision, audio, language, and other modalities
3;3;3;5;5
1;5;4;3;3
3;1;2;2;2
1;1;2;2;2
3;2;2;2;3
3.8
3.2
2
1.6
2.4
-0.123091
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Fig 1, if a particular hidden state has already been marked for pruning in the previous layer, then in the current layer, the counter should simply increment by 1, and this hidden state should not be pruned again. However, in Fig 2, for layer 2, why is S_{768} still selected for pruning?\n2. In Equation 6, why does it use the number of layer i? Why does it assign greater weights to the deeper layers?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ It targets BERT optimization, which is important.\n+ It proposes a novel metric ICCR, which provides a new way of evaluating the model efficiency.\n+ The flow charts (Fig 1 and 5) and examples (Fig 2) can help understand the paper.\n+ It provides the setting of hyper-parameters and is open-sourced." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents P-BERT which is an optimization of the standard BERT model. It is developed through the integration of three model compression techniques: pruning, quantization, and knowledge distillation. It also introduces a novel metric, the Inverted Computational Complexity Ratio (ICCR), to better capture model efficiency and complexity. Experimental results demonstrate that P-BERT achieves a reduction in computational complexity of at least 60% while maintaining comparable accuracy to the baseline BERT across several natural language processing tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- P-BERT performs well in CoLA, while its performances on other tasks are less competitive.\n- The genetic algorithm is time-consuming and may get stuck in a local optimum. Therefore, giving enough reasons and motivations to choose the generic algorithm would be better.\n- Knowledge distillation can guarantee the accuracy of the model but may be time-consuming.\n- It would be better to provide more evaluations to show that the ICCR metric fits well for BERT optimization." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Is the proposed methodology of using the genetic algorithm to prune and quantize applicable to other models?\n\nIs the computational complexity metric in Eq 6 able to capture the significance of the deep layers? Could you provide some details like layerwise precision and % of parameters pruned of one of the resultant models?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The proposed technique is a generalized methodology to explore pruning and quantization strategies for a given model and task. The Pareto front and the loss curve plots are quite insightful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper combines pruning, quantization and knowledge distillation to reduce the computational complexity of the BERT models. A genetic algorithm is used to hone down on the parts of the layers to prune and the appropriate precision of the layers for quantization. Knowledge distillation is later used to transfer knowledge from the baseline BERT model. They also use Inverted Computational Complexity Ratio (ICCR) as a metric to evaluate model compression factor. Results section shows the trend of decreasing estimated inference time with increasing ICCR for four different benchmarks. There are also comparisons of the performance of P-BERT’s estimated inference time and accuracy against other BERT models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The results presented in the paper fail to emphasize the utility of this approach. Table 5 presents other models with higher complexity ratios and lower inference time than what’s best achieved by P-BERT. Even for CoLA where P-BERT achieves the best accuracy among tuned models, the compression factor doesn’t translate to reduction in inference time.\n\nIf the chosen experimental hardware setup isn’t able to leverage the pruning and quantization benefits, then a different evaluation platform or metric could be selected." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why MSE is chosen over MAE in Eq. 3 and 4? \nWhy do we need to consider L_hard, L_soft, L_hidn, and L_attn in Eq. 5? What's the significance of each? Any ablation study to compare which ones are more important?\nFigure 1, flow chart, why the sum of absolute? Any intuition or reference?\nFor the definition of K_model in Eq. 6, why do we need to include the layer index \"i\"? Does this mean that the last few layers carry more weights?\nWhy is the particular genetic algorithm chosen in Sect. 4.4? 
\nWhat challenges did you run into in Sect. 5.4 when comparing against those models?\nWould you like to consider palettization as an additional compression technique?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The use of a genetic algorithm to select compressed models based on the evaluation metric and the newly proposed inverted computation complexity ratio (ICCR) is seldom studied. It is an encouraging direction to explore.\nThe overall flow of the paper is easy to follow.\nThe comparison of the proposed P-BERT with other optimized BERT models on the selected evaluation tasks is well summarized." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the combination of weight pruning, bit quantization, and teacher-student learning via knowledge distillation in compressing BERT models. They used a genetic algorithm to determine the cluster of weights to be pruned and quantized. They proposed a new metric, the inverted computational complexity ratio, which is defined roughly as the ratio of computation required between a standard BERT and a compressed model, to capture the extent of compression. They evaluated the trade-off between compression and accuracy on several Hugging Face classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main compression techniques employed by the team are all common model compression techniques: pruning, quantization, and KD. A new genetic algorithm to search for prunable parameters alone does not seem sufficiently novel to me. The authors could conduct ablation studies to compare each of the three techniques in the P-BERT framework to help us understand what trade-offs can be made.\nThe ICCR is conveniently defined to describe the extent of compression, but it fails to capture how the compression is achieved (through pruning vs quantization, which layers, etc.). This could have hindered the authors from interpreting the inconsistencies observed in Tables 1-4.\nThe presentation of the paper is another aspect that can be improved. \n1. The references are cited in a rather unusual manner. Including the references in parentheses will help readers. \n2. Tables and figures should not be placed between the text (Fig. 1, 2, 3, 4, 7, 8; Table 5).\n3. The authors can help readers understand the results better by interpreting the results and referencing their figures and tables. For example, Section 5.2.1 only describes the outcome, but the readers are left to understand the significance of those plots. I personally did not get what the authors mean to convey here.\n4. Some hyperparameters are given (without intuition or prior knowledge, e.g. Section 5.2, 5.3). Some claims are made without reference (Section 2, Section 4.3.2).\n5. The appendix can be part of the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Is there a specific rationale for combining all three techniques (quantization, pruning, and KD) in the proposed optimization, and would focusing on a single technique be more beneficial?\n- How do the authors justify treating all layers within BERT similarly for quantization and pruning, given the distinct characteristics of attention layers and feed-forward networks (FFN)?\n- What theoretical or empirical justifications are provided for the formulation of the Inverted Computational Complexity metric, especially concerning the inclusion of layer number as a factor?\n- How does the dependence of pruning rate and number of bits on the layer type impact the reliability of the proposed metric, and why was this not addressed in subsection 4.3?\n- Have the authors conducted an ablation study or provided theoretical evidence to demonstrate the credibility of their proposed metric and justify the specific choices made in equation (6)?\n- Given the limited number of observations shown in Figures 3 and 4, how confident are the authors in the generalizability of their results, and could a larger dataset or different experimental setup change these findings?\n- What comparisons can the authors provide to validate the superiority of their metric over FLOPs-based metrics in terms of hardware efficiency, especially considering the specificity of efficiency to hardware architecture and operation types?\n- How does the proposed approach compare with the latest state-of-the-art works in BERT model optimization involving quantization, pruning, or knowledge distillation?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper tackles an interesting problem for the ML research community—optimizing large models like BERT for hardware deployment on resource-constrained devices.\n- The discussion of different optimization techniques (quantization, pruning, and KD) is quite engaging and well-explained." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of optimizing Transformer-based models like BERT for hardware deployment. The authors propose a multi-level optimization using pruning, quantization, and knowledge distillation, aiming to make the BERT model more compact while preserving high accuracy on the target task. Furthermore, the paper introduced a novel metric, namely, Inverted computational complexity, to quantify the model’s computation requirements. The optimization process is conducted via a genetic algorithm to explore a predefined set of pruning and quantization parameters for the BERT model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper needs more novelty. The proposed optimization for BERT directly applies previous and well-known techniques (i.e., quantization, pruning, and KD). The authors didn't well explain why leveraging the three techniques altogether and not focusing on exploring one technique (e.g., pruning). Overall, the paper seems more like a direct application of existing methods without further improvement. 
To enhance the paper's novelty, the authors may consider discussing in details the benefits from each optimization technique or exploring opportunities to develop a new algorithm that more effectively integrates different optimization techniques in a way tailored specifically to the BERT architecture.\n- The paper also lacks proper discussion from an architectural point of view, specifically regarding the type of layers being pruned or quantized (e.g., attention or FFN). Different from CNNs, quantization or pruning in Transformers is not straightforward and needs careful consideration, especially at the attention layer level. However, the authors consider all layers the same and didn't explain why and how quantization/pruning is applied to the BERT layers. To strengthen the paper's contribution, the authors should include a breakdown of the impact of pruning and quantization on attention layers versus feed-forward layers and then explain their rationale for treating all layers uniformly.\n- The authors posit strong claims on their proposed Inverted computational complexity metric without theoretical or empirical evidence. First, the metric is formulated as a product of the layer number, pruning rate, and number of bits, which are also the parameters being explored by the genetic algorithm. What type of information (if any) is being extracted from this product needs to be clarified. For example, what's the utility of the layer number' i' in equation (6)? Second, the pruning rate and number of bits depend on the layer's type, which hasn't been discussed in subsection 4.3. Overall, to demonstrate the utility and credibility of the proposed metric, the authors must (i) theoretically discuss the metric computation in equation (6), (ii) conduct an ablation study (with different combinations of the metric's components), (iii) compare their metric against established hardware performance indicators (e.g., latency and memory) to show its practical relevance, and (iv) Discuss how the proposed metric accounts for different layer types, given that pruning and quantization may affect them differently.\n- The discussion in 4.3.2 needs to be more convincing since the results shown in Figures 3 and 4 cannot be generalized because of the limited number of observations (scatter points). Additionally, while the authors claim their metric is better than FLOPs because quantization is not included in the latter, both metrics are not a good proxy for hardware efficiency estimation [1]. This is because efficiency is specific to the hardware architecture and type of operations. For the authors to justify their claim, an ablation study could be conducted to compare their proposed metric and a FLOPs-aware quantization (where each layer’s FLOPs is multiplied by the number of bits).\n- There’s no comparison with the latest existing works on BERT model optimization with quantization [2], pruning [3], or knowledge distillation [4]. Without a comprehensive comparison with these SOTA works it’s hard to draw any tangible conclusion on the effectiveness and novelty of the proposed approach. Overall, the paper should discuss how P-BERT differs from or improves upon [2, 3, 4]. The authors could add a comparison table that includes their method alongside SOTA approaches, highlighting key differences and improvements.\n\n**References:**\n- [1]: Dehghani, Mostafa, et al. \"The efficiency misnomer.\" arXiv preprint arXiv:2110.12894 (2021).\n- [2]: Shen, Sheng, et al. 
\"Q-bert: Hessian based ultra low precision quantization of bert.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 05. 2020.\n- [3]: Liu, Zejian, et al. \"EBERT: Efficient BERT inference with dynamic structured pruning.\" Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 2021.\n- [4]: Muhamed, Aashiq, et al. \"CTR-BERT: Cost-effective knowledge distillation for billion-parameter teacher models.\" NeurIPS Efficient Natural Language and Speech Processing Workshop. 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 1 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As mentioned in the weaknesses above, it would be helpful to clarify the following questions for the reviewers:\n- How does the proposed combination of three techniques differ from or improve upon previous approaches that have used subsets of these methods? Providing more detail on this could help highlight the originality of the work.\n- Intuition and Mathematical Justification of the Inverted Computational Complexity Ratio:\n - It would be useful to provide more discussion on the design of the K_model in the Inverted Computational Complexity Ratio. Specifically, the ratio is based on a weighted summation of j_i \\times b_i. Is there an assumption that deeper layers contribute more to the efficiency improvement metric? Any intuition or mathematically justification is welcome.\n - The ratio shows a linear correlation with the number of operations. What is the definition of the number of operations in this context? \n - If the authors intend to reflect reduction in computational cost due to low-bit quantization, it would be helpful to display the number of \"quantization bits\" in table 5. Why is this proposed ratio a better metric than using the number of parameters, FLOPs, and quantization bits to measure the efficiency of the compressed model? Adding more discussion is helpful for readers to understand the advantage of this new metric compared to existing traditional metrics. \n- In the result section, it could be helpful if the authors provide a more nuanced discussion of their model's strengths and weaknesses compared to these baselines, particularly in cases where P-BERT underperforms." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Explored the method of combing three techniques to reduce the Bert model to an efficient model.\n- Proposed a metric \"Inverted Computational Complexity Ratio\", by calculating the ratio of weighted summation of quantization bits and remaining number of values in each layer. \n- Showed complete experiments and provided ablation study in details for different ratios on multiple tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, authors proposed a method to combine multiple techniques to reduce the original Bert model to an efficient model for computational restricted hardwares and remain accuracy. The author also proposed a metric \"Inverted Computational Complexity Ratio\" to measure the efficiency of compressed Bert models. The methods shows reasonable results in accuracy and efficiency compared to baseline models, but it doesn't out perform all the baseline models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Weaknesses:\n\n- Not Sufficient Discussion of Original Innovation:\nThe proposed method is not showing enough discussion of original innovation, as it is essentially a combination of existing methods—pruning, quantization, and knowledge distillation. It would be helpful if authors could clarify the improvements of their proposed method compared to previous approaches. \n\t- The use of unstructured pruning with a genetic algorithm is a well-established technique for model pruning and has been extensively studied in prior works across multiple models. For example, see Multi-Objective Pruning for CNNs Using Genetic Algorithm (Chuanguang, et al., https://arxiv.org/pdf/1906.00399), and Pruning Decision Tree Using Genetic Algorithms(Jie Chen, et al., https://ieeexplore.ieee.org/document/5376632)\n\t- Similarly, quantization and knowledge distillation are applied in a straightforward manner. These techniques have already been proposed and thoroughly investigated for transformer-based models in earlier works, as referenced by the authors in the related works section.\n\n- Missing Design Reasoning of the Inverted Computational Complexity Ratio:\nThe proposed inverted computational complexity ratio could benefit from clearer mathematical reasoning and further justification.\n\t- The ratio is based on K_model, defined as the summation of i \\times j_i \\times b_i, where i represents the layer index. This makes the metric a weighted sum where the layer index serves as the weight. The issue with this definition is that K_model becomes disproportionately sensitive to deeper layers. For instance, Layer 11 contributes more to this metric when it is pruned or quantized, compared to other layers. It would be valuable for the authors to offer more insight and intuition into why deeper layers are weighted more heavily in their metric, and how this aligns with real-world computational overall efficiency improvements. \n\n- Performance is Not Significant Compared to Other Models:\nThe performance conclusion claimed by the authors is not sufficient enough when compared to other models.\n\t- The authors stated that their method achieved “promising results with competitive accuracy, particularly in CoLA”, but the evidence presented does not sufficiently demonstrate accuracy or efficiency advantages over competing models on multiple tasks, particularly when compared to models like TinyBERT and I-BERT. For instance, in Table 5, TinyBERT, despite having a much larger inverted computational complexity ratio, 27.1, outperforms P-BERT on accuracy in all tasks except CoLA. Moreover, I-BERT, which has a inverted computational complexity ratio of 2.9, similar level to P-Bert, still outperforms P-BERT in tasks such as MRPC, STSB, and even CoLA. It would be great to add more discussion of the performance gap when comparing to other baselines. 
Discussing both the advantages and disadvantages will offer a more balanced view and help readers better understand the specific contributions and limitations of the proposed approach.\n - The authors stated that the method is “hardware-aware optimization”, but the paper does not discuss much about the meaning of \"hardware-aware\" and how their approach is optimized for \"hardware-aware\". Assuming the authors are referring to “low-computational resource hardware,” the paper does not discuss its advantages and disadvantages compared to other baseline methods optimized for hardware. For example, one of the baseline models, I-BERT, which uses “integer-only distillation” shows advantages when deployed on hardware that supports only integer calculations. Providing similar examples of how the proposed approach is optimized for hardwares would strengthen the paper’s claims.\n\n- Typos (minor):\n - In section 5.5, authors mentioned the proposed method is not suitable for more powerful \"high-end systems, such as GPT\", which I guess should be referring to \"GPU\"." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "P-BERT combines pruning, quantization, and knowledge distillation to cut BERT's computational needs by 60%, maintaining accuracy and scores." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024pbert,\ntitle={P-{BERT}: Hardware-Aware Optimization of {BERT} Using Evolutionary Techniques},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xaXvHdH9Y4},\nnote={under review}\n}" }, "abstract": { "value": "Transformer-based models have emerged as the go-to standards in Natural Language Processing (NLP), revolutionizing the landscape of NLP applications. As complex models continue to proliferate, the need for more efficient computational processing becomes increasingly imperative. This has led to the rise of model compression techniques, implemented to target computational inefficiencies. Expounding on this, we propose Pyramid-BERT (P-BERT), the integration of three established model compression techniques to further reduce the computational inefficiency of the standard BERT models, and subsequently optimize BERT under the hardware characteristics. Specifically, the techniques employed are pruning, quantization, and knowledge distillation. The first two aforementioned correlated techniques work simultaneously to remove redundant specifications while leveraging knowledge transfer from baseline models. These techniques enable a substantial reduction in computational cost, making P-BERT highly suitable for portable, low-power devices such as cellphones, wearable devices, and smartwatches, and thus enabling hardware-friendly processing on various computing engines. Additionally, we will be proposing a new metric, the inverted computational complexity to quantify the complexity and efficacy of the model. This metric aims to more accurately capture the hardware-specific performance characteristics. Our experimental results show that P-BERT achieves a remarkable reduction of at least 60\\% in the inverted computational complexity ratio while ensuring comparable accuracy and scores across many downstream tasks compared with the baseline BERT models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Compression", "Large Language Models", "Computation Complexity", "BERT", "Hardware-Aware" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e8670a779d343f72650b9243645fc9ff04aaddfc.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/018cf7ce45b0760eac1c5bc2a869ee94fe1f92fa.zip" }, "title": { "value": "P-BERT: Hardware-Aware Optimization of BERT Using Evolutionary Techniques" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xaYlO03tIk
Stem-OB: Generalizable Visual Imitation Learning with Stem-Like Convergent Observation through Diffusion Inversion
main
Active
Robotics;Imitation Learning;Visual Imitation Learning;Robustness;Diffusion Model;Diffusion Inversion
applications to robotics, autonomy, planning
3;6;6;6
3;4;3;3
1;3;3;3
3;3;3;3
2;3;4;2
5.25
3.25
2.5
3
2.75
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to Weakness." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* This paper focuses on enhancing the robustness of visual imitation learning, addressing a practical and impactful topic. The approach holds significant potential for advancing research in the field of robotics.\n\n* The idea of using diffusion inversion to remove low-level visual variations while preserving high-level scene structures is both novel and intriguing.\n\n* The evaluations are thorough, with experiments conducted in both simulated environments and on real-world robots. The real-world experiments highlight the method's strong generalization capabilities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Stem-OB, a method that enhances visual imitation learning by using image inversion from pretrained diffusion models to reduce low-level visual variations while preserving high-level scene structure. Stem-OB creates a shared representation that is robust to various appearance changes without additional training. Empirical results on several benchmarks demonstrate the effectiveness in challenging environments with lighting and appearance changes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I found the structure of this paper somewhat difficult to follow, as certain sections lacked clarity. For instance, in the preliminaries, the definition of diffusion inversion seemed to be a combination of both forward and backward diffusion. But in Figure 2, it seems that proposed method uses the noised observations as the policy input. If the proposed method is trained on the inversion-altered space, why using the noised version of the observation as input can improve the performance? How can the authors guarantee that applying forward diffusion to the image improves generalization?\n\n* There seems to be a trade-off when choosing the inversion step, which can not be either too large or too small. Is there any explanation about this phenomenon? \n\n* The theoretical analysis section was also challenging to understand. The authors discuss a loss between two latent variables, $x_0$ and $y_0$. What is the intuition of calculating the loss of two different images? From my understanding, the attribute loss here should refer to the inversed data $\\hat{x}_0$ and its original version $x_0$. Could the authors also clarify the statement, “images with fine-grained attribute changes tend to become indistinguishable sooner than those with coarse-grained modifications under identical diffusion schedules”? 
\n\n* Section 4.1 reminded me of another study [1], which employs diffusion models to purify noise within noisy demonstrations while preserving the optimal structure. Are there any conceptual similarities between that approach and the proposed method?\n\n* Typos: Line 242: $er$f should be $erf$.\n\n[1] Imitation Learning from Purified Demonstrations, ICML 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The most significant questions are already listed under weaknesses. In addition, I would be interested to get a clarification on the following points:\n\n- line 272: How many images?\n- line 428: Can the authors explain how they get to this conclusion? The training setting performance of RO in particular looks to me like it was not trained correctly.\n- line 430: Given the result in D2B, I don't think the conclusion can be that there is always a high success rate?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The idea is, as the authors write, 'simple yet effective' and the authors manage to neatly package a quite theoretical idea and a very practical result measured in real-world robotic experiments, all in one paper\n- While I am less convinced of the theoretical grounding, the idea itself is well explained\n- The authors combine experiments on simulated datasets with actual real-world tests. Real-world tests are incredibly important for visual methods, yet in the case of robotic applications very hard to make reproducible. The combination of both is a commendable experimental design.\n- Similarly, the authors conduct real-world user studies to investigate their hypothesis that diffusion inversion progresses along semantic hierarchies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use diffusion inversion as input preprocessing for visual imitation learning methods in order to improve their generalization to visual changes between demonstration and test setup. The proposed approach is evaluated in real-world robotic experiments as well as on two simulated datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I find the derivation in Section 4.1 very hard to follow. Only the first sentence of Section 4.2 makes it clear that the whole derivation is based on the assumption that semantically similar images are closer in latent space. That is a HUGE assumption and the following experiments on intra- and inter-class confusion do not show this over multiple levels of semantic hierarchies, but only for the semantic leaves.
The introduction (lines 60-65), however, argues that, much more than leaf classes, abstract semantic hierarchies are important to generalize over perceptual differences.\nIn the empirical study on diffusion inversion, one might argue that out of the 5 empirical examples, \"bowl\" and \"cup\" are most similar. However, Table 1 shows that the latent representation of 'cup' samples is actually on average closer to any of {'drawer', 'duck', 'faucet'} than to 'bowl'. To me, this makes the assumption 'semantically similar images are closer in latent space' quite unbelievable. For sure, this assumption and therefore the formulation of semantic overlap as overlap of latent Gaussians is anything but 'intuitive' (line 270) in the presence of this data.\n- In relation to the above, I miss a clear definition of what kind of variation should be compensated through diffusion inversion. The abstract and introduction repeatedly claim that diffusion inversion can extract the 'high-level structure' of the scene, suggesting generalization over different object instances, shapes, appearances, relative placements, and robot arms. Section 4.1 considers 'variation', 'fine-grained attribute change', 'coarse-grained modifications', and 'semantic overlap' without defining any of these. The investigation in Section 4.2 is focused on variations within a semantic object class, i.e., a demonstration with one cup should be repeated with a different cup. The experiments then consider lighting changes and a limited set of object appearance changes, sometimes over multiple object instances, while locations are fully fixed. The problem is that all of this currently does not fully fit together and it would be good if the authors could define more accurately what kind of variations they expect diffusion inversion to abstract / generalize over, and then design experiments accordingly to show improvement with exactly these variations.\n- There are a couple of odd aspects about the simulation experiments that raise questions about the soundness of the results:\n - For the benchmarks from ManiSkill and MimicGen, why were not all tasks evaluated? For ManiSkill, plug-charger seems to be specifically excluded and for MimicGen 4 out of 12 tasks were picked without any explanation.\n - While I did not quickly find comparable numbers for ManiSkill, the MimicGen paper reports much higher success rates for the investigated tasks. E.g. for Threading, MimicGen reports around 19% success from just 10 videos and 98% success from 1000 videos. For the 500 videos used in the experiments here, the success is below 19% for 3 out of 4 variants, including the proposed method. Why are the achieved success rates so low? And can the proposed method actually improve anything in a more state-of-the-art setting?\n - Why is the RO baseline excluded from the simulation experiments?\n- In all experiments (real and simulated), most of the differences between methods are within the standard deviation, so it is very hard to say if any conclusion can be drawn. Why is the standard deviation so high?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The current method conducts experiments solely on Stable Diffusion. It would be valuable to understand if the conclusions drawn from Stem-OB are applicable to other generative models, such as Flux or SD 3 (which uses flow instead of diffusion). Demonstrating that Stem-OB generalizes across different models would strengthen the claim of robustness and broaden the impact of this approach beyond a single model framework." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written, with clear motivation and solid experimental validation, including both real-world and simulated experiments.\n\n2. The central idea resonates well: robotic observations often include excessive low-level details, while effective scene understanding requires capturing high-level structural information. This paper draws on an intriguing insight from Yue et al. (2024): \"Rather than uniformly removing information from different semantic hierarchies, the process brings structurally similar images closer in the early stages of inversion.\" Building on this, Stem-OB leverages this observation to project data into a space that prioritizes inherent structural features." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Stem-OB, a method that uses diffusion inversion to transform observations into a shared, robust representation. This approach enhances robustness to various appearance changes, such as shifts in visual patterns and lighting conditions. During inference, Stem-OB approximates the partial inversion process to maintain efficient inference speed." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Stem-OB enhances the generalization capabilities of robotic systems by using diffusion inversion to map observations into a shared representation. However, another promising line of research—self-supervised representation learning—also aims to unify raw observations into a common representation space, reducing low-level visual discrepancies. I suggest that the authors consider benchmarking Stem-OB against these approaches, particularly methods that leverage self-supervised learning (SSL) for robotic representations, to offer a more comprehensive comparison." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above. I currently give a weak accept to the paper but am inclined to vote for acceptance should my points above be taken into consideration during the rebuttal." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper relies on a central observation from a previous paper (Yue et al. 2024), which found that during the inversion process of a diffusion model, fine-grained (or high-frequency) details of an image are destroyed first before the low-level semantic concepts. This paper builds on this basic concept and proposes a new preprocessing step that uses this theoretical property of diffusion models as a preprocessing step, which, in the process, improves the generalization of the IL algorithm. \n* The technical principle seems well established at this point, but this paper takes these theoretical findings and applies them to a new problem. While the whole approach is very simple from a technical standpoint, the paper re-introduces the necessary background in section 2 before developing the reader's intuition as to why this preprocessing might work in section 4.2 through theoretical explanation and small experiments. I believe that this paper warrants publication because it shows that diffusion model inversion can be useful for additional tasks. \n* The presentation of the paper is clear and easy to follow overall. \n* The results nicely highlight the drawbacks of previous methods and that the proposed method offers much better generalization capabilities. \n* I appreciated the detailed appendix, with theoretical derivations, additional details for reproducibility, and additional experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Visual imitation learning is a method for robots to learn tasks through visual human demonstrations. While multiple methods exist for preprocessing visual data that increase the generalizability of VIL, these methods are only partially efficient. To address the generalization issue, this paper proposes to use the inversion process of a pretrained diffusion model as a preprocessing step to increase the generalizability of visual imitation learning methods. The proposed method relies on the intuitive insight that images with only small perturbations between them are closer in latent space (and therefore become indistinguishable under diffusion faster), than data that exhibits large differences. The method is tested on real and synthetic data and shows that the proposed preprocessing significantly increases the models generalization capabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe that the paper's technical novelty is small, as the main concept has already been introduced in other papers. That being said, I also believe that the paper is worth publishing, as it adds to the growing body of literature showing the usefulness of diffusion model inversion for various tasks, and the empirical validation is well executed.\n\nI have additional suggestions to improve the paper's readability and make it easier for the reader to understand.\n1. Section 2.2 can be cut (title), and the paragraph can be integrated into Section 3.1. \n2. As a reader, I would really like a short problem formulation that properly introduces the problem the paper addresses. This should properly define the inputs and outputs. 
I would also suggest rearranging the sections/subsections of the paper to something like this:\n* Introduction\n* Related Work\n* Problem Definition\n* Preliminaries\n* Method\nThis would greatly improve the flow of the paper. \n3. While the paper is generally easy to follow, some sentences should be simplified or run through a writing program. I have added some examples below:\n* Line 85: \"To be specific, our method is as simple as inverting the image for reasonable steps before\" What is reasonable? Maybe rephrase?\n* Line 226: \"Intuitively, given a source image and its variation, distinguishing between them becomes increasingly difficult\" This sentence does not make a lot of sense (although I can guess what is meant).\nThere are many more of these convoluted sentences. Cleaning up the writing a bit would go a long way to improve the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024stemob,\ntitle={Stem-{OB}: Generalizable Visual Imitation Learning with Stem-Like Convergent Observation through Diffusion Inversion},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xaYlO03tIk},\nnote={under review}\n}" }, "abstract": { "value": "Visual imitation learning methods demonstrate strong performance, yet they lack generalization when faced with visual input perturbations like variations in lighting and textures. This limitation hampers their practical application in real-world settings. To address this, we propose ***Stem-OB*** that leverages the inversion process of pretrained image diffusion models to suppress low-level visual differences while maintaining high-level scene structures. This image inversion process is akin to transforming the observation into a shared representation, from which other observations also stem. *Stem-OB* offers a simple yet effective plug-and-play solution that stands in contrast to data augmentation approaches. It demonstrates robustness to various unspecified appearance changes without the need for additional training. We provide theoretical insights and empirical results that validate the efficacy of our approach in simulated and real settings. *Stem-OB* shows an exceptionally significant improvement in real-world robotic tasks, where challenging light and appearance changes are present, with an average increase of **22.2%** in success rates compared to the best baseline. Please refer to [this link](https://stem-ob.github.io/) for more videos and details." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Robotics", "Imitation Learning", "Visual Imitation Learning", "Robustness", "Diffusion Model", "Diffusion Inversion" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ae8800e1d43fc51454a8e5c4c43e50a67467bbcb.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/555799937b2fe8bf9844db9b7ed4824619e621db.zip" }, "title": { "value": "Stem-OB: Generalizable Visual Imitation Learning with Stem-Like Convergent Observation through Diffusion Inversion" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xaafWdM5jI
UFGTime: Reforming the Pure Graph Paradigm for Multivariate Time Series Forecasting in the Frequency Domain
main
Active
Multivariate Time Series Forecasting;GNN;Pure Graph Paradigm
learning on time series and dynamical systems
1;3;5;5
4;4;4;4
1;1;2;3
1;1;2;2
2;1;3;2
3.5
4
1.75
1.5
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Could you explain why lines 43-45 present a contradiction? Some papers do consider product graphs (eg https://arxiv.org/abs/2206.15174). Moreover, addressing temporal and spatial (graph) processing in subsequent steps does not seem to reduce expressiveness (see eg https://arxiv.org/abs/2103.07016).\n- Lines 92-93. Claiming that CNN cannot model spatial information is simply wrong; see e.g. the field of Geospatial Deep Learning.\n- Definition 1. Why is this structure called a graph if the only information is in the node features? Also, why is it called \"hypervariate\" if it has $D$ features, as per the original time series?\n- Sections 3.1 and 3.2. These sections describe well-known facts: a fully-connected graph is regular and each permutation leads to an automorphism. Is there an additional insight in these sections that I may have missed?\n- Definition 2. Similarly to Definition 1, why is this called \"hyperspectral\"? What makes it “hyper”?\n- Section 4.1. As a reference, could you indicate a few state-of-the-art approaches within the \"pure graph paradigm\"? Currently, I only see only FourierGNN being mentioned.\n- Line 212. The elements can only be rearranged as long as they are labeled or indexed by the corresponding frequency-node pair. This means they can be stored in memory in any order, but their indexing cannot be ignored. Therefore, the statement in line 215 does not logically follow from line 212. Is there an argument to support the conclusion in lines 215-217?\n- What is novel in the proposed use of framelets compared to existing literature?\n- Could you report standard deviations alongside the results presented?\n- Figure 6. What does a contour plot represent for discrete variables? Additionally, how can the optimal $k$ be a non-integer value?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The paper addresses limitations of a previously published method, FourierGNN." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a time-series forecasting model that first computes the Fourier transform for individual components of the time series. It then constructs a KNN graph using triples as nodes -- each of which consisting of a time-series index, feature dimension, and frequency. Finally, the model employs framelets to make predictions." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The presentation is confusing, often blending obvious claims with unsupported statements and using ambiguous terminology.\n- The paper tackles challenges that seem obvious and already addressed in the literature: (1) the importance of maintaining temporal order in time-series processing, and (2) the unsuitability of fully connected graphs in graph-based processing.\n- As far as I could understand, the paper’s original contribution is limited to constructing a KNN graph from a spectral representation of the original multivariate time series." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Based on the above weaknesses, the following aspects of this work could benefit from additional explanations/extensions:\n- **[Q1] Discrepancies in MSE/MAE performances for different datasets/horizons:** Based on **[W2]**, could you explain why the model for specific datasets performs significantly worse in terms of MSE in comparison to baselines but has the best MAE values. \n- **[Q2] Visualizations for predictions/learned graph dependencies:** To address discrepancies in scores, some visualizations of predictions (true and predicted time series) corresponding to such cases (low MAE, large MSE), but also, in general, are a crucial qualitative way to access performance variations between models/datasets. Based on **[W5]**, could you provide some visualizations on the learned dependencies provided by the graph module of UFGTIME for some real-world datasets?\n- **[Q3] Proper comparison of different graph learning modules:** The paper could improve its discussion of related works by differentiating between pure graph paradigms and methods that produce sparse graphs in the context of GNN architectures.\n- **[Q4] Complexity analysis of GNN-based graph learning baselines:** The paper and results could benefit from a discussion on the complexity of different GNN-based baselines along different graph learning modules beyond the already mentioned FourierGNN.\n- **[Q5] Impact of window size and input variables on memory complexity:** including analysis/ablations that address these factors could provide a better understanding of the scalability and resource requirements of UFGTIME for real-world time series datasets.\n- **[Q6] Misc:** No standard deviations/averaging across multiple runs for the provided performance results are mentioned. Is it possible to demonstrate that the small enhancements observed across multiple cases are statistically significant compared to the best competitor?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Strong points of the presented work are the following:\n- **[S1]:** The authors showcase the importance of spectral graphs for time series forecasting while highlighting the importance of sparsity in the structure.\n- **[S2]:** The proposed framework remains simple in its design.\n- **[S3]:** Experimental results show the very competitive performance of the proposed method in terms of MSE with popular baselines.\n- **[S4]:** Ablation studies highlight the importance of performance when learning a (sparse) graph structure on underlying dependencies in forecasting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present a **GNN-based model tailored to time series forecasting** without an essentially existent aprior graph structure. They focus on overcoming the limitations of a recent state-of-the-art method that considers hypervariate (fully connected) graphs by highlighting the importance of capturing the most *crucial inter- and intra-series correlations* by incorporating some sparsity. They thus propose a method that extracts a KNN-based graph structure built upon representation extracted by the Fourier transform. Their introduced (hyperspectral) graph structure is then processed by the so-called *global Fourier framelet message-passing operator* to capture global patterns from the time series examples. After the **spectral graph embedding module**, the inverse Fourier transform returns the representation in the original time dimension, followed by a two-layer feed-forward network to predict the output vector (on the future steps) combined with the *trend embedding* of the original signal. The authors evaluate the performance of their proposed method against popular graph-based and non-graph-based models in forecasting and presenting properties of the critical components of their model through ablation studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **[W1]:** Performance improvements achieved by the proposed method are mostly minor compared with FourierGNN for the datasets considered in short-term forecasting.\n- **[W2]:** The proposed method is outperformed by baselines on datasets used for long-term forecasting. Interestingly, in several cases, there is a discrepancy in the performances achieved in terms of MSE and MAE (e.g., MAE scores of the proposed UFGTIME on ETTh2 are the best against baselines, but corresponding MSEs are almost double in value compared to the Autoformer).\n- **[W4]:** The authors focus on the challenges of the pure graph paradigm, yet most graph-based methods for time series before this work [1] were similarly using KNN [3] or differentiable methods (Gumbel softmax) [2] to achieve graph sparsity. Additionally, embeddings were learned based on time series inter-variable correlations or temporal dynamics [2] or as node embeddings [3]. The paper misses an extensive discussion, e.g., in the related work section, on the critical characteristics of the graph modules introduced in the literature to adequately position the proposed UFGTIME's contribution.\n- **[W5]:** The learned graph dependencies are not evaluated qualitatively. 
Similar to the studies/visualizations for the graph learning module in [3], it would be interesting to assess the validity of learned time series dependencies produced by the proposed graph module.\n- **[W6]:** The proposed method is quite efficient regarding memory/time cost. However, it would be interesting for the discussion to be enhanced with explanations of the complexities of all considered GNN-based baselines. It is unclear to which extent the memory complexity/efficiency of the proposed method is affected by the window size and number of variables in the input, which could provide additional ablation studies.\n\n[1] Yi, K., Zhang, Q., Fan, W., He, H., Hu, L., Wang, P., ... & Niu, Z. (2024). FourierGNN: Rethinking multivariate time series forecasting from a pure graph perspective. Advances in Neural Information Processing Systems, 36.\n\n[2] Shang, C., Chen, J., & Bi, J. (2021). Discrete graph structure learning for forecasting multiple time series. arXiv preprint arXiv:2101.06861.\n\n[3] Wu, Z., Pan, S., Long, G., Jiang, J., Chang, X., & Zhang, C. (2020, August). Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 753-763)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "+ For the results in Sec. 3.2, what temporal characteristics were used in constructing the Laplacian matrices? Additionally, what are the attention weights applied in this context?\n+ I am curious about the necessity of this method—what are its key advantages compared to competitive time series models like PatchTST, TimeMixer, and a recent study https://arxiv.org/pdf/2403.14587?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ The work presents a feasible way of addressing the limitations of hypervariate graphs by transforming time series into the frequency domain and constructing hyperspectral graphs. The use of frequency-based graph representation is an interesting twist on how spatio-temporal dependencies are handled.\n+ This paper introduces a well-defined framework with innovative techniques, such as the graph framelet message passing operator. The theoretical analysis and complexity evaluations provide a thorough understanding of the approach.\n+ The technical descriptions of the novel hyperspectral graph and framelet message passing are articulated well.\n+ The work addresses practical limitations in a recent GNN-based time series forecasting method, proposing improvements that could lead to better generalization in various time series forecasting domains, including both short- and long-term applications." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the hyperspectral graph structure to enhance the ability of GNNs in capturing temporal dependencies for multivariate time series forecasting. The authors proposes UFGTIME, which transforms time series data into a frequency domain representation, preserves sequential information, and constructs a sparse graph with signal similarity using KNN. This graph structure is combined with a framelet message passing mechanism, aiming to improve the ability to capture both global and local temporal patterns without over-smoothing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "+ The core motivation of this work feels narrow. While improving upon FourierGNN is a valuable contribution, the paper does not clearly establish why the disjoint modeling architecture in existing GNN-based methods fails critically in practice, or why FourierGNN's approach is inherently superior. A stronger emphasis on the overarching challenges of multivariate time series forecasting would better justify the significance of this work.\n+ The storytelling could be improved, particularly in the introduction. The two key challenges in FourierGNN are not clearly articulated, and after reading the introduction, it is unclear if this research addresses a significant problem.\n+ The empirical validation in Sec. 3 is somewhat weak in certain aspects. For instance, the experimental setup for the evaluation in Tab. 1 is unclear, and the performance degradation after permutation looks significant without compared to benchmarking methods, leaving doubt about the strength of the claims. Additionally, the visualization results in Fig. 2 show minimal differences between sparse and fully connected Laplacian patterns without the attention weights. \n+ The construction of the hyperspectral graph closely mirrors that of the hypervariate graph, with the key distinction being that it bypasses direct modeling of sequential information by transforming time series into discrete frequency components.\n+ There is a lack of comparative discussion against competitive time series models, particularly recent approaches like PatchTST and TimeMixer. The advantages of UFGTIME over these models are not sufficiently explored, which reduces the clarity around the unique contributions of this research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "--" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* Developing effective deep learning architectures for time series forecasting is an important research direction, and graph-based approaches are appealing in many settings." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a graph representation for groups of multivariate time series and a time series forecasting model based on this representation. The main motivation behind the method is to address limitations in a specific previous work (FourierGNN), which represents the input time series as a fully connected graph where each node is an observation in time and space. This representation clearly has the drawback of discarding the temporal ordering of the observations. The introduced method attempts this issue, but such a limitation is specific to FourierGNN and does not apply to a broader body of literature operating in similar settings. There are also some issues with the soundness of the method and of the empirical evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There are major weaknesses that prevent me from recommending acceptance.\n\n* **Soundness of the approach.** The proposed approach builds a graph representation by considering the K-nearest neighbors of a representation of the input obtained through an FFT along the temporal dimension. However, in such a representation, each node corresponds to a different frequency and entity. I don't understand why one would want to connect nodes that correspond to different frequencies. For example, with such a representation, signals that have all of their spectrum concentrated at totally different frequencies would get connected. This representation does not preserve the structure of the data, similarly to the method the paper is trying to fix.\n* **Poor novelty and weak motivation.** As noted, the limitation the paper aims to address is specific to a single model. Addressing such a narrow issue isn't sufficient when most state-of-the-art approaches do not have this drawback. Many spatiotemporal graph neural networks utilize sound graph representations of the input, and in many cases, the processing of spatial and temporal dimensions is well integrated (e.g., see [1]). Additionally, several signal processing methods operate on pure graph representations of sets of time series. For instance, [2] uses product graph representations. The visibility graph is another principled graph representation of input time series (e.g., see [3]). All of these representations avoid the issues the proposed method attempts to solve and are more sound than the one presented in the paper. Lastly, the novelty of the graph framelet operator is limited, as it appears to be a straightforward adaptation of an existing method.\n* **Soundness of the empirical evaluation.** The empirical evaluation has some issues. Firstly, there is a growing consensus that the ETT benchmarks used for the experiments in Tab 3 are not significant for evaluating deep learning methods for time series forecasting. For example, in [4], a simple autoregressive linear model trained using ordinary least squares achieves better results than those reported in the table in almost all the considered scenarios. Secondly, some of the results in Table 2 involve baselines that require a predefined input graph, which, according to the appendix, is a kNN graph obtained in an unclear way (e.g., which similarity metric was used to create this graph?).\n\nGiven these issues, the paper is far below the acceptance threshold by ICLR standards.\n\nMinor comments\n\n* There must be some reporting errors in Table 3. 
Looking at the MSE for UFG in ETTH1, the MSE at 192 steps is lower than the MSE at 96, which does not make any sense if the testing is done properly. Differently, the MAE increases as one would expect.\n\n[1] Wu et al., \"Traversenet: Unifying space and time in message passing for traffic forecasting\" TNNLS 2022\\\n[2] Sabbaqi et al., \"Graph-time convolutional neural networks: Architecture and theoretical analysis.\" PAMI 2023.\\\n[3] Lacasa et al., \"From time series to complex networks: The visibility graph\" PNAS 2008\\\n[4] Toner et al., \"An Analysis of Linear Time Series Forecasting Models.\" ICML 2024\\" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Pure Graph Paradigm for Multivariate Time Series Forecasting in the Frequency Domain" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024ufgtime,\ntitle={{UFGT}ime: Reforming the Pure Graph Paradigm for Multivariate Time Series Forecasting in the Frequency Domain},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xaafWdM5jI},\nnote={under review}\n}" }, "abstract": { "value": "Recent advances in multivariate time series forecasting have seen a shift toward a pure graph paradigm, which transforms time series into hypervariate graphs and employs graph neural networks (GNNs) to holistically capture intertwined spatiotemporal dependencies. While promising, this approach faces notable challenges. First, converting time series into hypervariate graphs often neglects essential temporal sequences, which are vital for accurately capturing temporal dependencies. Second, treating the graph as a complete structure can obscure the varying importance of intra- and inter-series connections, potentially overlooking key local patterns. To address these challenges, we introduce a novel hyperspectral graph data structure that embeds sequential order into frequency signals and employs a sparse yet meaningful topological structure. In addition, we propose the \\textsc{Ufgtime} framework, featuring a frequency-based global graph framelet message-passing operator tailored to hyperspectral graphs, effectively mitigating the smoothing issue and capturing global insights through sparse connections. Extensive experiments demonstrate that our framework significantly surpasses state-of-the-art methods, excelling in both short- and long-range time series forecasting while achieving superior efficiency. Our code is available at:~\\url{https://anonymous.4open.science/r/UFGTIME-E352}." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multivariate Time Series Forecasting", "GNN", "Pure Graph Paradigm" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bb48ed381c4c01129a121a561a5def019874cb6f.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "UFGTime: Reforming the Pure Graph Paradigm for Multivariate Time Series Forecasting in the Frequency Domain" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xajif1l65R
Rethinking Dataset Quantization: Efficient Core Set Selection via Semantically-Aware Data Augmentation
main
Active
Coreset Selection;Dataset Quantization;Data Augmentation;Efficient Deep Learning;Semantically-Aware Augmentation
applications to computer vision, audio, language, and other modalities
3;5;5;5
5;4;4;4
2;3;2;2
3;2;2;2
2;3;2;3
4.5
4.25
2.25
2.25
2.5
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Computational Efficiency: By removing the reliance on large pre-trained models, DQ V2 lowers computational costs.\n\n2. Good insight for data augmentation: The pre-trained MAE model is equivalent to a data augmentation method (in introducing prior knowledge and implicit regularization into the training process)\n\n3. The writing is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the high computational cost of Dataset Quantization (DQ) due to its reliance on large pre-trained models like MAE and ResNet. They propose DQ V2, which removes pre-trained models by using a random CNN-based data augmentation that retains semantic structure by masking objects and replacing backgrounds, enhancing diversity without costly models. The goal of data augmentation (synthesizing) in their pipeline is to enhance data diversity and representation without relying on costly pre-trained models. \n\nEvaluation: Evaluated on ImageNette, CUB-200-2011, Food-101, and ImageNet-30, DQ v2’s performance is compared with DQ’s. DQ v2 achieves comparable or better performance than the original DQ method, showing an average improvement of about 1.57%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of Quantitative Analysis on Computational Gains: While the paper claims computational benefits from replacing the MAE model with a CNN-based data augmentation strategy, it lacks specific measurements or comparisons to substantiate these gains. A quantitative analysis—such as GPU hours, memory usage, or training time—would provide stronger evidence of the efficiency improvements in DQ V2.\n\n2. Missing Baselines: I noticed that some recent coreset selection baselines for deep learning are missing: D2 Pruning[1], CCS[2], Moderate[3]. Those baselines seem to have a stronger performance than the proposed methods.\n\n3. Missing evaluation on ImageNet-1k: the paper argues that DQ-V2 is more efficient than DQ, but the method is only evaluated on the ImageNet subset. Previous methods including DQ all conducted evaluation on ImageNet-1k. It will be good to include an ImageNet-1k evaluation to demonstrate the scalability of the proposed methods.\n\n4. The data augmentation part is confusing: the goal of data quantization and coreset selection is to reduce the size of the training dataset, but the data augmentation method proposed in the paper expands the datasets -- the final expanded training dataset can be even larger, which is contradicted to the goal of coreset selection.\n\n5. 
Ablation study on data augmentation: The paper would benefit from a more detailed ablation study to assess the effectiveness of the data augmentation method used in DQ V2. Testing different data augmentation configurations (e.g., no augmentation, alternate augmentation techniques) would clarify its impact and help refine the methodology.\n\n[1] Maharana, Adyasha, Prateek Yadav, and Mohit Bansal. \"D2 pruning: Message passing for balancing diversity and difficulty in data pruning.\" ICLR 2024\n\n[2] Zheng, Haizhong, Rui Liu, Fan Lai, and Atul Prakash. \"Coverage-centric coreset selection for high pruning rates.\" ICLR 2023\n\n[3] Xia, Xiaobo, Jiale Liu, Jun Yu, Xu Shen, Bo Han, and Tongliang Liu. \"Moderate coreset: A universal method of data selection for real-world data-efficient deep learning.\" ICLR 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The biggest question is what specific negative effects MAE actually introduces, as the authors' experiments and analysis do not clearly convey any significant drawbacks to using MAE." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The method proposed by the authors does indeed achieve comparable or even higher results without using MAE.\n\n2. The authors conducted extensive ablation studies on the parameters of the method itself, including experiments on patch size and data selection methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines the limitations of the DQ method and proposes corresponding improvements. The authors believe that using a pretrained MAE in DQ may cause issues, so they conducted experiments to see the impact on DQ when MAE is removed. The experiments, in a way, demonstrate the importance of MAE. The authors suggest using Tobias data augmentation as a substitute for MAE. According to their results, it is possible to achieve accuracy comparable to or even better than the previous DQ without using MAE." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation of this paper is somewhat unclear. From my understanding, the main value of DQ lies in reducing dataset size and storage requirements. However, as shown in Table 1, this method actually increases the storage usage of DQ. The problem it addresses is the need for a pretrained MAE in the original DQ, yet the authors' experiments do not highlight any obvious issues caused by using MAE. In my view, the authors have optimized a relatively minor aspect while losing sight of one of DQ’s key contributions. It would be beneficial for the authors to further elaborate on the advantages of this method.\n\n2. The logic of the proposed method is unclear. 
The authors first apply Tobias data augmentation, followed by dataset selection—what is the advantage of this sequence? What would the outcome be if Tobias data augmentation were added directly at the end based on DQ?\n\n3. The conclusions regarding line 210 may have some bias, as MAE was pretrained on ImageNet, which likely results in better reconstruction performance on ImageNette. The variables here are not limited to dataset size, so the effectiveness may not necessarily be due to the dataset size alone. It could also be influenced by the effectiveness of MAE itself." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weakness." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Using semantically-aware data augmentation to remove the pre-trained MAE model in DQ is interesting.\n2. The paper is well-organized.\n3. Experimental results show that the proposed DQ_v2 eliminates the drawbacks of DQ's dependence on pre-trained models.\n4. The proposed method achieves performance improvement on multiple datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes DQ_v2, a coreset selection method. To remove the pre-trained MAE in DQ, the authors investigate a data augmentation scheme, which can simulate the steps of pixel compression and reconstruction in DQ. Finally, the authors show the performance on several benchmark datasets, including CUB-200, Food-101, and ImageNet. The idea of using data augmentation to replace pre-trained MAE in DQ is somewhat novel to me. However, some critical concerns remain; please see the weaknesses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In line 278, the authors say that the coreset contains both original and augmented images. However, as far as I know, most existing coreset selection methods only select original images from the datasets, meaning that there are no augmented images in coresets. So is this a fair comparison between DQ_v2 and other coreset selection methods?\n2. The literature review section lacks comprehensiveness. Numerous recent studies closely related to the topic have not been studied, such as [1-5], which may affect the context and clarity of the proposed approach.\n[1] Tan, Haoru, et al. \"Data pruning via moving-one-sample-out.\" Advances in Neural Information Processing Systems 36 (2024).\n[2] Xia, Xiaobo, et al. \"Moderate coreset: A universal method of data selection for real-world data-efficient deep learning.\" The Eleventh International Conference on Learning Representations. 2022.\n[3] Yang, Shuo, et al. 
\"Dataset pruning: Reducing training data by examining generalization influence.\" arXiv preprint arXiv:2205.09329 (2022).\n[4] Maharana, Adyasha, Prateek Yadav, and Mohit Bansal. \"D2 pruning: Message passing for balancing diversity and difficulty in data pruning.\" arXiv preprint arXiv:2310.07931 (2023).\n[5] Yang, Suorong, et al. \"Not All Data Matters: An End-to-End Adaptive Dataset Pruning Framework for Enhancing Model Performance and Efficiency.\" arXiv preprint arXiv:2312.05599 (2023).\n3. In the semantic data augmentation section, the authors enhance diversity by replacing image backgrounds. However, it’s unclear if the potential for semantic ambiguity was considered—for instance, whether the new backgrounds might inadvertently introduce other objects, which could affect the intended semantics.\n4. The authors report only storage costs, but I recommend adding a comparison of training costs as well. This would provide a more comprehensive assessment of the method’s efficiency and practical applicability.\n5. The practical significance of the proposed method is unconvincing due to limited experimental validation. In the experimental section, all benchmark comparisons are with methods published before 2021. The compared baselines are outdated. While authors claim the comparison with state-of-the-art, many existing SOTA methods [1-5] are not compared. This weakens the method’s practical performance and significance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The goal of DQ is to reduce training data volume and improve data efficiency. Since the proposed method uses data augmentation, does it significantly increase the dataset size, potentially resulting in similar training costs as regular training?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The overall writing of the paper is smooth and easy to understand.\n- DQ V2 replaces MAE-based quantization with a simple augmentation strategy, achieving better performance without pre-trained models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Dataset Quantization V2 (DQ V2), an enhanced version of the original Dataset Quantization (DQ) method, focusing on efficient coreset selection without relying on large pre-trained models like MAE. Instead, DQ V2 integrates a new data augmentation strategy called Tobias, which uses randomly initialized CNNs to preserve the semantic regions of images while replacing background areas, mimicking the effect of pixel quantization. Extensive experiments demonstrate that DQ V2 achieves improved performance and training stability across multiple datasets, while also reducing computational complexity. 
The results suggest that DQ V2 provides a practical solution for data compression and coreset selection, paving the way for further enhancements in semantic-aware data augmentation and broader applications in complex visual tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper claims good scalability for the proposed method, but the experiments are still focused on smaller datasets and do not include evaluations on mainstream large-scale datasets like ImageNet-1k.\n- The coreset selection methods chosen for comparison, such as GraNd, Grad-Match, and GC, are from 2021. The paper should include comparisons with more recent coreset selection and dataset quantization methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This paper proposes an efficient core set selection method based on semantically-aware data augmentation." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024rethinking,\ntitle={Rethinking Dataset Quantization: Efficient Core Set Selection via Semantically-Aware Data Augmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xajif1l65R},\nnote={under review}\n}" }, "abstract": { "value": "Dataset quantization (DQ) is an innovative coreset selection method to choose representative subsets from large-scale datasets, such as ImageNet. Although DQ has made significant progress, it heavily relies on large pre-trained models (like MAEs), leading to substantial additional computational overhead. We first identify that removing this pre-trained MAE model degrades DQ’s performance and increases the variance in model training. Where MAE plays a crucial role in introducing prior knowledge and implicit regularization into the training process. Second, we investigate a data augmentation scheme that can simulate the steps of pixel compression and reconstruction in DQ by simply using a randomly initialized ResNet model. This randomly initialized ResNet model can take advantage of the inductive bias of CNNs to locate the semantic object region and then replace the other region with other images. Therefore, we can use a random model or trained model in the early training stage to enhance semantic diversity while selecting important samples. We remove the module that contains the pre-trained MAE model and integrate the data augmentation scheme into the DQ pipeline, which formulates a new simple but efficient method, called DQ v2. Our method achieves performance improvements across multiple datasets, such as ImageNette, CUB-200, and Food-101." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Coreset Selection", "Dataset Quantization", "Data Augmentation", "Efficient Deep Learning", "Semantically-Aware Augmentation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6f309d87fdd9e243320102cf97700445d1aecbbe.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Rethinking Dataset Quantization: Efficient Core Set Selection via Semantically-Aware Data Augmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xak8c9l1nu
Computational Explorations of Total Variation Distance
main
Active
total variation distance;TV distance;mixtures of products;equivalence checking;Ising models;computational complexity;FPRAS
learning theory
5;6;8;8;8
3;4;3;4;3
3;3;4;4;3
2;2;3;3;3
3;3;3;4;3
7
3.4
3.4
2.6
3.2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I think the logic in lines 181–184 is incorrect, though the result is ultimately correct. The logic appears to use the implication $ A \\implies B $ to conclude $ \\neg A \\implies \\neg B $, which is not valid. The result holds, however, because $ P = \\sum_{i=1}^k w_i P_i^{\\leq n} $ and $ Q = \\sum_{i=1}^k v_i Q_i^{\\leq n} $, so if $ P \\neq Q $, then $ \\sum_{i=1}^k w_i P_i^{\\leq j} \\neq \\sum_{i=1}^k v_i Q_i^{\\leq j} $ for at least $ j = n $.\n\nFor line 451, it should be $ \\delta x_0 x_1 \\to \\delta x_0 x_k $. Additionally, some of the calculations in lines 441–458 are straightforward, so a few lines could be removed to streamline the presentation.\n\nIn the proof of Proposition 7, when setting parameters, it would be helpful to specify how small the parameter $ \\eta_0 $ ( and $h_0,\\delta$) needs to be to ensure that the relative error of the marginal probability remains smaller than $ O(\\epsilon / n) $. Could you clarify why machine precision is considered here? Would $ \\mathrm{poly}(N / \\epsilon) $ bits suffice to represent $ \\eta_0 $, where $ N $ denotes the input size?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The main contribution of this paper is the first result: deciding whether two mixtures of product distributions are the same. Suppose we have $k$ product distributions $P_1, P_2, \\ldots, P_k$, where each $P_i$ is an $n$-dimensional product distribution, i.e., $X \\sim P_i$ is a vector $(X_1, X_2, \\ldots, X_n)$. The paper takes the prefix of $X$, namely $X^{\\leq j} = (X_1, X_2, \\ldots, X_j)$ for $j \\leq n$. This distribution is denoted by $P^{\\leq j}_i$. Then, they consider the mixture of $P^{\\leq j}_1, P^{\\leq j}_2, \\ldots, P^{\\leq j}_k$, denoted by $P^{\\leq j}$. The algorithm decides whether $P^{\\leq j}$ and $Q^{\\leq j}$ are the same for all $j$. The algorithm is based on induction from $j = 1$ to $j = n$. The base case is trivial. The difficult part is that for $P^{\\leq j}$ and $Q^{\\leq j}$, the support of the distribution can be as large as $\\exp(\\Omega(j))$. To reduce the computational cost, the algorithm finds a \"sketch\" of the two distributions. One needs to check whether $P^{\\leq j}(x) = Q^{\\leq j}(x)$ for exponentially many $x \\in \\Sigma^{j}$. For each $x$, the algorithm views $P^{\\leq j}(x) = Q^{\\leq j}(x)$ as a linear equation. Instead of checking an exponential number of linear equations, the algorithm finds a basis of the linear system, and the size of the basis is $\\text{poly}(n)$. 
Then the algorithm only needs to check the equations in the basis.\n\nOverall, the algorithm and the definition of $P^{\leq j},Q^{\leq j}$ are simple and clever, and I think deciding whether two mixtures of product distributions are the same is a basic problem in statistics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of computing the total variation distance (TV-distance) between high-dimensional distributions from a computational standpoint. For distributions with compact descriptions, calculating TV-distance poses significant challenges, as the direct approach incurs exponential time complexity relative to input size. The authors present two main contributions:\n\n(1) They provide a polynomial-time algorithm for determining whether two mixtures of product distributions are identical. \n\n(2) They establish the computational hardness of approximating the TV-distance between two arbitrary Ising models.\n\nThe first result is based on a simple and clever algorithm; the second result comes from the standard hardness result of approximating partition functions of Ising models. \n\nDue to the limited time for reviewing, I went through all the proofs and understood the main ideas; however, I did not verify every detailed calculation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The hardness result follows from standard results. Proposition 6 provides a self-reduction for the Ising model, which is used in the standard counting-to-sampling reduction. Therefore, the proof of Proposition 6 could be omitted. Proposition 8 essentially states that one can fix the value of a vertex $v$ by adjusting the function $h(v)$, allowing the TV distance to encode the marginal distribution.\n\nThe relationship between Theorem 1 and Theorem 2 is not very strong, as they pertain to different models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can the equivalence checking algorithm be extended to other types of mixtures, say Bayesian networks with bounded treewidth?\n\nIs there any hope to soften the hardness assumption in estimating TV distance between Ising models by considering restricted classes of models?\n\nDo the authors have some recommendations for practical algorithms that could approximate the TV distance between Ising models despite the hardness result?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper addresses important computational questions about the total variation distance, which is fundamental in probability and statistics.\n\nThe algorithm for equivalence checking of mixtures of product distributions is new and provides a practical solution to a non-trivial problem.This hardness result bridges complexity theory and statistical measures and provides insight into why certain computational tasks are hard.\n\nThe proofs are well written, the results are accessible." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper discusses computational aspects of the total variation (TV) distance between probability distributions and has two contributions:\n\nIt provides a deterministic polynomial-time algorithm for testing whether two mixtures of product distributions are equivalent, which means the algorithm can decide whether the TV distance between them is zero.\n\nIt demonstrates that, unless NP is contained in RP, there cannot exist an efficient algorithm to estimate the TV distance between arbitrary Ising models. This result points out the computational hardness of this problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It would be nicer if the paper could elaborate more on the practical applications of the equivalence checking algorithm with regard to performance on real-world data.\n\nThe hardness result could also be pushed further by thinking about the possibility of approximate algorithms with different complexity assumptions.\n\nIt would be even more applicable and helpful with more examples or case studies." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Was a poly-time randomized algorithm for checking equivalence of mixtures of product distributions known previous to this work?\n- To the best of your knowledge, what was known from previous work about approximating TV distance between Ising models?\n- Do your hardness result tell us anything about hardness of computing an additive approximation for the TV distance between Ising models?\n- Does the problem of estimating the TV distance between Ising models remain to be hard if one is given the values of the partition function $Z_1$ and $Z_2$ for each of the Ising models?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Mixtures of products are a fundamental family of probability distributions and checking their equivalence is one of the most basic questions about them. 
\n- The algorithm uses an interesting novel idea of keeping track of bases for the solution spaces of certain equations. This idea might find applications for testing equivalence of other classes of distributions.\n- The problem of estimating the total variation distance between two Ising model distributions is quite natural, as Ising models are a very well-studied class of probabilistic models.\n- The hardness result only relies on the assumption that NP is not in RP, which is a very mild complexity assumption." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper has two main contributions: (1) An algorithm for checking equivalence for mixtures of product distributions (2) A hardness reduction for approximating the total variation distance between two Ising models.\n\nThe problem of checking equivalence of product distributions is takes the following as input: we are given $w_1,\\cdots, w_k, P_1,\\cdots,P_k$ and $v_1,\\cdots, v_k, Q_1,\\cdots,Q_k$ where:\n1) Each of $P_1,\\cdots,P_k$ and $Q_1,\\cdots,Q_k$ is a product distribution over $\\Sigma^n$.\n2) $w_1,\\cdots, w_k$ are the weights for $P_1,\\cdots,P_k$ and $v_1,\\cdots, v_k$ are the weights for $Q_1,\\cdots,Q_k$.\nSo, the first distribution $P$ is the mixture of $P_1,\\cdots,P_k$ each weighted by $w_1,\\cdots, w_k$ and the second distribution $Q$ is the mixture of $Q_1,\\cdots,Q_k$ each weighted by $v_1,\\cdots, v_k$. The task is to determine wether the two mixture distributions are the same.\n\nHere is an example (from page 2) an equal mixture of $Bern(1/3)$ and $Bern(2/3)$ equals to the distribution $Bern(1/2)$. This illustrates how a mixture of two product distributions can equal a different product distribution.\n\nThe paper gives a deterministic algorithm whose run-time is $O(nk^4 |\\Sigma|^4)$ which is polynomial in the input size which equals $kn|\\Sigma|$. One idea the algorithm uses is that if the two mixtures $w_1P_1+,\\cdots, +P_kw_k$ and $v_1Q_1+,\\cdots, +v_k Q_k$ are equal, then (after a normalization) the mixtures $w_1P_1+,\\cdots, +P_jw_j$ and $v_1Q_1+,\\cdots, +v_j Q_j$ are also equal for every $j$. The algorithm utilizes the idea by iteratively establishing equivalence for $j=1,2..., k$ at every step $j$ establishing equivalence for $j+1$ assuming equivalence for $j$ (or to find some $x$ for which the probabilities of two distributions differ). As the paper shows, such steps can be achieved by keeping track of bases for solutions of certain types of equations.\n\nIsing models are a fundamental family of probability distributions over $\\{\\pm 1\\}^n$. Probability of each $x$ in $\\{\\pm 1\\}^n$ is proportional to $\\exp(P(x))$ where $P$ is a degree-2 multilinear polynomial. Ising models are a very well-studied family of distributions modeling systems for which random features have only pairwise interactions (this is because each term in $P$ has at most two variables). \n\nA Fully Polynomial-time Randomized Approximation scheme FPRAS is a randomized algorithm that gives a multiplicative $(1+\\epsilon)$-approximation to a desired quantity. Here, this means that one is given a pair of degree-2 polynomials $P_1$ and $P_2$ describing a pair of Ising models, and the goal of the algorithm is to output a multiplicative $(1+\\epsilon)$-approximation to the TV distance between the pair of Ising models. 
The paper shows that no poly-time algorithm can achieve this task (under a basic complexity-theoretic assumption).\n\nThe hardness proof proceeds by developing an approximation-preserving reduction to the problem of approximating the partition function of an Ising model, which is known from previous work to be hard to approximate. As an intermediate step, the reduction goes through the problem of approximating the marginal of an Ising-model distribution on one of the coordinates $x_i$ (this is referred to as the problem of approximating the atomic marginal)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The algorithm for mixtures of product distributions can only check whether P=Q exactly. The paper would be stronger if it gave an algorithm for approximating the distance between P and Q.\n- The paper rules out FPRAS for TV distance between a pair of Ising models, but it seems that there could still be a constant-factor approximation algorithm, and the paper would be stronger if this question was also addressed (i.e. it was shown that this is also hard, or an algorithm was given)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The paper mentions some open questions but is there any chance that the techniques used, e.g., in the first theorem can be extended to other types of more general distributions? (I personally doubt it)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper gives two new results about computing TVD. \nThe polynomial time algorithm for TVD of mixtures of product distributions has its main strength in the simplicity and clarity of the approach.\nThe second result extends the studies on the complexity of dealing with Ising models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies some properties about the complexity of computing the Total Variation Distance between distributions. The authors consider the case of the mixture of product distributions and the case of Ising models. In the first case they show a polytime algo that has access to the marginals and checks the equivalence of two mixtures. For the second case they show hardness of approximating TVD by an FPRAS under the hypothesis that NP is not included in RP.\nThe two results are sound and well written. \nHowever, the contribution is limited. \nI think neither of the two results is very significant alone, and together the overall contribution is not very much more." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper appears to be a gluing of two minor results with little connection between them. 
\nThe second result builds upon the previous analogous study by Jerrum and Sinclair. The first result bears more novelty, although it would not be, in my opinion, sufficient for a paper at ICLR.\nI think the main issue is with the specificity (very particular cases) of the two problems solved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see above" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "see above" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies two fundamental questions about high-dimensional probability distributions.\n\n1. Is there a polynomial-time algorithm which can decide whether two *mixtures of product distributions on the hypercube* are equal to each other?\n2. Is there a polynomial-time algorithm which can accurately approximate the total variation distance between two Ising model distributions?\n\nThe paper answers (1) affirmatively, giving a very nice linear algebraic algorithm which can test equivalence between mixtures of product distributions on the cube, given access to the parameters of the distributions.\nThe paper answers (2) negatively, showing that unless NP is contained in RP (that is, unless NP-complete problems have randomized polynomial time algorithms), the total variation distance between Ising models is inapproximable.\n\nStrengths:\nThe paper studies truly fundamental problems which will be of broad appeal to the ICLR audience.\nThe results are convincing, and the equivalence-checking algorithm is very nice.\n\nWeaknesses:\nThese problems are so fundamental that it is a little hard to believe they are not already well studied.\nIt would be nice if the related work section were expanded.\nIt would be nice if the algorithm also meant you can test equivalence given samples.\n\nOverall, I recommend accepting the paper to ICLR, on the basis that the results are about a very fundamental set of problems, and the equivalence testing algorithm is very clean.\n\n\nQuestions:\n- Does result (1) mean you can also do equivalence testing from samples?\n- Is the linear algebraic method you describe similar to any existing equivalence tester in the literature?" 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "see above" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024computational,\ntitle={Computational Explorations of Total Variation Distance},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xak8c9l1nu},\nnote={under review}\n}" }, "abstract": { "value": "We investigate some previously unexplored (or under-explored) computational aspects of total variation (TV) distance.\nFirst, we give a simple deterministic polynomial-time algorithm for checking equivalence between mixtures of product distributions, over arbitrary alphabets.\nThis corresponds to a special case, whereby the TV distance between the two distributions is zero.\nSecond, we prove that unless $\\mathsf{NP} \\subseteq \\mathsf{RP}$ it is impossible to efficiently estimate the TV distance between arbitrary Ising models, even in a bounded-error randomized setting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "total variation distance", "TV distance", "mixtures of products", "equivalence checking", "Ising models", "computational complexity", "FPRAS" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/19c037430de9aa16c97d445fac7859a894c9b5ad.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Computational Explorations of Total Variation Distance" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xam3sR3ffY
Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges
main
Active
LLMs;NLP;LLM Evaluation;LLM-as-a-Judge;Benchmarks
generative models
3;3;3;5;8
4;3;4;4;4
1;2;2;3;4
1;2;2;3;3
2;2;3;3;3
4.4
3.8
2.4
2.2
2.6
0.357217
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The description of the platform and recruitment process for human annotations is unclear. Who are the annotators (e.g., crowdworkers), and what are the recruitment criteria?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The provided analysis regarding the LLM judges' sensitivity to prompts, error types, a lack of robustness, and the leniency bias are interesting and valuable to future studies.\n\n2. The paper is well-written and the findings are clearly presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work provides an examination of LLM judges regarding their performance and vulnerabilities in a reference-based evaluation setting for QA tasks. Using human annotation as the gold standard, a series of judge models are evaluated. For evaluation metrics, the manuscript proposes using Scott’s $\\pi$ instead of accuracy, highlighting it as a main finding. It also shows that while less capable judges perform poorly at the instance level, i.e., giving the same decision as the human annotators, they achieve higher correlation with humans at the system level, i.e., producing a ranking of evaluated models by aggregating the instance-level decisions. Further analyses are conducted on changes in recall and precision scores, sensitivity to prompts, robustness, and leniency bias." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It appears that some of the main findings in this work are either not well-supported, may lack generalizability, or have been discussed in previous work.\n\n1. Lack of generalizability: The task setting of the LLM judges selected in this work is reference-based evaluation of QA, which differs from the common application scenario where LLM judges evaluate various tasks without a gold reference (e.g., AlpacaEval, Arena Hard). Access to gold references makes the evaluation task significantly easier. Therefore, the findings in this work may not generalize well to more open-ended, general evaluation settings. While it is stated that this task setting was chosen to reduce human disagreement, there exists a related dataset, LLMBar [1], which can be used to perform a more general evaluation of LLM judges, achieving over 90% human agreement (evaluation accuracy).\n\n2. The finding that automatic evaluation metrics (specifically LLM judges) have a higher correlation with human evaluations at the system level than at the instance level has already been identified and well-discussed in related work on evaluating automatic evaluation of natural language generation tasks [2][3][4][5]. 
For example, [5] shows that automatic metrics can achieve higher system-level correlation with humans when they evaluate more instances. Therefore, this finding itself is not a novel contribution.\n\n3. The manuscript proposes using Scott’s $\\pi$ instead of accuracy as the evaluation metric for LLM judges, claiming that it \"appears to be better able to discriminate among various judge models.\" However, this claim is not well-supported, as the only evidence provided is that Scott’s $\\pi$ yields scores with a wider numerical range than accuracy, which could potentially be achieved by trivially rescaling the range. Further examination is needed to verify this claim, such as by demonstrating that Scott’s $\\pi$ offers greater statistical power, with tighter confidence intervals or a lower p-value in significance tests. Additionally, the notion of separability defined in Arena Hard [6] would be useful for comparing evaluation metrics.\n\n4. The finding that the true negative rate (resulting in a lower recall score) falls quickly with less capable judges does not hold when the two lexical-similarity-based metrics, exact match (EM) and Contain, are excluded. In fact, all the small LLM-based judges achieve higher recall scores than precision scores. The observed low true negative rate/recall score of EM and Contain is expected, as these metrics rely on lexical similarity and are likely to mark an answer that is correct but lexically different from the reference answers as incorrect.\n\n\nReferences\n\n[1] Zeng, Zhiyuan, et al. \"Evaluating Large Language Models at Evaluating Instruction Following.\" ICLR 2024.\n\n[2] The Price of Debiasing Automatic Metrics in Natural Language Evaluation (Chaganty et al., ACL 2018)\n\n[3] Re-evaluating Evaluation in Text Summarization (Bhandari et al., EMNLP 2020)\n\n[4] A Statistical Analysis of Summarization Evaluation Metrics Using Resampling Methods (Deutsch et al., TACL 2021)\n\n[5] Re-Examining System-Level Correlations of Automatic Summarization Evaluation Metrics (Deutsch et al., NAACL 2022)\n\n[6] Li, Tianle, et al. \"From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline.\" arXiv preprint arXiv:2406.11939 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The error analysis in Table 2 shows judges struggle with under-specified answers. Could you provide examples of or qualitatively explain the under-specified answers that fooled even the best judges?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- They focus on a specific scenario with high inter-human agreement, which is an attempt to isolate the judge model behavior from task ambiguity.\n- Several dimensions are explored: 1) model sizes and families, 2) multiple metrics, and 3) error analysis.
\n- Insights such as \"smaller models can rank exam-takers as effectively as larger ones\", and the attempted explanation that \"chat models may 'unlearn' some knowledge during alignment\";\n- The work also provides some recommendations for practitioners using LLMs as judges, e.g., using Scott's $\pi$ along with accuracy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper offers a study of LLMs-as-judges. The authors investigate 13 models (2B to 70B), evaluating 9 different \"exam-taker\" models on the TriviaQA benchmark. They found 1) only the largest models achieve reasonable alignment with humans, though still falling short of inter-human agreement, 2) Scott's π provides better discrimination between judges than percent agreement, and 3) even models with lower alignment scores can effectively rank exam-taker models. Through detailed analysis, the paper uncovers several vulnerabilities in judge models, including sensitivity to prompt complexity and a tendency toward leniency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The scope remains limited to TriviaQA. For \"short, factual\" answers, consider adding the \"LLMBar\" datasets, which have high human agreement rates > 90%. Sufficient examples can be used according to your dataset selection criteria [1]. Without the inclusion of additional datasets [1], it remains unclear how well the ranking ability would transfer.\n- The original claim (lines 316-318) about judge performance being worse at identifying correct answers could be an artifact of including metrics that are overly strict about exact wording matches rather than semantic meaning. The finding does not appear to be surprising or novel.\n- Further analysis would be beneficial: show example outputs from each judge model and identify common errors; in Appendix I, where the authors justify the sample size, adding a power analysis would be ideal.\n\nReferences:\n\n[1] [Evaluating Large Language Models at Evaluating Instruction Following](https://arxiv.org/pdf/2310.07641)\n[2] [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The explanation of why Scott's Pi should be used instead of Kappa in judging-the-judges scenarios is a significant contribution that will benefit future researchers.\n* The comprehensive analysis across multiple dimensions (alignment metrics, ranking correlation, error analysis) provides valuable insights into the strengths and limitations of different judge models.\n* The comparison between LLM judges and lexical judges (EM, Contains) offers a novel and important perspective. This insight becomes increasingly critical as NLP tasks grow more complex, helping inform efficient evaluation strategies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper evaluates the LLM-as-a-judge paradigm by comparing 13 judge models against human evaluations on TriviaQA, focusing on a scenario with high human agreement to isolate judge performance issues. \n\nThe results show that while the largest models achieve reasonable human alignment, they still fall notably short of human-human agreement levels and exhibit various vulnerabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The evaluation relies solely on TriviaQA, making it difficult to deconfound the root cause: whether the best model's performance stems from better alignment, knowledge of TriviaQA content, or simply being favored by other LLMs. Other unusual findings may also be specific to TriviaQA: in Figure 1.a, EM's instability compared to Contains likely results from references providing multiple correct answers.\n* The paper lacks sufficient content for an ICLR long paper. I suggest expanding the scope by:\n * Including more evaluation datasets covering other types of tasks, such as objective long answers (e.g., code writing), using LLM judges to rank exam takers, etc.\n * Moving Appendix B (issues with Kappa) to the main paper and adding more experiments and analysis. This lesser-known fact would make the paper more informative and impactful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Did the authors consider other datasets, or non-binary notions of answer quality?\n* Did the authors consider evaluating the alignment across models to understand how they might be ensembled to mimic a better judge?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* The paper is clear and nicely visualizes the relevant findings\n* The authors explore a dozen models as judges\n* The authors use manual annotation to carefully unpack the judge behavior, especially the observation about judge leniency" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors examine models as judges on TriviaQA questions, comparing how often humans agree with their answers. While they find alignment is high, they can use Scott's π to distinguish the quality of the judges. They also find models are frequently lenient judges in a binary setting of “correct” vs “incorrect”." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* As a reader, I had difficulty understanding the overarching goals of the paper. Typically researchers use LLMs as a judge for longer-form, more subjective questions, where answer coherence, style, and correctness are all part of the judgement. But for TriviaQA, the chosen dataset, the questions have clear, short answers with reference documents, meaning Exact Match is already a strong metric. Here, humans are simply reporting the binary value “correct” vs “incorrect” on the model answers, which seems to have little to do with “human alignment” and more to do with which model got the right answer? Could the authors provide more information on how they think these insights may or may not generalize to more complex judging tasks, as well as discuss the limitations of their findings?\n* The choice of TriviaQA is extremely relevant to the reported results. Could the authors justify this choice? And have they considered comparing their results on other types of datasets?\n* “well-aligned models tend to produce more false negatives than false positives.” This does not seem to be supported by Figure 3b, where the most correct model’s errors are mostly false positives? Could the authors please provide details to explain this, in case I am misunderstanding?\n* Could the authors add more details discussing the extent to which their contributions are novel and provide the community with actionable insights? A discussion section would be helpful here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Do you think the results generalize to different types of datasets/tasks?\n\nWhat do you think is an acceptable agreement percentage?\n\nWhat are the takeaways from this study? What are the best practices that can be adopted when using LLM-as-a-judge?"
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "* Thorough and timely study\n* Several interesting experiments" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a study on the performance of LLM-as-a-judge expressed as agreement with human assessment. The LLMs that are judged are run in an exam-taker context, using the TriviaQA dataset. The study shows that only the largest and most recent Llama-3.1 models approach human judgement. The remaining models obtain widely different scores, some as low as 60% agreement. Different prompting styles seem to make a difference, with single-digit performance improvements for the performant models.\n\nI think it would be nice to summarize all the takeaways, along with best practices for LLM-as-a-judge assessment, in a dedicated section." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* I would have liked to see more datasets; in fact, I would suggest reducing the number of exam-taker models (e.g., I find the base models less interesting) and using different datasets" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We conduct a comprehensive study of the LLM-as-a-judge paradigm in a relatively controlled setup and report many interesting findings about its strengths and weaknesses" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024judging,\ntitle={Judging the Judges: Evaluating Alignment and Vulnerabilities in {LLM}s-as-Judges},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xam3sR3ffY},\nnote={under review}\n}" }, "abstract": { "value": "Offering a promising solution to the scalability challenges associated with human evaluation, the LLM-as-a-judge paradigm is rapidly gaining traction as an approach to evaluating large language models (LLMs). However, there are still many open questions about the strengths and weaknesses of this paradigm, and what potential biases it may hold. In this paper, we present a comprehensive study of the performance of various LLMs acting as judges, focusing on a clean scenario in which inter-human agreement is high. Investigating thirteen judge models of different model sizes and families, judging answers of nine different ‘exam-taker models’ – both base and instruction-tuned – we find that only the best (and largest) models achieve reasonable alignment with humans. However, they are still quite far behind inter-human agreement, and their assigned scores may still differ by up to 5 points from human-assigned scores. In terms of their ranking of the nine exam-taker models, however, even smaller models and the lexical metric contains may provide a reasonable signal. Through error analysis and other studies, we identify vulnerabilities in judge models, such as their sensitivity to prompt complexity and length, and a tendency toward leniency. The fact that even the best judges differ from humans in this comparatively simple setup suggests that caution may be wise when using judges in more complex setups. Lastly, our research rediscovers the importance of using alignment metrics beyond simple percent alignment, showing that judges with high percent agreement can still assign vastly different scores."
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLMs", "NLP", "LLM Evaluation", "LLM-as-a-Judge", "Benchmarks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/90088059d25efa77ec389180e2148d22173ae320.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1ee7874cf919a1d4a4559eedaf35d180c35a2bf3.zip" }, "title": { "value": "Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
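Several of the reviews above keep returning to the gap between raw percent agreement and chance-corrected agreement (Scott's π, Cohen's κ) when comparing judge decisions against human labels. A minimal sketch of how these quantities relate for binary correct/incorrect judgments is given below; the label vectors are purely hypothetical toy data, not taken from any of the papers, and the point is only to show why a lenient judge can reach high raw agreement while its chance-corrected agreement stays much lower.

```python
from collections import Counter

def percent_agreement(a, b):
    # Fraction of items on which the two raters give the same label.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def scotts_pi(a, b):
    # Scott's pi: chance agreement is computed from the pooled marginals
    # of both raters, then subtracted from observed agreement.
    p_o = percent_agreement(a, b)
    pooled = Counter(a) + Counter(b)
    n = len(a) + len(b)
    p_e = sum((c / n) ** 2 for c in pooled.values())
    return (p_o - p_e) / (1 - p_e)

def cohens_kappa(a, b):
    # Cohen's kappa: chance agreement uses each rater's own marginals.
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels: 1 = "correct", 0 = "incorrect". The judge is lenient,
# so both label distributions are heavily skewed toward 1.
human = [1] * 85 + [0] * 15
judge = [1] * 90 + [0] * 5 + [1] * 5
print(percent_agreement(human, judge))  # 0.90
print(scotts_pi(human, judge))          # ~0.44
print(cohens_kappa(human, judge))       # ~0.46
```

With 90% raw agreement, the chance-corrected scores sit below 0.5 because the skewed label distribution makes most of that agreement expected by chance; this is the kind of separation the reviewers have in mind when they ask for Scott's π (or κ) to be reported alongside plain accuracy.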
xao3fIJC6M
ChipVQA: Benchmarking Visual Language Models for Chip Design
main
Active
Multimodal LLM; Chip Design and Manufacturing; VQA
datasets and benchmarks
3;3;3
5;5;4
3;2;2
2;2;2
4;2;2
3
4.666667
2.333333
2
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What do the abbreviations ‘MC’ and ‘SA’ mean in Table 1?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Proposing a multi-modality VLM benchmark is meaningful and has fundamental value; it can greatly help the chip design and LLM communities.\n- The benchmark is notably challenging due to the diverse types of visual content, allowing significant room for research and improvement.\n- This paper offers five distinct data categories in its data collection and introduces their details, enhancing the diversity of the benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a benchmark for chip design. The benchmark is framed as VQA tasks, parallel to existing text-based tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- ChipVQA contains only a total of 142 samples, which limits its scalability and utility. Specifically, there is only one sample each for certain categories (flow, equations, neural nets), reducing the benchmark's effectiveness for evaluating specific content types.\n- As a benchmark, this paper lacks a dev/test split, which restricts the flexibility for developers to conduct training and fine-tuning.\n- While the paper describes the benchmark as multi-modal, it only incorporates text and images. Although there are various types of visual samples, such as diagrams and graphs, all are treated as images in the experiments.\n- This paper lacks a discussion of an alternative class of VLMs, such as CLIP [1], which emphasizes visual capabilities over language components.\n- Some typos: a missing space after \"ChipVQA\" in line 175, and a labeling error where \"Figure 1\" should read \"Figure 3.\"\n- The authors did not follow the citation instructions, as citations should be in parentheses when the authors or the publication are not included in the sentence.\n\n[1] Radford, Alec, et al. \"Learning transferable visual models from natural language supervision.\" International conference on machine learning. PMLR, 2021." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I have some concerns about this benchmark.\n1. The number of QAs in this benchmark is only 146, which is too small to represent the five major classes of problems in chip design. The small scale makes it difficult for the evaluation results to reflect the ability of VLMs. The few QAs belonging to specific sub-problems may lead to results that are not robust.\n2. The small benchmark can only be used for fast evaluation and cannot be used to improve the chip design ability of VLMs. Although the authors claim in the conclusion that the benchmark “demonstrates promising potential to enhance LLM/VLM problem-solving capabilities with minimal training overhead”, the small number of QAs in the benchmark makes this work not promising enough.\n3. The experiments cannot sufficiently support the effectiveness of this benchmark. The authors say it is “unlike existing benchmark efforts targeting at most undergraduate level engineering question Yue et al. (2024)”. However, the authors never compare the performance of VLMs on other benchmarks, such as Yue et al. (2024), with their performance on ChipVQA to justify this claim of superiority.\n4. Experiment 4.1 seems to verify a common conclusion, that more knowledge will help VLMs understand and reason. However, the low pass rate in Table 3 actually highlights the hardness of ChipVQA. Experiment 4.2 also seems to verify a common conclusion, that higher image resolution will help improve the answer quality of VLMs.\n5. Quotation marks are used incorrectly in multiple places, e.g., “”Derive” on page 5.\n\nIn conclusion, I think a large-scale, high-quality VLM benchmark for chip design would be more attractive to researchers. The authors could enlarge the limited number of QAs in the benchmark and provide more thorough experiments and insights into applying VLMs to chip design problems." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors proposed a new benchmark, ChipVQA, to evaluate existing VLMs' ability to understand and reason in chip design, which is a specific and important research area.\n2. ChipVQA is considerably challenging even for the most advanced VLM, GPT-4o.\n3. The collected QAs span various chip design areas, from abstract architecture design to realistic semiconductor manufacturing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors proposed a new benchmark, ChipVQA, to evaluate existing VLMs' ability to understand and reason in chip design, which is a specific and important area. ChipVQA is considerably challenging even for the most advanced VLM, GPT-4o. Meanwhile, the collected QAs span various chip design areas, from abstract architecture design to realistic semiconductor manufacturing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The limited number of QAs in this benchmark fails to adequately represent the chip design field and does not provide sufficient potential to support VLM development.\n2. The experiments are insufficient to demonstrate the superiority of this benchmark."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Does the benchmark contain higher-quality questions that are not included in the anonymous repo?\n2. Did the authors try redrawing the hand-drawn manuscripts of analog circuits with software like Visio to test the performance of VLMs?\n3. Did the authors test senior undergraduate students in Integrated Circuit-related majors on this benchmark as a comparison? I think only after doing so can they say that previous benchmarks primarily cover content up to the level of undergraduate engineering courses while their benchmark is an exception." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "ChipVQA is the first work to construct a multi-field, multi-modal benchmark for chip design and to test the performance of mainstream VLMs on it. Results reveal that this benchmark is challenging for the current capabilities of VLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a comprehensive benchmark including 142 VQA questions covering five chip design disciplines: Digital Design, Analog Design, Architecture, Physical Design and Semiconductor Manufacturing. It is the first benchmark suite in the field of multi-modal chip design knowledge. Moreover, some experiments are implemented to test this benchmark, and it is shown that the benchmark is challenging for the capabilities of current VLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the authors emphasize the quality of their benchmark, saying that the questions were developed by “seasoned chip design experts, each with over ten years of industry experience” and that previous benchmarks primarily cover content only up to the level of undergraduate engineering courses, as a reviewer who majored in Integrated Circuit and System during undergraduate studies, I think most of the benchmark questions shown in the anonymous repository are themselves only at the level of undergraduate courses in integrated-circuit-related majors. Moreover, most of the questions are common for students in such majors. The authors only collect them from textbooks, course exams, manuscripts and so on, and some of the question images are rough, hastily written manuscripts, which makes it even more difficult for VLMs to recognize the content.\n\nApart from the benchmark quality, there is not much novelty in this work, because the authors mainly test some mainstream VLMs on the benchmark. Among the findings the authors present, the third, the fourth, and the fifth are obvious.
And the first and the second also don’t have much value, considering that some images paired with questions are just hastily written manuscripts." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We present a benchmark suite for VLM on chip design and manufacturing knowledge" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024chipvqa,\ntitle={Chip{VQA}: Benchmarking Visual Language Models for Chip Design},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xao3fIJC6M},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have exhibited great potential to assist chip design and analysis. Recent research and efforts mainly focus on text-based tasks including general QA, debugging, design tool scripting, and so on. However, chip design and implementation workflows usually require a visual understanding of diagrams, flow charts, graphs, schematics, waveforms, etc., which demands the development of multi-modality foundation models. In this paper, we propose ChipVQA, a benchmark designed to evaluate the capability of visual language models for chip design. ChipVQA includes 142 carefully designed and collected VQA questions covering five chip design disciplines: Digital Design, Analog Design, Architecture, Physical Design and Semiconductor Manufacturing. Unlike existing VQA benchmarks, ChipVQA questions are carefully designed by chip design experts and require in-depth domain knowledge and reasoning to solve. We conduct comprehensive evaluations on both open-source and proprietary multi-modal models that are greatly challenged by the benchmark suite. ChipVQA examples are available at https://anonymous.4open.science/r/chipvqa-2079/." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Multimodal LLM; Chip Design and Manufacturing; VQA" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/00763d656f22547ec6b04f886275a34687498fde.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide."
}, "summary": null, "supplementary_material": null, "title": { "value": "ChipVQA: Benchmarking Visual Language Models for Chip Design" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xawA8X5dHq
Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data
main
Active
large language models;medicine;benchmark;evaluation;clinical knowledge;multiple choice questions
datasets and benchmarks
3;3;5;5
4;5;4;4
2;2;2;3
2;1;2;2
2;2;3;2
4
4.25
2.25
1.75
2.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- In lines 55-58 of the introduction: 'To address these concerns, this study proposes evaluating LLMs using a multiple-choice question test based on entirely fictional medical knowledge. By doing so, we aim to determine whether traditional evaluations are sufficient for assessing the clinical knowledge and reasoning abilities of LLMs for the medical domain, free from the influence of pre-existing data.' This seems to indicate the motivation of the study. Can the authors say more? What were the hypotheses they wanted to study? Did they expect the model to perform poorly, and why? By creating a similar QA study, are they establishing anything different?\n\n- Is it possible to provide more information on the development of the 'Glianorex' textbook? How were the models steered to generate the content?\n\n- There is very limited information on how the models are set up experimentally, i.e., was the input uniform across all 14 models, given the differences in context windows?\n\n- How long were the paragraphs used for generating the questions/options? Is there a chance that not all the questions that were generated were about the fictional gland?\n\n- Can the authors comment more on why the generated questions were difficult for medical professionals? Is it because they were unfamiliar with the workings of the fictional gland? Or because the answers were linguistically hard to disambiguate? Can they provide examples?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The study generates a 'fictional' textbook on a fictional gland and uses the synthetic data to evaluate model performance on QA.\n- I found it fascinating that the doctors found no major flaws in the generated QA.\n- The study evaluates a variety of LLMs (in a zero-shot setting)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript aims to assess the QA performance of LLMs on 'fictional' data. The authors generate a textbook about a nonexistent gland, generate QA pairs from paragraph fragments of this generated textbook, and subsequently evaluate the QA using LLMs in a zero-shot setting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It is known that evaluations for using LLMs in healthcare are insufficient and incomplete. Newer, better ways of benchmarking them are necessary, e.g., using real-life patient data. LLMs, due to their size and scale, are known to memorize their training data. LLMs performing well on QA benchmarks indicates that they encode the knowledge (complete or incomplete). With this case study, the aforementioned evidence is only revalidated, which makes a limited case for novelty.
As mentioned in the future work section, maybe interactive, scenario-based assessments are an interesting direction!\n\n- Line 216: 'By generating entirely fictional content, we ensured that no pre-existing data could influence the models’ - this is hard to be convinced of: even if the gland is fictional, the model has been exposed to data about other organs and biological aspects of human anatomy that play a role in relation to this gland. The LLM may also have been exposed to information/vocabulary on how a gland is supposed to function and on the medical conditions that can occur in other glands. This could also be a possible explanation as to why models show better performance than a random baseline.\n\n- Line 488 in the conclusion: 'This study demonstrates that LLMs can achieve high scores on multiple-choice questions based on entirely fictional medical knowledge, even without prior exposure to the content.' I disagree that this knowledge is completely fictional. Although the gland is fictional, the information about how 'a gland' is supposed to function, along with other information about human anatomy, is not fictional medical knowledge.\n\n- Yes, the models were able to do well on ~80-90 questions. What about the others? Could the authors discuss errors? Where did the models go wrong? Were the errors similar to the ones highlighted in previous work on MedQA? Why were they hard? Did the medical experts find them easier?\n\n- I find the quality of the generated QA may not be completely equivalent/comparable to existing work.\n\nMinor corrections\n- Line 457: missing text\n- Line 485: incomplete sentence" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "LLM Generation Chain:\n- How do you control for the fact that both content and questions are LLM-generated?\n- Have you considered comparing performance between LLM-generated questions and human expert-generated questions about the same fictional content?\n\nRelation to Known Limitations:\n- How does this work advance our understanding beyond existing studies showing LLMs lack reasoning capabilities (GSM8K, Physics of Language Models, etc.)?\n- Could you compare your findings about medical MCQ performance to similar pattern-matching behaviors documented in math, physics, and symbolic reasoning tasks?\n- What makes medical MCQs different from other domains where LLM pattern-matching has been studied?\n\nBenchmark Comparisons:\n- How does model performance on your fictional medical questions compare to performance on real medical benchmarks like USMLE?\n- Could you analyze if models exploit similar patterns in both real and fictional medical questions? E.g., have you considered creating paired real/fictional questions with matched reasoning requirements to isolate the effect of domain knowledge vs. pattern matching?"
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The study evaluates a comprehensive range of models - from proprietary systems like GPT-4 to open-source models like Mistral and domain-specific medical models. The multilingual evaluation in both English and French adds valuable cross-linguistic insight. Testing across model scales (from 7B to 110B parameters) and comparing both base and medically fine-tuned versions provides a thorough performance landscape.\nThe core concept of testing reasoning gaps through fictional medical content is important given the high-stakes nature of healthcare applications. With the increasing deployment of LLMs in medical settings, understanding their limitations in medical reasoning versus pattern matching becomes critical for patient safety.\nThe experimental setup allowed for controlled model performance testing without the confounding variable of pre-existing medical knowledge. The methodological approach of creating a fictional medical domain to isolate reasoning capabilities represents creative thinking about AI evaluation challenges.\nThe statistical analysis is solid, including Cohen's d effect sizes and significance testing, provides some quantitative backing for the findings. \nCode is openly available and use of harness is good practice.\nThe authors are also transparent about the limitations of their approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a study examining the effectiveness of multiple-choice questions (MCQs) in evaluating Large Language Models' (LLMs) medical knowledge and reasoning capabilities. The authors devise an experiment where they use GPT-4 and Claude 3.5 Sonnet to generate textbooks about a fictional gland called \"Glianorex,\" along with corresponding multiple-choice questions in both English and French. They then evaluate various LLMs, including proprietary, open-source, and domain-specific models, on these questions in a zero-shot setting. The models achieve surprisingly high average scores of around 64%, despite having no prior knowledge of this fictional medical content. \n\nThis points to a significant methodological concern: since both the content and questions were generated by LLMs, the high performance likely demonstrates LLMs' ability to recognize and reconstruct patterns in LLM-generated text rather than any genuine medical reasoning capabilities. While the authors interpret their results as evidence that MCQ-based evaluations may not adequately assess medical knowledge, their experimental design inadvertently reveals more about LLM-to-LLM pattern recognition than about the limitations of multiple-choice testing in medical AI evaluation. This highlights a broader issue in AI evaluation methodology, where the tools used to test AI systems may be inherently biased towards the systems' pattern-matching capabilities rather than their actual understanding or reasoning abilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. LLM Chain of Generation:\nThe experimental pipeline (textbook → questions → answers) is LLM-generated, creating a closed loop that primarily tests LLM-to-LLM pattern recognition. 
This begs the question if this tests more than other approaches that have shown models fragility to lexical substitution or paraphrasing methods.\n\n2. Well-Established Lack of Reasoning:\nMultiple studies have already definitively shown LLMs don't perform actual reasoning:\n- GSM8K and other math reasoning benchmarks show LLMs struggle with novel mathematical reasoning\n- \"Physics of Language Models\" work shows LLMs operate through pattern matching rather than causal understanding\nThis study doesn't meaningfully advance beyond these existing findings. \n\n3. Missing Comparisons and related work:\n- No comparison between performance on these fictional medical questions versus real medical benchmarks (like USMLE)\n- No comparison to human-generated alternative versions e.g. drug name substitution benchmarks \n- Related work should include work on lexical substitution https://aclanthology.org/D14-1066.pdf , dataset contamination https://arxiv.org/abs/2404.18824 and robustness e.g. https://arxiv.org/abs/2406.06573, https://arxiv.org/abs/2406.12066\n\n4. Missed Opportunity for Medical Evaluation:\nInstead of showing that LLMs don't reason (which we know), they could have investigated what specific patterns in medical MCQs make them vulnerable to exploitation by LLMs, which would be more useful for improving medical testing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Is there existing work on benchmarks for specific medical domains, such as oncology? Including one in the related work section could enhance context.\n\n2. In the discussion's training paragraph, you mention “the improved performance of finetuned models Internist.ai and Meerkat over their base versions underscores the impact of domain-specific training on enhancing LLM capabilities.” Could you clarify what you mean by \"LLM capabilities\" in this context?\n\n3. Is the generated textbook open-sourced? If so, please provide a link to the anonymous version of the textbook." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The introduction of a fictional gland is a novel way to control for LLMs’ memorized real-world knowledge. The knowledge about the fictional gland is generated in a comprehensive and detailed way. \n\n2. Human experts were involved to ensure the quality of the generated medical knowledge and questions, and to provide a human performance baseline. Expert involvement makes the experiments more accountable and trustworthy. This is important for applications in healthcare and medicine because these are mission-critical scenarios.\n\n3. The study uses statistical analysis to provide solid evidence supporting its claims. \n\n4. The inadequacy of MCQ evaluation method is a long-standing and important issue, especially in clinical scenarios. 
The investigation is well-motivated. In addition, the authors clearly outline the motivation for the study and the implications of their findings, making it easy for readers to grasp the significance of the research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses a key problem: the inadequacy of current multiple-choice question-answering evaluation methods for Large Language Models (LLMs) in medical contexts. It highlights that LLMs' test-taking abilities, i.e., their capability to rely on statistical patterns rather than understanding, impact their performance on medical benchmarks. This phenomenon is particularly relevant in settings where LLMs may select correct answers based on learned statistical associations rather than true comprehension. To demonstrate this, the authors devised a test based on fictional medical knowledge, effectively removing the influence of prior knowledge for the models. The results indicate that LLMs still perform well on questions about fictional medical knowledge. The authors claim that their findings suggest that LLMs have good test-taking abilities that prevent current evaluation methods from effectively assessing the true clinical reasoning capabilities of these models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The claim that \"entirely fictional content\" removes pre-existing data influence is overstated. While the fictional setup minimizes real-world knowledge's impact, it does not fully eliminate it. The LLM-generated content about the fictional gland still adheres to the knowledge framework of medicine. Thus, pre-existing foundational medical knowledge from the LLMs is embedded in the content and questions even though no pre-existing data specific to the fictional gland exists. LLMs’ reasoning ability with prior knowledge is not entirely ruled out. This claim, central to the paper’s message, is thus an overstatement. This is a limitation of the method in isolating LLMs' test-taking abilities.\n\n2. The statistical analysis is appreciated, but the authors do not clearly explain the apparent disparity between the small and negligible performance differences derived statistically and the visible differences shown in Figure 1. This may confuse readers unfamiliar with advanced statistical analysis.\n\n3. A known issue with multiple-choice evaluation is selection bias/position bias for option order and IDs. Studies show LLMs may have positional preferences (e.g., favoring option A), making it necessary to control for option order, as positional bias could affect model performance significantly. The authors do not address this issue.\n\n4. The discussion on the distribution of correct answers and models' \"inferential strategies\" lacks clarity. Terms like \"inferential strategies\" are uncommon in this field and require further elaboration. Additionally, the paragraph addressing language in the discussion section could benefit from better clarity.\n\n5. While the paper highlights issues with current multiple-choice evaluation, it does not propose or develop solutions. A method that mitigates the issue and suggests potential paths forward would make the contribution more impactful, moving beyond identifying a problem to offering ways to address it.\n\n6. Presentation flaws are present, such as incomplete sentences in the limitations section, which disrupt readability.
For instance, the \"Knowledge coherence\" paragraph in the limitations section is unfinished, and the \"Model selection\" paragraph in the same section ends abruptly with \"This selection.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* I was not entirely clear whether the exam-takers (humans and language models) had access to the textbook prior to answering the questions or was the textbook only used for generating QA pairs. Additionally, did each question provide any context at all?\n* It would be useful if the authors could provide more information on the statistical analysis carried out including:\n * How was the standard error of the mean computed? The reported results are for accuracy so what was the mean taken over? \n * Additionally, how was this computed for the human evaluation? The error bars were smaller for the human results which is surprising given that these were based on a smaller sample of questions\n * How was the 2-way ANOVA carried out? What were the 2 factors used?\n* The authors provide some proxy metrics for alignment between model’s answers, such as performance differences, however why did the authors not test answer alignment more directly, using a metric such as Cohen’s kappa? This would allow significantly more insight into how similar performance was between models.\n* I’m not sure how informative figure 2 is. If performance of most models was >0.6 accuracy it’s somewhat given that the correct answers would be skewed towards a majority correct. Using metrics such as Cohen’s kappa or a similar approach would be more informative to understanding model agreement. Additionally, this figure would be easier to understand if the x axes were the same scale.\n* Why did the authors provide no comparison of human answers with model answers? Given that they only used 100 questions as their sample, it’s possible the sample used to test human performance was a harder subset of questions.\n* It would be helpful to see more in-depth analysis of the QA pairs the models got correct and incorrect. For example, were correct answers more likely for specific sub-topics? Were questions based on particular textbook sections more likely to be answered correctly?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors take a novel and interesting approach to assessing the extent to which language models rely on internal knowledge in benchmark evaluations, including expert evaluation of their synthetic data to ensure the validity of the generated questions. The results may have important implications for the field of language model evaluation and certainly warrant further investigation." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper the authors generate a multiple-choice QA dataset from a synthetic textbook based on fictional medical knowledge. The authors find that despite performance in humans being at expected chance levels, all language models tested display performance significantly above-chance. This implies that language-model performance on various benchmarks might be the result of general exam-taking ability, rather than testing actual medical knowledge. If true, this could be an important finding to the field of language model evaluations and would warrant further investigation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The results presented here are only for questions in a single domain, related to questions for a synthetic textbook on only a single fictional topic. It is not clear whether these results would generalise to other domains. Additionally, it is possible that the language models used to generate the synthetic text may have imputed this data with real knowledge, which could partially explain the strong performance for every language model tested." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Evaluating LLMs with fictional medical benchmarks reveals traditional MCQs assess pattern recognition over clinical knowledge, indicating a need for improved evaluation methods." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024multiple,\ntitle={Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xawA8X5dHq},\nnote={under review}\n}" }, "abstract": { "value": "Large Language Models (LLMs) like ChatGPT demonstrate significant potential in the medical field, often evaluated using multiple-choice questions (MCQs) similar to those found on the USMLE. Despite their prevalence in medical education, MCQs have limitations that might be exacerbated when assessing LLMs. To evaluate the effectiveness of MCQs in assessing the performance of LLMs, we developed a fictional medical benchmark focused on a non-existent gland, the Glianorex. This approach allowed us to isolate the knowledge of the LLM from its test-taking abilities. We used GPT-4-Turbo and Claude 3.5 Sonnet to generate two comprehensive textbooks on the Glianorex in both English and French and developed corresponding multiple-choice questions in both languages. We evaluated various open-source, proprietary, and domain-specific LLMs using these questions in a zero-shot setting. The models achieved average scores around 64%, with minor performance differences between larger and smaller models. Performance was slightly higher in English than in French. Fine-tuned medical models showed some improvement over their base versions in English but not in French. The high performance across models suggests that traditional MCQ-based benchmarks may not accurately measure LLMs' clinical knowledge and reasoning abilities, instead highlighting their pattern recognition skills. This study underscores the need for more robust evaluation methods to better assess the true capabilities of LLMs in medical contexts." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language models", "medicine", "benchmark", "evaluation", "clinical knowledge", "multiple choice questions" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7f4069de689d52997213b9d8ffd02b4934f64f94.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/95a8ef51676e40de7866efa441e87c94e9606049.zip" }, "title": { "value": "Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xayT1nn8Mg
Deep Signature: Characterization of Large-Scale Molecular Dynamics
main
Active
Molecular dynamics; representation learning; graph neural network; path signature
applications to physical sciences (physics, chemistry, biology, etc.)
3;5;6
3;2;3
2;2;3
2;3;2
3;3;3
4.666667
2.666667
2.333333
2.333333
3
-0.188982
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Existing equivariant neural networks with geometric inductive biases have been outperformed, both in terms of performance and efficiency, by Transformer-style architectures (ie, something as simple as torch.nn.TransformerEncoder). Was any ablation done that compares Deep Signature with such architectures that \"tokenize\" the trajectory intervals? \n\n2. I'm curious what the learned coarse-grained beads/maps look like for GPCR proteins – is it creating CG beads only based on atoms within a locality? How does this compare to naively taking atoms within some radial ball and considering them a bead? More often than not, these naive CG choices work well in practice, without the need to overcomplicate it with a learnable method. I'd like to see whether this \"learned GCN-based coarse-graining\" is really necessary.\n\n3. How sensitive is the path signature transform to very dynamic conformational changes in a short duration of time? For instance, as I mentioned above, fast-folding proteins undergo significant changes in 3D shape in just short simulations. Can this method capture this dynamics well enough?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Incorporates appropriate symmetries for (temporal) 3D point cloud learning. In particular, the invariance to time reparameterization is key because the underlying MD simulations might be prone to random restarts and miscellaneous artefacts preventing them from being a smooth \"video\" (ie, the MD itself might be erratic with the same state sampled repeatedly).\n\n2. Deep Signature _learns_ the ideal CG beads relevant to the task, bypassing the need to manually remove degrees of freedom (eg: CA-level coarse-graining, as done in existing methods). The use of the signature transform allows for local and global temporal interactions to be captured well, as opposed to learning representations on separate frames. \n\n3. I'm confident this method can be used for good representation learning of trajectory information for tasks beyond the ones mentioned in the paper (eg: interpolation in the latent space of protein conformations, perhaps for ensemble generation).\n\n4. Demonstrates relatively good performance compared to weak and strong temporal graph learning baselines (like GraphLSTM)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Deep Signature, a framework for characterizing dynamics on graphs. 
Deep Signature involves three key components: (1) graph coarsening using a deep spectral clustering module, (2) computation of path signatures to capture global inter-node interactions using iterated integrals over time, and (3) a two-layer MLP for property prediction.\nThe authors tested Deep Signature on three datasets: (1) gene expression dynamics, classifying into degradation or dimerization types, (2) GPCR dynamics, distinguishing between active and inactive states, and (3) EGFR mutant dynamics, predicting drug sensitivity. They conducted an ablation study validating the contribution of each loss component in the model and performed limited comparisons with methods based on static structures, dihedral angles, and GraphLSTM." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper does not clarify the level of coarsening necessary to make path signature computations feasible. For instance, in gene regulatory dynamics, the graph was reduced from 100 nodes to 30, while the EGFR dataset was reduced from approximately 5000 atoms to 50 nodes, but the level of coarsening for the GPCR dataset is unspecified. This raises questions about whether 50 nodes is a computational limit for the method. Additionally, while protein structure can be intuitively coarsened into backbones, sidechains, and motifs, the interpretation for coarsened graphs in gene expression dynamics is unclear. It is uncertain if the coarsened nodes correspond to gene hubs or other biologically meaningful groupings.\n\nThe equation used to model GRN dynamics (Eqn 13 in Section B.1) appears incorrect. Specifically, for dimerization (when f=2), the concentration should be squared under Michaelis-Menten kinetics, rather than simply doubling the decay rate. This could lead to inaccurate modeling of gene expression dynamics and affect the results in this section.\n\nThe comparisons used in the experiments are limited and static, primarily involving the first and last frames (head, tail, and head & tail), which do not capture temporal dynamics. The authors do not benchmark against dynamic approaches that consider time-varying information, such as MDTraj [1], Timewarp [2], and DSR [3]. Including these comparisons would provide a more rigorous evaluation of Deep Signature's effectiveness relative to established tools for molecular trajectory analysis.\n\nThe authors claim that the coarsened dynamics in Fig 6c follow the same trend as the original dynamics, yet this similarity is not quantified. Providing a quantitative measure, such as correlation coefficients between the original and coarsened dynamics at various coarsening levels, would better support this claim. Additionally, the paper would benefit from comparisons of the authors' coarsening strategy against other deterministic and learnable methods for protein graph coarsening to demonstrate its effectiveness and fidelity in preserving dynamics.\n\nThe description of the cross-validation process and test set creation is confusing. The authors mention “for each running, we evaluate the prediction accuracy of our method on an independent unseen test set and report the averaged results,” but it is not clear how this set is constructed. If results are averaged, please report the standard deviation in all the tables.\n[1] MDTraj: A Modern Open Library for the Analysis of Molecular Dynamics Trajectories.
Biophysical Journal, 2015.\n[2] Timewarp: Transferable Acceleration of Molecular Dynamics by Learning Time-Coarsened Dynamics. NeurIPS 2023\n[3] DSR: Dynamical Surface Representation as Implicit Neural Networks for Protein. NeurIPS 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I think the classification task using evolution trajectories is not a common setting. Could you explain why this problem setting is reasonable compared to the one-frame classification setting? Do we really need the complete trajectory to do classification?\n\n2. I do not understand your setting in the EGFR dynamics experiment since I'm not an expert on this domain. Could you please explain why the trajectory can be labeled according to its sensitivity towards the drug? I think the sensitivity should be a number rather than a 0/1 label." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The presentation of the method is very clear and easy to understand.\n2. The motivation for applying the signature transform is reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the paper, the authors introduce the Deep Signature framework to capture complex dynamics using evolution trajectories. The Deep Signature framework includes spectral clustering, a signature transform and a classifier. Additionally, the authors show that the framework satisfies the desired symmetry constraints. Experiments on gene regulatory dynamics, EGFR mutation dynamics and GPCR dynamics exhibit the empirical performance of the framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of experiments on large datasets. As the paper claims, the Deep Signature framework can capture large-scale complex dynamics, so experiments on datasets with both a large amount of data and a large system size are necessary. However, the paper only includes experiments on datasets with a large system size.\n\n2. I think the baselines in this paper are too weak. For example, the authors should compare against a strong baseline with a graph transformer architecture [1] based on the first/last frame of the trajectory. Comparisons against such strong baselines would strengthen the empirical case for the framework.\n\n[1] Ying, Chengxuan, et al. \"Do transformers really perform badly for graph representation?.\" Advances in neural information processing systems 34 (2021): 28877-28888."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How were the hyperparameters chosen, such as the loss coefficient parameters $\\lambda_i$, the number of nodes in deep spectral clustering model, and time interval $[r_i, r_{i+1}]$ in path signature transform?\n2. When visualizing critical pathways and interatomic interactions on the EGFR dynamics, why were three atoms identified specifically? Could more than three be selected?\n3. Have the authors considered comparing their approach with advanced time series classification algorithms?\n4. How does the computational efficiency of the proposed method compare to baseline methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Authors develop an end-to-end framework to characterize interatomic interactions and dynamics of large-scale molecules, it shows improvement on three benchmarks and provides interpretability. \n2. The size of the system is reduced by deep spectral clustering module without any expert knowledge. \n3. The framework's desirable properties are supported by theoretical analysis." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a framework, Deep Signature, designed to analyze the dynamics and interatomic interactions of large-scale molecular system. The method uses a deep spectral clustering model to capture coarse grained dynamics, a path signature transform module to characterize interatomic interactions through iterated integrals, and a classifier for property prediction. Theoretically, Deep Signature is shown to maintain desirable symmetry properties. The method demonstrates improved accuracy across three benchmarks and demonstrates interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors compare their approach to baseline methods, but a more comprehensive comparison with some SOTA baselines would provide a more robust evaluation. \n2. The manuscript would benefit from a comparison of deep spectral clustering module with existing coarse graining methods. \n3. An analysis of the model’s sensitivity to hyperparameters would provide insights into its robustness and reproducibility." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024deep,\ntitle={Deep Signature: Characterization of Large-Scale Molecular Dynamics},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xayT1nn8Mg},\nnote={under review}\n}" }, "abstract": { "value": "Understanding protein dynamics are essential for deciphering protein functional mechanisms and developing molecular therapies. 
However, the complex high-dimensional dynamics and interatomic interactions of biological processes pose significant challenge for existing computational techniques. In this paper, we approach this problem for the first time by introducing Deep Signature, a novel computationally tractable framework that characterizes complex dynamics and interatomic interactions based on their evolving trajectories. Specifically, our approach incorporates soft spectral clustering that locally aggregates cooperative dynamics to reduce the size of the system, as well as signature transform that collects iterated integrals to provide a global characterization of the non-smooth interactive dynamics. Theoretical analysis demonstrates that Deep Signature exhibits several desirable properties, including invariance to translation, near invariance to rotation, equivariance to permutation of atomic coordinates, and invariance under time reparameterization. Furthermore, experimental results on three benchmarks of biological processes verify that our approach can achieve superior performance compared to baseline methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Molecular dynamics; representation learning; graph neural network; path signature" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a94e9f9ca2b6d60c25dcea316010ae4721139a73.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Deep Signature: Characterization of Large-Scale Molecular Dynamics" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
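Several of the reviews above hinge on what the path signature transform computes and on its claimed invariance to time reparameterization. The sketch below is illustrative only and is not the authors' implementation: it computes the depth-2 signature of a discretely sampled, piecewise-linear path, checks Chen's identity as a sanity test, and shows that duplicating sample points (a crude reparameterization that does not change the traced-out path) leaves the signature unchanged. The random walk is a stand-in for a coarse-grained trajectory.

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path given as a (T, d) array."""
    increments = np.diff(path, axis=0)                  # segment increments, (T-1, d)
    level1 = increments.sum(axis=0)                     # total displacement per channel
    prior = np.cumsum(increments, axis=0) - increments  # increments accumulated before each segment
    # Iterated integral: cross terms for s < t plus the within-segment half-square term.
    level2 = prior.T @ increments + 0.5 * increments.T @ increments
    return level1, level2

rng = np.random.default_rng(0)
path = rng.normal(size=(200, 3)).cumsum(axis=0)         # hypothetical 3-channel trajectory

s1, s2 = signature_depth2(path)
# Sanity check (Chen / shuffle identity): S^(i,j) + S^(j,i) == S^(i) * S^(j).
assert np.allclose(s2 + s2.T, np.outer(s1, s1))
# Time-reparameterization check: repeating samples traces the same path, same signature.
s1_rep, s2_rep = signature_depth2(np.repeat(path, 2, axis=0))
assert np.allclose(s1, s1_rep) and np.allclose(s2, s2_rep)
```

Deeper signature levels grow exponentially with the number of channels, which is one way to read the reviewers' questions about how much coarsening is needed before the transform becomes feasible.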
xbW6EGve6a
Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing and Tensor Operation Approximation
main
Active
Federated Learning;Resource-Constrained devices;Computation and Communication Overheads;Layer Freezing;Tensor Operation Approximation
optimization
3;5;6
4;3;3
2;2;2
2;2;2
3;3;3
4.666667
3.333333
2
2
3
-0.944911
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Tensor Operation Approximation is used to reduce the communication overhead of FL.\nVarious test results across small datasets are demonstrated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The edge applications leveraging FL are highly constrained by resources, in terms of computation and communication. Existing efficient FL solutions either compromise accuracy or ignore memory usage. They introduce the ordered-layer-freezing and Tensor Operation Approximation for reducing memory footprint & communication overhead." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "layerfeezing technique is not new. Novelty seems to be marginal" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The overall contributions seem marginal and further evaluations are necessary to uncover the true potential and limitations of the proposed technique. Besides addressing the above-raised comments in the weakness section, following questions also need particular attention.\n\n1.\tWhat general guidelines or thresholds can be provided for setting the scaling factor s in TOA, beyond using a grid search, to balance accuracy and communication costs effectively? or maybe is there a way to determine the optimal value of s without using grid search?\n2.\tThe TOA technique in FedOLF uses a fixed scaling factor across all clients, which might not fit well for devices with different processing power (different hardware). Therefore, it is important to explore a more flexible, client-specific scaling factor to improve FedOLF's efficiency on diverse IoT devices?\n3.\tTo emulate system heterogeneity, why only the case of uniform clusters is considered? \n4.\tIn the context of Section 3.5, how many clients with full network training capacity are necessary for this technique to offer reasonable (and fast) convergence? Also, can this technique be used when most of the devices are significantly memory constrained. If yes, what are the limits? 
Also, how does the proposed technique compare with the state-of-the-art techniques under such scenarios? \n5.\tHow well does the technique perform in cases with a skewed distribution of classes (and general categories) across devices? Are there any data-level assumptions that have to be valid for the technique to offer reasonable results? \n6.\tWhat is the impact of the proposed technique in cases with a significant computational power imbalance between devices? What is the impact of such cases on the overall training time?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tApplying TOA to frozen layers preserves model accuracy while reducing the communication cost.\n\n2.\tThe experimental results of FedOLF on different iid and non-iid datasets and architectures demonstrate its adaptability and advantage in both energy and accuracy over baseline methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The proposed approach, FedOLF, combines the layer freezing method and Tensor Operation Approximation (TOA) to reduce the memory and computation requirements of the training process in federated learning settings for memory-constrained IoT devices/systems. Experimental results show better overall accuracy with lower memory and energy usage compared to prior works, for different datasets like EMNIST, CIFAR-10, CIFAR-100, and CINIC-10." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tThe paper offers limited novelty, as FedOLF's approach combines previously explored techniques: partial neural network freezing and TOA. The combination of existing techniques, although seemingly useful, requires more thorough evaluation to distinguish the scenarios and system configurations in which it outperforms the state-of-the-art techniques and the scenarios and system configurations in which it offers sub-optimal results. Specifically, how well does the technique perform compared to the state of the art for system configurations in which most of the clients are resource-constrained (in terms of both memory and compute)? Also, the paper should demonstrate the performance of the proposed technique for non-uniform clusters of devices. Additionally, the proposed technique should be evaluated for cases with a skewed distribution of classes. All these analyses would provide a detailed performance comparison with the state-of-the-art techniques and a better understanding of the significance of the novel contributions presented in the paper. \n \n2.\tFedOLF assumes that \"low-level layers across various local models usually have higher degrees of Centered Kernel Alignment (CKA) similarity,\" allowing frozen layers to generalize across clients without retraining. While Table 1 shows strong accuracy compared to the benchmark models in non-IID settings, supporting FedOLF's effectiveness, it does not directly validate this assumption of universal feature similarity. In real-world scenarios with highly diverse data (e.g., medical vs. satellite images), this may not hold true, and the frozen layers will become less useful for some clients. To validate this assumption, the experimentation should include more types of non-IID characteristics, such as label distribution skew, feature distribution skew, and quantity skew across clients. 
Additionally, a theoretical analysis of the conditions under which CKA similarity holds or fails, as well as experiments validating this similarity across different layers and datasets, would be useful. Moreover, it could be interesting to perform a more in-depth analysis of the performance of the individual layers when considering the heterogeneity of the data; it may offer more detailed insights into the practical limits of the proposed approach.\n \n3.\tThe number of frozen layers is determined just based on the memory footprint of different options, and the computational requirements are not considered. The authors should highlight how this is scalable to systems composed of devices with a diverse set of computational capabilities. Moreover, is considering just the memory footprint for real systems sufficient? If not, what modifications are required in the proposed technique, and how can incorporating computational requirements into the layer-freezing decision improve the performance of the proposed technique for practical systems?\n\n4.\tBy freezing specific low-level layers, FedOLF implicitly assumes that the representation error from these frozen layers will diminish as training progresses. While the paper claims that these errors vanish empirically, a detailed analysis of error propagation through frozen layers seems missing. To address this, a layer-by-layer analysis of error propagation throughout training, using visualizations like heat maps to track error dynamics, is required. Additionally, experiments that vary the number and position of frozen layers could provide crucial insights into how these changes impact error propagation and overall model performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It is not clear to me how the communication energy consumption is captured in this experiment. How do the authors provide their relative weights? \n\nRegarding Table 1: there is no evidence that the final achieved accuracy would be better in the proposed system compared to the rest. Is it the target number of iterations that gives this result? How does the performance of the above algorithms change with the iteration number T?\n\nOne would expect the benefits of the method to become more profound as the number of layers in the network increases. However, Table 1 shows that this is not the case and the gap between the proposed method and STL closes (even for T=500). Can the authors comment on that?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The idea (and proof) of bounding the approximation of the true gradient through layer freezing under certain conditions is important and can be used for the design of topologies that operate under such a regime. 
\n\nThe experimental results show significant performance gains in the specific benchmark tests." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the FedOLF framework, which aims to improve the efficiency of FL for resource-constrained devices. The key observation behind the work is that by allowing the update of the layers of a network in an ordered manner, energy and memory costs can be reduced without significant sacrifices in the performance of the system. The authors move further and enhance the efficiency of their system by employing Tensor Operation Approximation, demonstrating better results in their setting compared to traditional quantization approaches. \n\nOne of the main claimed contributions of the work, which is also the basis of the method, is the vanishing representation error. That is, by imposing an ordering of the layers that are frozen in any update, the authors demonstrate a bound on the representation error of the active layers as a function of the last frozen layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The idea of tackling the energy and memory consumption in FL by freezing a number of layers is not new, and many works have already utilised this approach. The claimed novel contribution is the ordering strategy for the updates. Given that it is well known that higher-level layers are more important for specific tasks compared to the early layers, whose job is to extract more generic features, such an approach is not so surprising.\n\nThe memory footprint can also be tackled by re-evaluating part of the results in an on-demand manner. As such, one can trade off training time against memory footprint. Given that for many devices the computational cost (in terms of energy) is less than the communication cost, it is unclear how such an approach would compare, in terms of overall energy cost versus quality of training, with the proposed method, which chooses to freeze some of the layers to address the memory requirements.\n\nA key observation/assumption that underpins the theoretical working of the method is that the upper bound B is likely smaller than one. However, this is only based on a few networks and datasets, and I found it difficult to justify the above statement. \n\nIn the experimental section, it was not clear to me what the assumption is about the quality of the training of the original network. For example, I would expect ordered layer freezing to perform well if the original model was not very far (and especially the initial layers) from the true one, whereas if you give the system a network with random parameters (an extreme case), then the initial layers would not be very informative and any imposed freezing on them would not lead to meaningful training for the rest of the layers. The authors should comment on that." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Federated Learning with Ordered Layer Freezing (FedOLF) as a solution to mitigate the energy consumption and memory footprint of Federated Learning in resource-constrained devices while maintaining accuracy."
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024energy,\ntitle={Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing and Tensor Operation Approximation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xbW6EGve6a},\nnote={under review}\n}" }, "abstract": { "value": "The effectiveness of Federated Learning (FL) in the context of the Internet of Things (IoT) is hindered by the resource constraints of IoT devices, such as limited computing capability, memory space and bandwidth support. These constraints create significant computation and communication bottlenecks for training and transmitting deep neural networks. Various FL frameworks have been proposed to reduce computation and communication overheads through dropout or layer freezing. However, these approaches often sacrifice accuracy or neglect memory constraints. In this work, we introduce Federated Learning with Ordered Layer Freezing (FedOLF) to improve energy efficiency and reduce memory footprint while maintaining accuracy. Additionally, we employ the Tensor Operation Approximation technique to reduce the communication (and accordingly energy) cost, which can better preserve accuracy compared to traditional quantization methods. Experimental results demonstrate that FedOLF achieves higher accuracy and energy efficiency as well as lower memory footprint across EMNIST, CIFAR-10, CIFAR-100, and CINIC-10 benchmarks compared to existing methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Federated Learning", "Resource-Constrained devices", "Computation and Communication Overheads", "Layer Freezing", "Tensor Operation Approximation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d7b62618a52ea46c2a11f8da40d9ff3911353474.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/91d8c84d778fec97c540847bd236046b4a828899.zip" }, "title": { "value": "Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing and Tensor Operation Approximation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xbXydoejvY
CWPS: Efficient Channel-Wise Parameter Sharing for Knowledge Transfer
main
Active
Transfer Learning;Multi-Domain Learning;Multi-Task Learning
transfer learning, meta learning, and lifelong learning
3;5;5;6
4;2;4;3
2;2;2;3
2;2;2;3
2;2;2;2
4.75
3.25
2.25
2.25
2
-0.4842
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Can the authors explain more about parameter count?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- CWPS demonstrates high performance on various tasks and scenarios.\n- The proposed framework is simple but effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Current methods for knowledge transfer such as fine-tuning or weight sharing only offer coarse-grained sharing solutions and do not search for optimal parameters for sharing. This paper introduces Channel-Wise Parameter Sharing (CWPS) for efficient knowledge transfer. By refining the granularity of shared parameters from the layer level to the neuron level, they achieve fine-grained parameter sharing to address the coarse-grained problem. The authors also propose a simple method to search for suitable parameters for sharing. The proposed method achieves state-of-the-art results on several benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experiment is mainly done using ResNet-50, lacking the experiment using other models such as DenseNet and EfficientNet, or even transformer-based or MLP-based models. \n- Lack of experiments with different depths of the model" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Would it be possible to visualize which channels are masked at each layer and whether there is a discernible pattern?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.Innovative Approach: CWPS offers a new perspective on knowledge transfer by enabling fine-grained parameter sharing, which is a significant departure from traditional coarse-grained methods.\n\n2.Enhanced Efficiency: The method is designed to be efficient in terms of parameter sharing, which can potentially lead to better performance and faster training times.\n\n3.Comprehensive and Plug-and-Play: CWPS is presented as a comprehensive solution that can be easily integrated into existing networks without the need for extensive modifications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Channel-Wise Parameter Sharing (CWPS), a novel approach to knowledge transfer that enhances the efficiency and effectiveness of sharing parameters across different tasks or new data. Traditional methods like fine-tuning and layer-wise parameter sharing often provide coarse-grained solutions, which struggle to effectively identify shared parameters, thus limiting performance and efficiency. CWPS addresses these limitations by introducing fine-grained parameter sharing, achieved by refining the granularity from layers to neurons, allowing for the explicit composition of model neurons and utilization of knowledge from previous tasks. The paper also presents an effective search strategy to minimize computational costs and simplify the determination of shared weights. CWPS is designed to be comprehensive, plug-and-play, and has strong composability and generalization abilities, making it theoretically applicable to any network with linear and convolution layers. The method is evaluated across various datasets in incremental learning and multi-task learning scenarios, demonstrating superior precision-to-parameter ratio performance compared to existing methods, regardless of the backbone used." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Since the supplementary materials of the paper include experiments with the Transformer model, it seems inconsistent that the main text focuses on convolution as the primary subject of discussion, rather than the Transformer. Moreover, the performance of Vision Transformer (ViT) in Table 5 of the appendix shows suboptimal results, which somewhat undermines the argument that the algorithm can generalize across different backbones effectively.\n\n2.Why were experiments not conducted on datasets of a scale comparable to ImageNet in Tables 1 and 2? Observing the right side of Figure 4, it appears that all datasets have some relation to ImageNet. Therefore, wouldn't conducting a downstream task on a dataset of similar scale to ImageNet, but not necessarily entirely relevant, further demonstrate that the method's effectiveness is not solely due to knowledge transfer from ImageNet, but rather that the algorithmic design itself contributes significantly to the results?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper addresses an important problem in transfer learning. With more and more large foundation models available to the public, these models contribute a very large pool of knowledge learned through vast volume of data. The techniques investigated in this paper can be greatly beneficial to tapping into the potential of such models.\n\n2. The proposed method seems fairly universal. Even though the experiments in the paper are mostly conducted on CNNs, the method can be potentially applied to state-of-the-art transformer networks as well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a method for knowledge sharing to tackle the problem of transfer learning. The method uses existing models as a knowledge pool, from which a layer-wise composition is constructed tailored to the new task to be solved. This composition is referred to as the parent model. Together with the parent model, a child model of the same architecture is trained. The two models are combined channel-wise using learnable masks. Experiments on multiple benchmarks are conducted to demonstrate the effectiveness of the approach." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The training pipeline seems unnecessarily convoluted and impractical. If I understand this correctly, given a new task, a child model first needs to be trained for a short period of time. This child model is then used as reference to construct the parent model. Afterwards, the two models are combined with learnable masks to tackle the aforementioned new task. Compared to standard fine-tuning, the advantage of the proposed method appears to be its parameter efficiency. However, the parameter count itself is not the most faithful reflection of the complexity of the model. Stats such as memory consumption and training time are more important, and accurately reflect the Flops needed to train the model. Given that the performance of the proposed method is generally worse than fine-tuning, the contribution of the work is less than convincing.\n\n2. A major motivation of transfer learning is to exploit the knowledge learned on large volume of data and apply it to domain where data may be scarce. In this paper, however, it appears that all data of the target task is assumed to be available. This significantly reduces the practical value of the work. On the other hand, if the effectiveness of the proposed method can be demonstrated in a few-shot setting, where only a small number of labelled examples are available, I think this can make the paper much stronger. 
For a point of reference, I believe a recent work by Zhang et al. (2024) shows that a learnable linear combination of models in the weight space can achieve very strong performance in few-shot learning and also test-time adaptation. In the latter case, no labelled data is available. I would encourage the authors to compared against this method.\n\n3. The writing of the paper is not the best, with numerous typos. For instance, in L201, I think the authors meant \"existing\" instead of \"exiting\".\n\nRefs:\n- Zhang et al., Knowledge Composition using Task Vectors with Learned Anisotropic Scaling. NeurIPS 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. There is a lack of explanation regarding some baselines, as well as the approach to parameter sharing. Additionally, there are several prompt-like methods, such as DualPrompt." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. CWPS has lower overhead for additional parameters compared to previous methods.\n2. By using CPMS to determine the relationships between different tasks, it can effectively quantify these connections." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on multi-task learning and multi-domain learning. Existing methods have not taken into account the characteristics of neurons, so a channel-wise parameter sharing method, CWPS, is proposed, achieving better results than the baseline." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper argues that sharing at the neuron or channel level is important, but I think this requires more justification, as I didn’t see sufficient evidence for it in the paper. So, it doesn’t seem very different from other parameter-sharing methods, and the motivation behind the approach doesn’t seem very clear to me.\n2. The overall design of the method appears relatively simple, and the comparison results lack novelty. More experiments are needed to uncover the core reasons behind the improvements this method brings.\n3. The writing is poor." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel fine-grained parameter-sharing method for efficient and comprehensive knowledge transfer, addressing issues with current coarse-grained sharing solutions." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024cwps,\ntitle={{CWPS}: Efficient Channel-Wise Parameter Sharing for Knowledge Transfer},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xbXydoejvY},\nnote={under review}\n}" }, "abstract": { "value": "Knowledge transfer aims to apply existing knowledge to different tasks or new data, and it has extensive applications in multi-domain and multi-task learning.\n The key to this task is quickly identifying a fine-grained object for knowledge sharing and efficiently transferring knowledge.\n Current methods, such as fine-tuning, layer-wise parameter sharing, and task-specific adapters, only offer coarse-grained sharing solutions and struggle to effectively search for shared parameters, thus hindering the performance and efficiency of knowledge transfer.\n To address these issues, we propose Channel-Wise Parameter Sharing (CWPS), a novel fine-grained parameter-sharing method for Knowledge Transfer, which is efficient for parameter sharing, comprehensive, and plug-and-play.\n For the coarse-grained problem, we first achieve fine-grained parameter sharing by refining the granularity of shared parameters from the level of layers to the level of neurons. The knowledge learned from previous tasks can be utilized through the explicit composition of the model neurons.\n Besides, we promote an effective search strategy to minimize computational costs, simplifying the process of determining shared weights.\n In addition, our CWPS has strong composability and generalization ability, which theoretically can be applied to any network consisting of linear and convolution layers.\n We introduce several datasets in both incremental learning and multi-task learning scenarios. Our method has achieved state-of-the-art precision-to-parameter ratio performance with various backbones, demonstrating its efficiency and versatility." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Transfer Learning", "Multi-Domain Learning", "Multi-Task Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ddaa2d469c90af9ae8aae975b2763fd5c724eed5.pdf" }, "presentation": null, "primary_area": { "value": "transfer learning, meta learning, and lifelong learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3fc0b70f1be0b45b1823ac00fa9d323759590456.zip" }, "title": { "value": "CWPS: Efficient Channel-Wise Parameter Sharing for Knowledge Transfer" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xcHIiZr3DT
Vision-Based Pseudo-Tactile Information Extraction and Localization for Dexterous Grasping
main
Active
Pseudo-Tactile Information;Dexterous Grasping;Vision-Based Perception;Robotic Localization
applications to robotics, autonomy, planning
1;3;3;3
4;3;3;4
1;2;2;1
1;1;1;1
1;2;2;2
2.5
3.5
1.5
1
1.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In the experiment, the motion of the real robotic hand is recorded and then replayed in the simulation to show the contact locations can accurately match. I am not clear the motivation here.\n2. In the texture feature extraction framework, how the conditions to classify points as texture features work?\n3. In pseudo-tactile data integration part, it is shown that the system is able to simulate tactile feedback without the need for real sensors. How the feedback works? Is the pseudo-tactile information used for some closed-loop controller?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This work introduces an approach to extract pseudo-tactile information from vision and contact locations for robotic grasping tasks. The topic is interesting, the pipeline is presented with details, and the experiment results show the effectiveness of the approach with high accuracy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a framework to acquire point cloud representations of objects and simulate the contact locations when using a dexterous robotic hand for grasping tasks. The point cloud data are first processed to filter the background and texture feature points are determined afterwards. The real grasping data are derived from the hardware and can be reproduced in a simulation environment with the corresponding contact locations and texture features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The overall contribution is marginal. The vision-based \"pseudo tactile\" features are derived using some common tools for point cloud data processing. The simulation for finger tip localization is from replicating and transforming the real motion data into the simulated environment. It is also claimed that the combination of this vision-based information and the fingertip contact points can enhance tactile feedback reliability in robotic grasping. However, this is not clear. The experiment does not show how the robotic grasping is improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- What is the purpose of using simulation?\n- How is the tactile information from the point cloud used in contact localization?\n- How is ground truth obtained for contact localization? What is the precision/resolution of the ground truth measurement?\n- What does policy finetuning have to do with this paper?\n- How are the texture feature points used once they're determined by thresholding?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "Open-sourced dataset of point cloud images is made available." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a dataset of point cloud images of a wide variety of everyday objects. A dexterous hand model is built in an Isaac Sim simulator for real-time contact localization by replicating the real-life set up in simulation and using measured joint angles." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Details are so unclear it is difficult to fully understand the paper. For instance, \n- Paper repeatedly talks about the \"Y component of the normal vector\" without clearly defining a coordinate frame. Authors mention employing \"policy fine-tuning techniques\" on page 7 without ever mentioning a policy up until this point.\n- The role of simulation in the paper is not clear. The paper mentions \"This real-time linkage of each joint’s degrees of freedom with actual dexterous hand movements and the simulation platform allowed us to record the spatial coordinates of each grasping contact point in the simulation accurately.\" This sounds like a simple forward kinematics problem that would only require a mathematical model of the robot – not a full-fledged simulation.\n- Paper claims that intel realsense has “sub-mm accuracy”. This is not supported by documentation from the manufacturer: https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance?_ga=2.110331777.520332705.1730517789-101245430.1730517789#section-verify-performance-regularly-on-a-flat-wall-or-target\n- Section 4.2 claims to assess localization precision. It is unclear what the ground truth is, how this is being measured and what measurement is being compared against this ground truth. Referenced Table 4 is difficult to understand, has no mention of errors or comparative ground truth. \n- An RMSE error is reported with no clarity on what quantities are being compared.\n- It is also unclear how related work Section 2.3 is connected to this work. Pseudo-haptics is associated with giving humans touch sensory feedback, whereas “pseudo-tactile” in this paper is related to analyzing the surface tactile properties of objects.\n\nMinor comment:\nPaper is very poorly formatted. Main result tables are placed in the appendix and are difficult to understand. Results in Tables 4,5 and 6 have poor choice of units (meters when dealing with textures that are likely sub-cm scale), and numbers are represented with arbitrary precision with no regard for the precision and error rates of the depth measurement device, ie. Intel Realsense cameras. 
Bullet points in the results section seem to have headers that have the exact same font formatting as the rest of the text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The definitions of \"grayscale variance\" and \"Y-component\" are missing; the authors could define them properly when they first appear.\n\nTable 1 and equations (1)-(4) describe the transformation from the camera frame to the world frame to extract the point cloud, which seem to be fundamental computer vision techniques. I would suggest moving them to the Appendix since they are not novel in this work." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work has clear writing and good structure, which makes it easy to follow and understand. Figures are well-made and informative. It is well motivated to address hard-to-acquire tactile perception by using vision for dexterous robotic hands." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presented an approach to acquire object surface information including textures and geometries, and proposed a simulation-involved approach to locate the fingertips of a robotic hand for grasping. The results demonstrated object grasping with different numbers of fingers and object surface feature extraction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not think this work has a solid contribution or concrete experimental results. The proposed method simply combines existing techniques such as extracting a point cloud from an RGBD camera and using a simulator to simulate grasping. Instead of quasi-static grasping, I would encourage the authors to extend it to more dynamic manipulation tasks and explore whether the extracted object features can be leveraged for these complicated tasks." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Apart from the weaknesses I have mentioned, I still have some questions for the authors to respond to.\n1. 
I noticed that the authors evaluated the localization results by \"comparing simulated 3D fingertip coordinates with real-world measurements\". How are accurate real-world contact positions acquired, with 3D point cloud input or another approach?\n2. The authors use an Intel RealSense camera to capture depth images and convert them into point clouds. As far as I know, depth images generated by the D435 can be very noisy, so the estimation of surface normal vectors can be very biased. How do the authors deal with this problem?\n3. The authors mentioned that objects made of glass are included in the dataset, but the depth images generated by the RealSense camera for transparent objects are of very poor quality. How can the authors obtain accurate point cloud information under these circumstances?\n4. In Sec. 3.3, the authors mentioned \"We also employ policy fine-tuning techniques, using a small amount of real-world grasping attempt data to fine-tune the model\", but since the method is not deep-learning based, how is the fine-tuning stage conducted in the pipeline?\n\nRight now, I'm inclined to reject this paper. However, if the authors are willing to answer my questions well, I'll consider raising my score." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper addresses the problem of obtaining tactile information during grasping based only on vision perception, and provides a clear method for object surface texture extraction with 3D point cloud input. The strengths are listed below.\n1. The authors provide an abundant and clear explanation of the method, and present a simple yet effective approach for point cloud preprocessing and feature extraction.\n2. The authors give a thorough presentation of the experimental setup, and conduct real-world experiments on a dexterous hand for validation.\n3. The results of the experiments seem very good, indicating the effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the challenges of acquiring haptic perception during grasping by a mechanical dexterous hand and proposes solutions. The main tasks of the research include the acquisition of “pseudo-tactile” information about everyday objects through vision and the construction of a dexterous hand (RH8D) model in Isaac Sim for real-time fingertip contact localization. The study establishes a scientific link between simulated 3D coordinates, actual 3D coordinates, and pseudo-tactile information derived from the point cloud, which is quantified by normal vector and grayscale variance analysis. Experimental results show that the method is able to clearly extract the surface texture of an object, accurately locate the fingertip contact point in real time, and provide haptic information at the contact point." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I recognize some of the article's contributions, I still have the following concerns.\n1. From my point of view, this paper somewhat lacks novelty, because some parts of the method chapter are biased towards engineering practice rather than innovation; point cloud preprocessing, camera coordinate system transformations, and other such work are actually common in robotics and cannot be listed as points of innovation. 
Meanwhile, the texture extraction method in that section largely overlaps with [1].\n2. While generating pseudo-haptic sensing is one of the important contributions of the article, I did not see the authors measure how good the quality of the generated haptic signals is, either quantitatively or qualitatively.\n3. The article's experiments still seem inadequate to me and lack comparison with previous work. For the contact position localization part, are there any previous baselines that can realize this? For example, 3D point cloud keypoint prediction baselines, etc. Meanwhile, the authors did not conduct ablation studies on the proposed method, such as the effect of different choices of KDTree radius, normal threshold, etc.\n\n[1] Budiyanta, N. E., Yuniarno, E. M., & Purnomo, M. H. (2021, December). Human point cloud data segmentation based on normal vector estimation using PCA-SVD approaches for elderly activity daily living detection. In TENCON 2021-2021 IEEE Region 10 Conference (TENCON) (pp. 632-636). IEEE." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "This study extracts pseudo-tactile information from everyday objects using vision, enabling real-time localization of fingertip contact points and their corresponding pseudo-tactile 3D point cloud information in robotic dexterous hand grasping." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024visionbased,\ntitle={Vision-Based Pseudo-Tactile Information Extraction and Localization for Dexterous Grasping},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xcHIiZr3DT},\nnote={under review}\n}" }, "abstract": { "value": "This study addresses the challenges of tactile perception in robotic dexterous hand grasping by focusing on two main tasks: 1) Acquiring tactile information from everyday objects using vision, termed \"pseudo-tactile\" information, and 2) Building a Dexterous Hand (RH8D) model in Isaac Sim for real-time fingertip contact localization. Utilizing Isaac Sim enables safe, cost-effective experimentation and high-precision simulations that facilitate data collection for model validation. The research establishes a scientific connection between simulated 3D coordinates, actual 3D coordinates, and pseudo-tactile information derived from point clouds, quantified through normal vectors and grayscale variance analysis. Results demonstrate the ability to extract clear object surface textures, accurately locate fingertip contact points in real-time (with precision up to $0.001 m$), and provide tactile information at contact points. This framework enhances robotic grasping capabilities and offers low-cost sensory data. The source code and dataset are publicly available now." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Pseudo-Tactile Information", "Dexterous Grasping", "Vision-Based Perception", "Robotic Localization" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a45afd60172c0e0d8c40b02c5471fa532026c1ef.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/716d6fb35281c2cbea867edbdf570a8204fa4c45.zip" }, "title": { "value": "Vision-Based Pseudo-Tactile Information Extraction and Localization for Dexterous Grasping" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xcPN6Or88c
ImputeINR: Enhancing Time Series Imputation with Adaptive Group-based Implicit Neural Representations
main
Active
time series imputation;implicit neural representations
learning on time series and dynamical systems
3;3;5;6
4;5;4;3
1;2;3;3
2;2;3;3
2;3;3;3
4.25
4
2.25
2.5
2.75
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper is well-written, the ideas are easy to follow, and the architecture is well described.\n* The idea of using INRs for time series, and especially for imputation tasks, is a timely and important topic for the community.\n* The results of the experiments are convincing, especially when the missing rate is high." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This article proposes a new model for the problem of time-series imputation that uses Implicit Neural Representations (INR) for time-series modeling. The model employs a convolutional network to extract multi-scale features from the time series, followed by a transformer encoder. The time series are modeled by an INR decomposed into three sub-functions modeling the trend, seasonality, and residuals. The parameterization of the INR is performed based on the output of the transformer. To facilitate the learning of the residual function, clustering is performed on the different variables of the time series to dedicate a specific MLP to each cluster. Experiments on 7 datasets are conducted, comparing the proposed approach to various state-of-the-art algorithms. An ablation study is also carried out to demonstrate the usefulness of each component." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The greatest weakness of the paper, in my opinion, lies in the experimental section and the comparison to existing approaches. There are at least two other papers using INRs for time-series imputation: [1], which is also set in the context of extremely missing observed data (95%), and [2], which uses trend/seasonality/residual decomposition and is also set in the context of a high missing rate. The paper does not cite or compare to these two approaches. Given the similarity of ideas and the relatively few papers on the subject, it seems necessary for the proposed model to be compared to these two approaches.\n\nA minor weakness is the section on clustering, which seems questionable to me. The assumption is that when variables are related, they remain related over time – as far as I understand, the clustering is done on the entire series. This clustering is only used in the residual part to facilitate learning. I wonder if the same result could not be achieved with better regularization of the MLP network used.\n\n[1] Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations, Le Naour et al., TMLR 2024\n[2] HyperTime: Implicit Neural Representation for Time Series, E. Fons, A. Sztrajman, Y. El-Laham, A. Iosifidis, S. 
Vyetrenko, NeurIPS 2022 SyntheticData4ML" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As seen in weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-written.\n2. The targeted problem is important." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a solution to the time series imputation problem, particularly for datasets with high proportions of missing observed values. The approach leverages Implicit Neural Representations (INR), which are known for their ability to model continuous functions, potentially enhancing the accuracy of missing data interpolation. The authors introduce three distinct functions to represent multivariate discrete datasets, corresponding to the trend, seasonal, and residual components. Additionally, a clustering module is incorporated to group channels with similar distributions, further refining the handling of inter-variable relationships and improving imputation quality. Experimental validation demonstrates the method's superior performance compared to existing techniques, particularly in extreme missing data scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The word \"novel\" appears multiple times throughout the paper; however, techniques such as INR, clustering, and time series decomposition (including trend, seasonal, and residual components) are well-established and widely utilized in existing research. Additionally, the approach of learning parameters to calculate imputation results through mathematical methods has been explored in prior studies[1]. Although the combination of these elements may provide certain technical contributions, the work lacks a distinctive level of innovation to set it apart from previous works.\n[1] Liu, Shuai, et al. \"Multivariate time-series imputation with disentangled temporal representations.\" The Eleventh international conference on learning representations. 2023.\n\n2. In Equations 9 and 10, the authors use fixed formulas to represent parts of the INR function (trend and seasonal components). While this choice enhances the interpretability of ImputeINR, it also reduces its ability to handle various complex datasets. Could the authors explain why they used these two fixed functions to represent the trend and seasonal components of time series data? Detailed insights will be preferred.\n\n3. The average experimental results obtained by ImputeINR are quite impressive; however, this performance is primarily evident in datasets such as BAQ and IAQ, while the improvements observed on widely used datasets like ETT and Weather are quite modest. 
Does this indicate that ImputeINR has stringent requirements concerning dataset distribution? The authors are encouraged to clarify which types of datasets are most appropriate for imputation with ImputeINR.\n\n4. The experimental settings require more detailed clarification. BAQ, IAQ, and Solar are datasets containing tens of thousands of time steps, while the experiments presented in this paper utilize only a few hundred or fewer. Furthermore, the training and testing set split ratios vary across datasets, such as Weather and Solar. Similar concerns are observed in other datasets as well.\n\n5. Incorporating SOTA baselines, such as CSDI[2], would enhance the credibility of the experimental results.\n[2] Tashiro, Yusuke, et al. \"Csdi: Conditional score-based diffusion models for probabilistic time series imputation.\" Advances in Neural Information Processing Systems 34 (2021): 24804-24816." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the weak points listed above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "S1. Imputing time series data is an important problem.\n\nS2. Various baselines are considered in experiments.\n\nS3. The visual analysis is conducted, to improve the readability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an imputation approach to address the incomplete time series with high missing rates,. By leveraging implicit neural representations (INR) to learn continuous functions, ImputeINR can generate fine-grained imputations even when substantial values are missing. The method includes a multi-scale feature extraction module to enhance the imputation's fine-grained and global consistency. Additionally, ImputeINR uses a specific form of INR continuous function to separately learn trend, seasonal, and residual information, and an adaptive group-based framework to model complex residual information. Experiments on seven datasets show ImputeINR's superior performance in high absent ratios in time series." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1. This paper focuses on imputing time series data with high missing rates, e.g., 70%, 90%. However, it is doubted whether this scenario is important and commonly observed in real applications, since there is no real scenario provided to support such a motivation. Please provide specific examples of real-world scenarios where such high missing rates occur, or to justify why addressing these extreme cases is important even if they are rare.\n\nW2. The core idea is to use the ability of INR to learn continuous functions and achieve interpolation. 
To adapt INR for time series imputation, trend, seasonal, and residual components are considered, which are typical operations for modeling temporal data. It is suggested to further highlight the contribution and novelty of the proposed framework. Please explicitly compare the proposed approach to existing methods that use trend, seasonal, and residual decomposition, and clearly state what specific innovations this method introduces beyond these typical operations.\n\nW3. As claimed in W1, in the experiments, all the datasets are originally complete and only artificial missing values are considered in the evaluation. It is necessary to use real-world incomplete data with large missing rates in the experiments to demonstrate the applicability of the proposed methods as well as the motivating scenario.\n\nW4. It is also suggested to consider an application study using real-world incomplete datasets, to investigate the performance of the proposed techniques in serving real scenarios.\n\nW5. In Table 2, mean and median values are the same across different mask rates. Please explain why the mean and median values are the same across different mask rates, or verify whether this is correct.\n\nW6. In Table 2, the mask rate is represented inconsistently, e.g., 10% or 0.1." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In the Introduction, the authors state that ImputeINR is the first method designed specifically to handle extremely sparse observed data. However, to my knowledge, other recent studies have addressed this issue, including CSDI (Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation) and SSSD (Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models). The authors should include these methods in their experimental comparisons, or add a comparative discussion of these approaches in the literature review and highlight how ImputeINR differs from and improves upon these approaches for extremely sparse data scenarios.\n\n2. Could the authors review recent advancements in INR for time series, limitations of current INR approaches, or how INR addresses missing values in existing work? For instance, to my knowledge, \"Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations\" also provides an INR framework for data imputation, which is not reviewed in this work.\n\n3. It is not so clear to me how the authors generate the (missing) mask. Could the authors clarify how their masks are generated in detail? In Section 4.1, the authors mention that missing values are generated by randomly masking values based on a specified mask rate. Does this imply the missing values are generated according to the missing rate? If so, have the authors also considered block missing scenarios, as proposed in SSSD, where entire segments are missing?\n\n4. 
Could the authors clarify why the last column in Table 2, showing imputation results with the mean/median baseline, are identical?\nIs this an error in reporting or is it a characteristic of the mean/median imputation method. If this is not an error, the authors could explain and discuss this pattern. \n\n5. Additional detail on the computational complexity of ImputeINR would be helpful. Specifically, how does its computational burden compare to that of other baseline methods?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The author introduces an innovative ImputeINR model, addressing the complex problem of time series imputation under high missing ratios. The method shows strong performance on benchmark datasets with diverse characteristics, achieving state-of-the-art imputation accuracy, especially under extreme mask rates.\n2. The paper includes extensive experiments, ablation studies, and robustness analyses to validate the contributions of each component, showing ImputeINR’s consistency across diverse settings.\n3. The paper provides clear explanations and visualizations, particularly for clustering results, aiding the reader's understanding of ImputeINR’s inner workings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach for imputing missing values in time series data, particularly focusing on cases with high proportions of missing values. The proposed method, ImputeINR, employs implicit neural representations (INR) to model time series as continuous functions, allowing for fine-grained interpolation even with sparse observations. ImputeINR incorporates a multi-scale feature extraction module to capture various temporal patterns and a novel adaptive group-based architecture that leverages clustering to group variables with similar distributions. Finally, numerical experiments across seven datasets and five different levels of missing data were conducted to demonstrate the performance of ImputeINR. Overall, the paper looks promising and organized, but I have some questions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please refer to the questions sections." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose ImputeINR, a time series imputation method based on adaptive group-based implicit neural representations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024imputeinr,\ntitle={Impute{INR}: Enhancing Time Series Imputation with Adaptive Group-based Implicit Neural Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xcPN6Or88c},\nnote={under review}\n}" }, "abstract": { "value": "Time series data frequently exhibit the presence of missing values, rendering imputation a crucial process for downstream time series tasks and applications. However, existing imputation methods focus on discrete data points and are unable to effectively model sparse data, resulting in particularly poor performance for imputing substantial missing values. In this paper, we propose a novel approach, ImputeINR, for time series imputation by employing implicit neural representations (INR) to learn continuous functions for time series. 
ImputeINR leverages the merits of INR that the continuous functions are not coupled to sampling frequency and have infinite sampling frequency, allowing ImputeINR to generate fine-grained imputations even on extremely absent observed values. In addition, we introduce a multi-scale feature extraction module in ImputeINR architecture to capture patterns from different time scales, thereby effectively enhancing the fine-grained and global consistency of the imputation. To address the unique challenges of complex temporal patterns and multiple variables in time series, we design a specific form of INR continuous function that contains three additional components to learn trend, seasonal, and residual information separately. Furthermore, we innovatively propose an adaptive group-based framework to model complex residual information, where variables with similar distributions are modeled by the same group of multilayer perception layers to extract necessary correlation features. Since the number of groups and their output variables are determined by variable clustering, ImputeINR has the capacity of adapting to diverse datasets. Extensive experiments conducted on seven datasets with five ratios of missing values demonstrate the superior performance of ImputeINR, especially for high absent ratios in time series." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "time series imputation", "implicit neural representations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/63000132df19e4b22261c740292d0cdf8109bc13.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/2e088e4499233ace74242e93d1329227f9af3261.pdf" }, "title": { "value": "ImputeINR: Enhancing Time Series Imputation with Adaptive Group-based Implicit Neural Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xdGsiYNfje
LLMScan: Causal Scan for LLM Misbehavior Detection
main
Active
Large Language Model;LLM Safety;LLM Misbehavior Detection;Causality Analysis;Model Scan
causal reasoning
3;3;5
4;4;3
1;2;3
2;2;3
1;2;3
3.666667
3.666667
2
2.333333
2
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Pros:\n1. The paper presents a unified approach to detecting multiple types of LLM misbehavior, whereas previous work typically focused on individual issues.\n2. The authors provide detailed visualizations and analysis showing how different patterns in the causal maps correspond to different types of misbehavior, as demonstrated in Figures 2-5 and the extensive experimental results in Section 4.\n3. The method shows impressive performance, particularly for detecting lies, jailbreaks, and toxic content, with AUC scores consistently above 0.95 across different models and datasets. This is particularly notable given that the method works with the first generated token, allowing for early detection of potential misbehavior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces LLMSCAN, a novel method for detecting various types of misbehavior in Large Language Models (LLMs) through causal analysis of the models' internal mechanics. The approach monitors both input tokens and transformer layers to create causal maps that capture how different components contribute to the model's outputs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Cons:\n1. The performance on bias detection is notably weaker than other tasks, with AUC scores ranging from 0.71 to 0.78. While the authors acknowledge this limitation and provide some analysis in Section 4.3, they could have explored more deeply why their approach struggles with this particular type of misbehavior and potential solutions.\n2. While the authors provide a public code repository, the reproducibility section (Appendix C) could be more detailed, particularly regarding the specific hyperparameters used for the detector training and the process for selecting attention heads for token-level analysis.\n3. The evaluated LLMs are relatively limited and small-scale. It is suggested that the authors also evaluate on models with a larger scale such as llama-70B. Otherwise, the effectiveness of the proposed method is relatively limited." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Can more sophisticated causal analysis be applied for LLM detection?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- A proactive solution by identifying and preventing the intent to generate misbehavior of LLM based on internal patterns before a token is generated is a good idea.\n- The idea of analyzing internal patterns is interesting.\n- The idea of constructing a causality distribution map for token and layer is interesting" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method called LLMScan for detecting misbehavior in large language models (LLMs) usage. LLMScan uses causality analysis to identify the LLM's internal activation pattern to indicate potential misbehavior, such as generating untruthful, biased, harmful, or toxic responses. The authors demonstrate the effectiveness of LLMScan through experiments on various LLMs and tasks, achieving high accuracy in detecting different types of misbehavior." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "No baseline.\nThe work claims to provide many baselines, but they are not presented in the relevant experiment result tables.\n\n\nPoor presentation.\n- The terminology provided in this work is not precise, leading to confusion for the reader. E.g., \n - The name of causality seems misleading. The process described in ``Computing the causal effects of tokens'' is a measurement of sensitivity to token replacement by the attention scores. Since the target is an internal attention score, the term causality seems not to be an appropriate description. \n - The term causal map is also weird. The causal map provides a vector value and not a 2-dimensional matrix value. \n- Besides the example of {“Who developed Windows 95?”. The truthful response is “Microsoft Corporation”. The untruthful response is “Bill Gates”} (Line 263) is also very curious since Bill Gates is the CEO of Microsoft Corporation at that time.\n- Other typo/errors: \n - Table 2 duplicate first and second row" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "It is clear that the detector (binary classifier) can effectively identify whether a prompt can lead to misbehavior in LLM. I am curious whether the detector can be extended to precisely point out which misbehavior the LLM has given a malicious prompt." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The observation that the difference in causal maps between truthful and untruthful responses is significantly important for advancing research on the robustness and security of LLMs.\n\n2. The experiments are sufficient, and the results show the effectiveness of the LLMScan." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents LLMScan, a novel approach for detecting various types of misbehavior in large language models (LLMs). LLMScan is composed of two key components: a scanner and a detector. The scanner assesses the influence of each token through attention scores and evaluates the contribution of each layer by examining differences in output logits. These token influences and layer contributions are used to form a causal map and serve as features, which are then processed by a multi-layer perceptron (the detector) to identify misbehavior in the LLM. The paper demonstrates the effectiveness of LLMScan by detecting four types of misbehavior across 13 public datasets using three well-known LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper leverages the ATE (Average Treatment Effect) to evaluate the causal effect of each token and each layer ($ATE=E[Y|do(T=1)]-E[Y|do(T=0)]$).\n\nMy concern is replacing a token with a placeholder \"-\" and calculating the difference in attention scores may not fully align with the rigorous causal inference. Here are some reasons: \n\n- In the context of causal inference, when we conduct the intervention on a token, we should expect the downstream effects on the intervention, leading to the change of other tokens. That said, this paper implicitly makes a strong assumption that all tokens are independent of each other. However, this assumption may not be held in NLP.\n\n- From the perspective of estimating causal effect, the method only measures a difference in attention scores given a single instance, which is actually more like deriving a counterfactual sample if we assume tokens are independent. \n\nNote that I was not questioning the effectiveness of this method. I just don't think the technique described constitutes a rigorous causal analysis. The proposed approach provides useful insights, but labeling it as causal analysis without following the established principles can be misleading." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a novel method for scanning LLM's \"brain\" and detecting LLM misbehavior using causal analysis on input tokens and transformer layers, enabling early detection of lies, harmful and outputs." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024llmscan,\ntitle={{LLMS}can: Causal Scan for {LLM} Misbehavior Detection},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xdGsiYNfje},\nnote={under review}\n}" }, "abstract": { "value": "Despite the success of Large Language Models (LLMs) across various fields, their potential to generate untruthful, biased and harmful responses poses significant risks, particularly in critical applications. This highlights the urgent need for systematic methods to detect and prevent such misbehavior. 
While existing approaches target specific issues such as harmful responses, this work introduces LLMScan, an innovative LLM monitoring technique based on causality analysis, offering a comprehensive solution. LLMScan systematically monitors the inner workings of an LLM through the lens of causal inference, operating on the premise that the LLM's `brain' behaves differently when misbehaving. By analyzing the causal contributions of the LLM's input tokens and transformer layers, LLMScan effectively detects misbehavior. Extensive experiments across various tasks and models reveal clear distinctions in the causal distributions between normal behavior and misbehavior, enabling the development of accurate, lightweight detectors for a variety of misbehavior detection tasks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Model", "LLM Safety", "LLM Misbehavior Detection", "Causality Analysis", "Model Scan" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8937373704c26e897d2dd360872d85f33c449cc1.pdf" }, "presentation": null, "primary_area": { "value": "causal reasoning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "LLMScan: Causal Scan for LLM Misbehavior Detection" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xeP03R58RH
Rethinking Uncertainty Estimation in Natural Language Generation
main
Active
llm;nlg;uncertainty estimation;uncertainty measures;proper scoring rules
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;3;3;5;6
4;3;3;3;3
2;1;2;3;2
1;2;2;3;3
1;1;2;3;3
4
3.2
2
2.2
2
-0.395285
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper introduces a new uncertainty estimation metric based on NLL that avoids the need for multiple sequence generations, which is a common bottleneck in existing methods.\n- By eliminating the need to generate multiple output sequences, the proposed method significantly reduces computational overhead, making it more practical for large-scale applications.\n- The method achieves or surpasses the performance of existing state-of-the-art uncertainty estimation methods across different models and tasks.\n- The approach shows strong performance across various model architectures, sizes, and training stages, demonstrating its broad applicability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach to uncertainty estimation in natural language generation (NLG) models. The authors propose using the negative log-likelihood (NLL) of the generated sequence as a surrogate for uncertainty estimation. By leveraging the theoretical framework of proper scoring rules, they demonstrate that NLL can serve as an effective uncertainty metric. This approach simplifies the estimation process because it only requires the likelihood of the generated sequence under the model, avoiding the need for multiple samples. The theoretical foundation is well-established within the framework of proper scoring rules, and the empirical results demonstrate the method's superiority over existing metrics across various models and tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed metric focuses on statistical uncertainty derived from model probabilities but does not explicitly account for the semantic aspects of generated text. Incorporating semantic uncertainty would provide a more holistic estimation, capturing discrepancies between the generated content and the underlying meaning or intent. While the authors briefly discuss this limitation in the conclusion, it remains a significant issue that warrants deeper exploration, possibly through additional methods or combined metrics.\n- While the experiments are extensive, they focus primarily on free-form question-answering tasks. Additional experiments on other types of NLG tasks (e.g., dialogue generation, story generation) would strengthen the claims.\n- The paper spans 7 pages, whereas the conference allows submissions up to 10 pages. This unused space represents an opportunity to expand on key areas such as additional experiments, detailed analyses, or discussions that could further strengthen the paper's contributions." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Computational Efficiency: The proposed uncertainty measure requires only a single output sequence, significantly reducing computational costs compared to methods that generate multiple sequences, making it highly scalable for real-world applications.\n- Theoretical Soundness: Using MAP as the metric of uncertainty is grounded in established principles of proper scoring rules, ensuring theoretical robustness while simplifying the complexity of uncertainty estimation for natural language generation models​." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a MAP-based approach to estimating uncertainty in large language models (LLMs) to improve the reliability of generated text. Traditional Monte-Carlo uncertainty estimation methods rely on generating multiple output sequences, a process that is computationally intensive and inefficient at scale. This study introduces a streamlined method that estimates uncertainty using only the negative log-likelihood of the most probable output sequence, eliminating the need for multiple sequences. The proposed approach maintains theoretical rigor and outperforms or matches existing methods across a range of tasks and models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Questionable Efficiency: It seems to me that obtaining the MAP sequence (argmax) is non-trivial. While seemingly at the end of the day we only get one sequence, taking efforts to approximate it to be the MAP could be no computationally cheaper than sampling a lot of candidates, which is exactly what the paper is claiming to avoid. It would make the paper more compelling, if the authors can briefly study how well the argmax sequence is approximated, and if the approximation of obtaining argmax is not quite good, what is the worst-case performance of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What’s $\\mathcal{D}$ in Line 115?\n\n2. 
Could you provide more justifications on why NLL may be better than other sampling-based baselines?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed metric alleviates the need for sampling to estimate the uncertainty in natural language generation. \n\n2. The derivation of the different uncertainty terms and defining aleatoric and epistemic uncertainty is helpful.\n\n3. The presented experiments cover a few backbone models and representative tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Traditional uncertainty estimation relies on sampling-based methods, which inevitably incurs additional computation cost. This work addresses this limitation and proposes to measure the uncertainty solely based on the negative log-likelihood of the most likely sequence. Empirical results demonstrate the performance of the proposed metric in distinguishing between correct and incorrect answers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The only metric proposed by this work is the zero-one score, which is one minus the predictive distribution for the most likely output sequence. Therefore, I find this is actually equivalent to propose $p(y=y’|x)$ as the confidence estimation, which has been widely applied in the machine learning community, whereas uncertainty is simply derived by 1-confidence. Consequently, this metric lacks technical novelty.\n\n2. Though the proposed NLL metric seems to be superior to baselines, this work lacks justification and insights on why the NLL is a better metric than the variants using sampling. \n\n3. Verbal explanations have been widely implemented in estimating the confidence level of LLMs. The author includes relevant discussion in Line 249. However, there is no empirical comparison to this type of baseline." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "083: What is \\mathcal{V} here? You need to introduce the vocabulary. \n\n094: “since \\mathcal{Y} scales exponentially with the sequence length T.” Here you defined \\mathcal{Y} as all possible sequences, which is an infinite set so it shouldn’t be growing. If you want to make this claim, you can define \\mathcal{Y}_t as the subset of all sequences with length <t. 
\n\n098-100: “We consider uncertainty for a given LMs, … a valid assumption” I am a bit confused: what is the assumption here?\n\n111: Here if you are sampling y’ from p(\cdot |x, \cdot) it is better to write it explicitly “y’ \sim …” It can be a bit confusing here, and I am not sure how the discussion on “Proper Scoring Rules for Uncertainty Measures.” advances your main claims - if this is only about evaluation, you can defer this to later sections.\n\n127: Again, I am not sure how “Aleatoric and Epistemic Uncertainty” relates to your proposed method. The purpose of “related works” or “preliminaries” is to make people ground your work to existing literature; if you believe that your proposed method is connected with this literature, make it more explicit.\n\n898: “The reference answer sampled using beam search with a size of 20 is considered for assessing overall correctness, as it represents the most likely answer generated by the language model” - why is beam search of 20 = most probable answer? Do you have any guarantees that a beam size of x makes the generated sequence log-prob close (difference bounded) to the most likely sequence?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper studies an important topic, which is crucial for many applications (e.g. improving trustworthiness of LMs).\n\n2. The experiment results are good, which is surprising given the simplicity of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to quantify the uncertainty of a language model for a specific prompt using the log-likelihood of the most probable sequence. Empirical results show that this new measure is effective in quantifying the uncertainty of the model without having to generate multiple times." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The idea to use a single generation to “approximate the most likely output sequence (line 223)” is concerning - the motivation is to avoid generating multiple sentences, and yet in order for beam search to find the most probable sentence (even in a toy setting), it requires multiple samples (Appendix A, Figure 2). Practically, I don’t know how close a greedy-sampled / top-k-sampled sequence is to the most likely sequence, even of the same length. \n\n2. The contribution (using log-likelihood of one generation) is somewhat limited to empirical findings without any theoretical guarantees that one generation is able to find a sequence that is close to the most probable sequence. \n\n3. The paper is not very well written, making it difficult to understand what the authors want to convey. See the questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Are eq (1-6) developed by you or prior work? \n2. what is the exact step to calculate Eq(8).\n3. Do you have real groundtruth measurement for generation correctness?\n4. How many generations do you need to estimate uncertainty for one sequence?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The uncertainty quantification for LLM is an important topic. \n2. The derivation of aleatoric and epistemic entropy is valid. \n3. The introduction of zero-one score is valid." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the uncertainty quantification for LLM. The paper first defines uncertainty as the expected score (which needs to be designed) of LLM prediction with respect to all possible parameters fitting the data. It then defines the scoring function as zero-one indicator of whether the generated sequence reaches maximal likelihood. The paper claims the estimation of the final uncertainty quantity only requires max-decoding (such as beam search). The paper evaluates the proposed method against prior baselines on 3 tasks for 6 LLMs. The paper used AUROC to measure accuracy and claims the proposed method achieves the best." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing of the paper is very blurred. It is not clear which part is from prior paper, which part is the original contribution in this paper. Eq(7-8) are proposed in this paper. But Eq(1-6) are unclear. \n2. The definition of uncertainty in Eq (1) seems to suggest there is a groundtruth y generation. The definition in Eq(2) is questionable. It is unclear what is the posterior distribution of parameter w. Does it need to have a distribution of parameter? What if the parameter is fixed. \n3. The actual estimation algorithm is not described. In particular, Eq(8) needs to estimate an expectation term. It is unclear how to estimate this part. There is no description in the paper. \n4. The evaluation approach and the metric used are quite questionable. It is unclear why this particular F1 threshold-based correctness is used. But the details of estimation such correctness is also not described well. The use of LLaMA 70B as the evaluator is also quite questionable." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a theoretically grounded uncertainty measure for LLMs that significantly reduces computational costs while maintaining state-of-the-art performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024rethinking,\ntitle={Rethinking Uncertainty Estimation in Natural Language Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xeP03R58RH},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) are increasingly employed in real-world applications, driving a need to determine when their generated text can be trusted or should be questioned. 
To assess the trustworthiness of the generated text, reliable uncertainty estimation is essential. Current LLMs generate text through a stochastic process that can lead to different output sequences for the same prompt. Consequently, leading uncertainty measures require generating multiple output sequences to estimate the LLM’s uncertainty. However, generating additional output sequences is computationally expensive, making these uncertainty estimates impractical at scale. In this work, we challenge the theoretical foundations of the leading measures and derive an alternative measure that eliminates the need for generating multiple output sequences. Our new measure is based solely on the negative log-likelihood of the most likely output sequence. This vastly simplifies uncertainty estimation while maintaining theoretical rigor. Empirical results demonstrate that our new measure achieves state-of-the-art performance across various models and tasks. Our work lays the foundation for reliable and efficient uncertainty estimation in LLMs, challenging the necessity of the more complicated methods currently leading the field." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "llm", "nlg", "uncertainty estimation", "uncertainty measures", "proper scoring rules" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/950207a67d554f17fc2ad806037fcbb4f76f3f8e.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Rethinking Uncertainty Estimation in Natural Language Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
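The record above describes an uncertainty measure based on the negative log-likelihood of the most likely output sequence, and its reviews discuss a related zero-one-score variant. The sketch below is only an illustrative toy computation (not the submission's implementation): it assumes access to the per-token log-probabilities of a single greedy- or beam-decoded sequence, and the numeric values are invented.

```python
import math

def sequence_uncertainty(token_logprobs):
    """Return (nll, zero_one) uncertainty scores from one decoded sequence.

    token_logprobs: per-token log-probabilities of the (approximately)
    most likely output sequence y* under the model, so that
    sum(token_logprobs) = log p(y* | x).
    """
    log_p = sum(token_logprobs)          # log p(y* | x)
    nll = -log_p                         # negative log-likelihood of y*
    zero_one = 1.0 - math.exp(log_p)     # one minus the sequence probability
    return nll, zero_one

# Toy example with hypothetical per-token log-probs from a single greedy decode.
logprobs = [-0.05, -0.20, -0.10, -0.35]
nll, zero_one = sequence_uncertainty(logprobs)
print(f"NLL uncertainty: {nll:.3f}  zero-one uncertainty: {zero_one:.3f}")
```

Both quantities are monotone in each other for a single sequence, which is why only one decoded output is needed; the baselines criticized in the reviews instead average scores over many sampled generations.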
xfw92pDy2u
Distilled Diffusion Language Models
main
Active
diffusion language models;discrete diffusion;distillation
generative models
3;3;3;5
4;4;4;2
3;2;3;2
2;2;2;2
2;2;2;2
3.5
3.5
2.5
2
2
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses. Will consider adjusting my initial rating upon authors' responses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a new way to distill AR language models into diffusion language models to bridge their performance gap, where the TCS distillation objective effectively connects different types of models. The paper's evaluation on a range of tasks demonstrates its improved performance on complex tasks like in-filling and arithmetic. Moreover, the potential for faster parallel generation is also a advantage over autoregressive counterparts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript introduce Distilled Diffusion Language Models (DDLM), a framework to distill pre-trained autoregressive (AR) language models into denoising diffusion language models. \nA key contribution is the Target Concrete Score (TCS) distillation objective, aiming to bridge the gap between AR and diffusion models. Specially, top-K and gradient-informed estimation are proposed to efficiently estimate the TCS. \nDDLM is evaluated for both discrete and continuous Diffusion LMs, on several language modeling and reasoning tasks, showing its effectiveness in improved performance of Diffusion LMs with faster parallel generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think one of the major weaknesses lies in evaluation on more and widely-used standard LLM benchmark. \nThe authors only evaluate the models against PPL where PPL/likelihood of Diffusion LMs are often not exact and cannot serve as a standalone indicator for real language capabilities. Therefore, the paper should provide more detailed comparisons of DDLM with existing AR-LMs (e.g, LLAMA-3 8B) on downstream language tasks beyond GSM8K-Aug, such as BBH, MMLU, multilingual tasks like translation, etc. Plus, case study of samples generated by DDLM is needed to assess the behaviors of the model, especially for reasoning tasks.\nALL of these are important to convincingly demonstrate the proposed framework's ability to generalize across a wider range of language tasks and datasets. \n\nMoreover, despite the great promise of self-correction and bidirectional context in Diffusion LMs, AR-LMs can achieve similar results through reflection or complicated chain-of-thought reasoning, as demonstrated by O1. Additionally, open-ended reasoning is particularly challenging for Diffusion LMs because they require pre-specified sequence lengths. Faster parallel generation is good, but AR-LLMs enjoy many MLSYS optimizations thanks to exactly their autoregressive/recursive decomposition especially at inference time. 
\nAt the end of the day, what is the real potential of developing Diffusion LMs for natural language tasks, as an alternative to AR-LLMs? And to reach this goal, what major challenges need to be addressed?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Knowledge distillation is a potential direction to enhance diffusion models. \n- The results are good. \n- This paper is well written." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores the possibility of distilling a pre-trained autoregressive (AR) language model (teacher) into a non-autoregressive diffusion (non-AR) language model (student), combining the best of both worlds. The authors propose TCS distillation, a theoretically grounded framework that bridges autoregressive and diffusion paradigms, which is broadly applicable to both discrete and continuous diffusion models, with any pre-trained autoregressive teacher model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am not sure whether several numbers in Table 1 are missing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How does the proposed TCS distillation method compare to other state-of-the-art distillation techniques, especially in terms of efficiency and performance?\n2. Could the authors provide more detailed experiments that systematically vary data size and model complexity to demonstrate the scalability of the proposed method?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method shows the potential to improve learning efficiency and perplexity, which is demonstrated through experiments on language modeling tasks." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel framework for distilling knowledge from pre-trained autoregressive (AR) language models into non-autoregressive diffusion models. The core contribution is the Target Concrete Score (TCS) distillation, a method designed to bridge the gap between autoregressive and diffusion paradigms. It can apply to both discrete and continuous diffusion models and leverages any pre-trained autoregressive teacher model. Experiments on language modeling tasks show improvements in pre-trained diffusion language models and the ability to train new models from scratch." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **The method's introduction lacks systematic clarity**: the paper does not provide a comprehensive introduction to the process of distilling from autoregressive (AR) models to diffusion models. It is challenging to discern the specific difficulties involved in this distillation process and how the current work addresses these challenges. It would benefit from a more structured explanation that highlights the novel contributions and breakthroughs of the proposed method.\n\n2. **The experimental comparisons are insufficient**: the experimental section lacks a thorough comparison with existing diffusion models, particularly in terms of perplexity (PPL). While the paper presents baseline comparisons, it fails to include benchmarks against state-of-the-art (SOTA) diffusion models, which is crucial for validating the effectiveness of the proposed method. For instance, Table 3 follows the experimental setup of Ye et al., 2024, but does not include a comparative analysis of their results, limiting the ability to assess the method's performance.\n\n3. **The validation of the method does not scale up in terms of model size and capability**: the paper does not sufficiently demonstrate the method's scalability, particularly in terms of distilling larger AR models. The ability to effectively distill knowledge from more complex AR models is crucial for validating the motivation behind transferring knowledge to diffusion models. However, the manuscript lacks discussions on whether the proposed method can scale up to handle larger models, which is a key aspect of assessing the practical viability of the approach." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Do diffusion language models learned from scratch and learned with TCS distillation show similar patterns in intermediate generation steps?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
The authors propose distillation from autoregressive models as an effective way to enhance the performance of diffusion language models.\n2. The proposed method is theoretically grounded.\n3. Empirical results show the effectiveness of the method in terms of perplexity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Target Concrete Score (TCS) to distill autoregressive language models into diffusion language models in order to enhance the latter. The TCS method is applicable to a wide range of diffusion language models, both continuous and discrete ones. Comprehensive experiments support the effectiveness of TCS." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors emphasize the error-correction ability of diffusion language models but do not show evidence to support it. Additionally, autoregressive models also have the potential to correct previous errors with chains of thought.\n2. Although this paper narrows the performance gap between autoregressive and diffusion language models, diffusion language models still underperform autoregressive models in most tasks without unique advantages. \n3. Insufficient experiments to study how the scales of teachers and students affect learning efficiency. It remains unclear whether the proposed methods help scale a diffusion language model." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Distilling a pre-trained autoregressive language model into a diffusion-based language model with the proposed Target Concrete Score objective." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024distilled,\ntitle={Distilled Diffusion Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xfw92pDy2u},\nnote={under review}\n}" }, "abstract": { "value": "Transformer-based Large Language Models (LLMs) have demonstrated remarkable capabilities, yet their autoregressive nature forces sequential token-by-token decoding, leading to inefficiencies during inference. Furthermore, autoregressive language models lack inherent self-correction abilities, which hinders their capacity to refine and improve generated content without relying on external prompting or retraining techniques. In contrast, diffusion-based models offer the advantage of fast parallel generation through iterative refinement, while leveraging bi-directional attention to utilize full context at once. However, diffusion models are unable to match their autoregressive counterparts. This motivates us to explore the possibility of distilling a pre-trained autoregressive (AR) language model (teacher) into a non-autoregressive diffusion (non-AR) language model (student), combining the best of both worlds. In this work, we present Target Concrete Score (TCS) distillation, a theoretically grounded framework that bridges autoregressive and diffusion paradigms. TCS distillation is broadly applicable to both discrete and continuous diffusion models, with any pre-trained autoregressive teacher model. We propose techniques to make TCS distillation scalable and efficient for transformer-based models, and show how it can both improve pre-trained diffusion language models and also train new models from scratch. 
Through comprehensive experiments on language modeling tasks, we demonstrate the effectiveness of our proposed methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "diffusion language models", "discrete diffusion", "distillation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f8c8ace4bf4fef1915735d20ce50bd5e071ad7a1.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Distilled Diffusion Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
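The record above concerns distilling an autoregressive teacher into a diffusion-style student. The sketch below is only a generic toy illustration of the distillation pattern the reviews discuss (matching a student's per-position token distribution to a teacher-derived target); it is not the paper's Target Concrete Score objective, and all distributions are made up.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two categorical distributions over the same vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical 4-token vocabulary and one masked position in the student's input.
teacher_target = [0.70, 0.20, 0.05, 0.05]  # target distribution derived from the AR teacher (assumed given)
student_pred = [0.50, 0.30, 0.10, 0.10]    # diffusion student's predicted distribution at the masked position

loss = kl_divergence(teacher_target, student_pred)  # per-position distillation loss to minimize
print(f"per-position distillation loss (KL): {loss:.4f}")
```

In an actual training loop this per-position term would be summed over masked positions and averaged over noised sequences; how the teacher-derived target is constructed is exactly what the TCS objective and its top-K / gradient-informed estimators address.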
xgQfWbV6Ey
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
main
Active
generative model;retrieval augmented generation
generative models
3;5;6;6
4;4;4;4
3;3;3;3
2;2;3;3
3;2;3;3
5
4
3
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- For the comparison of the average number of tokens in the generated rationale and the retrieved documents in Figure ```2```: What is the number of retrieved documents? Is it before or after subsetting to drafters?\n\n- Is the Multi-Perspective Sampling stage also considered in the latency analysis in Section ```4.5```?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The conceptual extension of the speculative farmwork proposed by the author is both novel and appealing, and supported by strong empirical results.\n\n- The paper targets a timely challenge in RAG and offers promising techniques to improve the efficiency for the system.\n\n- The experiments are comprehensive and thorough, in particular, the extensive ablation study and analysis provide great insights for a better understanding of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposed a retrieval-augmented generation framework termed SpeculativeRAG, leveraging high-level concepts analogical to speculative decoding.\nThe framework exhibits better performance with lower latency via lunching a set of smaller model instances in parallel, each processes a subset of retrieved documents, to produce answer drafts and rationales.\nThe answer drafts and rationales are subsequently verified by a larger, strong base LLM to select the final answer." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The instruction-tuning of the small drafter LMs requires synthesis rationales generated by Gemini Ultra from an additional 40k instances. This presents an unfair comparison for baselines methods that do not have access to these external resources in the experiments.\n\n- It is unclear whether multiple instances of large verifier LLMs are also lunched in parallel. According to Line ```11~14``` in Algorithm ```1```, this seems to be the case. If so, the large memory overhead might significantly offset the latency gain in practical scenarios.\n\n- A relevant work [1] (first released ~4.5 month ago) is missing in the baseline and not mentioned in the related work Section. To the best of my understanding, [1] also proposes a very similar synthesis rationale-augmented generation approach for RAG, thus, might be a suitable method for baseline comparison.\n\n- The contribution and result of the paper could be further strengthened if newer generation of models (e.g., gemma-2, llama-3) are adopted in the experiments.\n\n[1] Wei et al, *InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales*. 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Line 207-209, is the answer A obtained by human labeling or generated by LLMs given documents D? What if the D does not provide evidence for generating A?\n2. Except for the inference latency, what are the consuming token numbers of different methods?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy to follow. The authors present detailed descriptions of their methods, including prompts and experimental setups.\n2. The proposed method achieves obvious improvements among the baselines, and the ablation studies can verify the effectiveness of their method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new speculative decoding method to enhance the retrieval augmented generation. Specifically, a smaller distilled specialist LM is used to generate answer drafts based on randomly selected document subsets. After that, a larger generalist LM is used to verify the answer confidences according to the answer and rationale generation probabilities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The conducted experiments should include more recent RAG baselines, especially the speculative decoding methods (e.g., [1], [2]). \n2. It is a trivial idea and makes limited contributions compared to previous studies. As mentioned in this paper, many recent studies investigate the two-stage RAG, where a drafting stage produces answer candidates and the assembling stage generates final answers based on the drafts. There are only a few differences between them.\n\n\n[1] Ground every sentence: Improving retrieval-augmented llms with interleaved reference claim generation.\n\n[2] RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Since this is a paper target for the RAG framework, testing the proposed framework on some RAG benchmarks may be more reliable. Because the QA benchmark is mainly designed for reasoning, not for the RAG framework." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper trained an LLM (Drafter) to generate multiple answers paired with rationale (reference + explains). Then, they use another LLM (Verifier)to confirm the answers, which enhances the robustness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Speculative RAG – a framework that leverages a larger generalist LM to efficiently verify multiple RAG drafts produced in parallel by a smaller, distilled specialist LM. Each draft is generated from a distinct subset of retrieved documents, offering diverse perspectives on the evidence while reducing input token counts per draft." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main contribution of this paper is insufficient. Because the whole framework just involves clustering the documents and fine-tuning small LLM to generate answers with rationale. Then prompt Verifier model confirms the answer based on rationale." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. While the author has shown that the proposed method is effective, it is important to note that the generalist LM has not undergone fine-tuning. RAG serves to supply additional information to the LLM to mitigate knowledge gaps. However, if the generalist LM itself lacks relevant knowledge, can it reliably evaluate the rationale and drafts generated by the specialist LM, particularly when the specialist’s output is inaccurate or includes hallucinations? To better understand this limitation, could the author provide boundary cases or conduct an error analysis?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The designed specialist LM is lightweight and allows to analyze multiple documents and produce RAG drafts in parallel, which effectively reduces the computation cost and increases the inference efficiency.\n2. The distinction between specialist and generalist LMs clarifies their functional roles, reducing the risk of generalization degradation that may result from supervised fine-tuning (SFT) of a generalist LM. Specialists, optimized through targeted fine-tuning, focus on improving draft generation capabilities, thereby enhancing result accuracy.\n3. Applying a clustering algorithm, i.e. K-means, to pre-group retrieved documents and then uniformly sampling from each group, i.e., multi-perspective sampling, can mitigate analysis inaccuracies caused by incomplete information, enhancing robustness in practical applications." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper leverages the idea of speculative decoding to design a speculative RAG framework, which offloads computational burden to a smaller, specialist LM that serves as the RAG drafter for generalist LMs. Extensive experiments and ablation studies demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The quality of the SFT data constructed by the author remains unclear due to insufficient analysis. Additionally, further ablation studies are needed to evaluate the drafter’s performance across varying data volumes. Acquiring substantial SFT data through a proprietary model is resource-intensive. Therefore, it is essential to investigate whether a smaller dataset could yield satisfactory results or if a large volume of SFT data is indeed required to achieve optimal outcomes.\n2. P(Yes) or SR is commonly applied in hallucination detection. However, [1] highlights that LLMs without target SFT or aligned, while LLMs may differentiate between correct and incorrect results, the probability itself, i.e., P(Yes), is either uncalibrated or demonstrates poor calibration performance. Although Table 3 shows that incorporating SR enhances overall performance, there is insufficient evidence to validate the intrinsic effectiveness of P(Yes). This limitation weakens the robustness of the proposed method. Could the author also add an ablation study to independently assess the effectiveness of P(Yes)?\n3. It seems that results for key baseline models, such as CRAG and Self-CRAG, are missing for the three free-form datasets. This omission is concerning, as it is standard research practice to replicate baseline results when they are not available from prior studies, especially when the authors are comparing their approach with only a few previous works. Could the authors also include the performance metrics of CRAG and Self-CRAG on the free-form datasets for a more comprehensive comparison?\n4. According to the results in Table 1, Mixtral-Instruct-8x7B has already achieved high scores on TriviaQA and ARC-C (73.91% and 78.41%), and the improvement brought by Speculative RAG is limited (74.24% and 80.55%). These results may diminish the contribution of the proposed method, as the performance improvement brought by instruction tuning is more pronounced and stable (Mistral-7B vs. Mistral-Instruct-7B and Mixtral-8x7B vs. Mixtral-Instruct-8x7B). Could the authors provide more explanation and analysis to clarify this result?\n5. Although the datasets used in this paper include a variety of question-answering formats, such as free-form (short-form and multi-hop) and closed-set, they are all based on wiki-style or specific domains. To more convincingly demonstrate the method's effectiveness, the authors should further extend their evaluation to more realistic and domain-diverse RAG datasets, like FreshQA and BRIGHT. Could the author also discuss the potential challenges in applying their method to these datasets and propose specific experiments to show the performance?\n\n[1] Kadavath et al., Language Models (Mostly) Know What They Know, 2022." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024speculative,\ntitle={Speculative {RAG}: Enhancing Retrieval Augmented Generation through Drafting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xgQfWbV6Ey},\nnote={under review}\n}" }, "abstract": { "value": "Retrieval augmented generation (RAG) combines the generative abilities of large language models (LLMs) with external knowledge sources to provide more accurate and up-to-date responses. Recent RAG advancements focus on improving retrieval outcomes through iterative LLM refinement or self-critique capabilities acquired through additional instruction tuning of LLMs. In this work, we introduce Speculative RAG - a framework that leverages a larger generalist LM to efficiently verify multiple RAG drafts produced in parallel by a smaller, distilled specialist LM. Each draft is generated from a distinct subset of retrieved documents, offering diverse perspectives on the evidence while reducing input token counts per draft. This approach enhances comprehension of each subset and mitigates potential position bias over long context. Our method accelerates RAG by delegating drafting to the smaller specialist LM, with the larger generalist LM performing a single verification pass over the drafts. Extensive experiments demonstrate that Speculative RAG achieves state-of-the-art performance with reduced latency on TriviaQA, MuSiQue, PopQA, PubHealth, and ARC-Challenge benchmarks. It notably enhances accuracy by up to 12.97% while reducing latency by 50.83% compared to conventional RAG systems on PubHealth." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generative model", "retrieval augmented generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5404b2c6d52a09993b93dd65a3797328da13ba75.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xgtXkyqw1f
MindSearch: Mimicking Human Minds Elicits Deep AI Searcher
main
Active
language model;search engine;multi-agent system
applications to computer vision, audio, language, and other modalities
5;6;6;6
4;4;3;4
2;3;3;3
2;3;4;3
3;3;3;3
5.75
3.75
2.75
3
3
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1)\tThe discussion in line 315 is a bit confusing. The authors say “MindSearch does not yield better performance in terms of facticity, but as per Figure 4 in the paper, factuality of MindSearch is preferred 70% of the time.\n\n2)\tPlease consider showing the example in Figure 5 in English." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1)\tThe paper demonstrates considerably better output responses for MindSearch, compared to proprietary AI-Search engines like Perplexity Pro and ChatGPT-Web.\n\n2)\tMindSearch also works considerably better than the closed-book and ReACT baselines on a variety of multi-hop question-answering datasets. \n\n3)\tExtensive analysis and evaluation provided in terms of the prompting strategy for WebPlanner along with using a graph-based methodology vs JSON-based and code-based." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes MindSearch, an LLM-based multi-agent information-seeking framework for complex multi-step information-seeking questions. MindSearch includes a Web Planner which decomposes the user query into atomic sub-questions as nodes in a dynamic graph and progressively extends the graph based on results from the WebSearcher. MindSearch considerably improves in response quality in terms of depth and breadth and also improves over the baseline react-based iterative search system." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1)\tWhile the paper only evaluates for final response quality, it does not consider the attribution quality of the generated response. Popular AI search engines like Perplexity.AI and ChatGPT-web also provide citations as part of the generated output. The authors do not discuss whether MindSearch provides any kind of attribution, and if yes, what does the citation quality look like (based on automatic evaluations like ALCE [1])\n\n2)\tNo analysis was provided with regard to the dynamic graph constructed by the WebPlanner. Does the number of hops in the question match the depth of the tree? How often is an incomplete graph created? Also, it would be interesting to see a cost analysis in terms of the number of search queries that MindSearch generates, in comparison to the baselines (ReACT specifically)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What happens when Webplanner and code style interaction is not employed ? Is query decomposition required for all queries in web-searcher ? There is also a lack of qualitative analysis of failure scenarios. What happens when response at one node of the chain is wrong ? Does it result in cascading failures. Is there ayn mechanism for the Webplanner to detect such mistakes with feedback from websearcher ? \n\n2. Was there any qualitative evaluation on the benchmark where several human subjects were involved in performing the task with corresponding measurement of time taken ? to compare to mindsearch ?\n\n\n3. how do you respond to the first point in the weakness 1." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The problem is both interesting and important. Multi-agent systems for complex QA tasks that are robust and effective \n\n2. Easy to follow and the methods are simple and well explained.\n\n3. Experiments that include inference cost analysis is well considered." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents MindSearch, a multi-agent system for complex tasks that uses large language models (LLMs) and search engines for complex web information-seeking tasks. MindSearch addresses complex queries by decomposing them and retrieving information hierarchically, modeling the process as an iterative graph construction to enhance precision and recall. By distributing tasks across specialized agents, the framework manages complex and extended contexts effectively. The authors show that experimental results using GPT-4o and InternLM2.5-7B MindSearch outperforms benchmarks like ChatGPT-Web and Perplexity.ai, with human evaluators preferring its responses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The work fails to cite and compare to other relevant baselines. For complex QA tasks like HotpotQA or MusiqueQA self-ask[1] with search is a relevant baseline. Similarly Searchain[2] is particularly relevant as it also forms a global reasoning chain or graph where the query is decomposed into subquestions that comprise the nodes of the chain and this planning is similar in philosophy to Mindsearch. I think Assistantbench[3] released in July 2024 is also very relevant and useful to evaluate on. The method SeeplanAct proposed in the paper would serve as a strong baseline. SeeAct[4] is also a relevant baseline. While the authors have cited the same they have not compared to this approach. Other RAG baselines in AssistantBench are also relevant.\n\n2) Some claims are unsupported. For instance the claim made in abstract and section 2.3 regarding the utility of Mindsearch : “Mindsearch performs in 3 minutes tasks worth 3 hours of human effort” has no related evidence cited in the paper. Was there any qualitative evaluation on the benchmark where several human subjects were involved in performing the task with corresponding measurement of time taken ? to compare to mindsearch ?.\n\n3) The work also misses on some important ablations. 
What happens when WebPlanner and the code-style interaction are not employed? Is query decomposition required for all queries in WebSearcher? There is also a lack of qualitative analysis of failure scenarios. What happens when the response at one node of the chain is wrong? Does it result in cascading failures? Is there any mechanism for the WebPlanner to detect such mistakes with feedback from WebSearcher? The current approach is a simple tool-use-based approach that has been well explored in existing WebAgent-based works. The additional analysis and error handling mentioned above may help strengthen and clarify the core contributions of MindSearch.\n\n[1] Measuring and Narrowing the Compositionality Gap in Language Models, Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis\n\n[2] Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks, Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, Tat-Seng Chua\n\n[3] AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?, Ori Yoran, Samuel Joseph Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, Jonathan Berant\n\n[4] GPT-4V(ision) is a Generalist Web Agent, if Grounded, Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, Yu Su" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "S1: The writing and framework of this paper are clear and easy to follow.\n\nS2: The method is novel, utilizing the agents WebPlanner and WebSearcher to perform web search tasks.\n\nS3: Extensive experiments are conducted, demonstrating both the effectiveness and efficiency of this approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel tool agent, MindSearch, which decomposes user queries into atomic sub-questions represented as graph nodes and progressively extends the graph based on the search results from WebSearcher. For each sub-question, WebSearcher performs hierarchical information retrieval using search engines to gather relevant information for WebPlanner. Extensive experiments are conducted, including both open-set and closed-set datasets, and using open-source models alongside closed-source LLMs, demonstrating its effectiveness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: In Figure 5, the words should also be accompanied by English translations.\n\nW2: For WebSearcher, how does the LLM select the most valuable pages from all the retrieved web content? More details should be provided. 
Additionally, regarding answer generation, the statement, \"After reading these results, the LLM generates a response to answer the original question based on the search results,\" requires further elaboration, such as information on input design or specific prompt construction.\n\nW3: For the open-set evaluation, five experts are chosen. The author should provide more details, including whether these experts had prior exposure to the answers generated by MindSearch. Furthermore, examples should be included to intuitively demonstrate the differences between the responses generated by MindSearch, ChatGPT-Web, and Perplexity.ai.\n\nW4: The author could provide information on token consumption to help the community manage the budget when using MindSearch in their projects." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- When constructing the DAG, how does MindSearch automatically create graph nodes? What are some tips for structuring question as graph nodes?\n- When large amounts of content are retrieved, how does WebSearcher reduce noise? And, as the rapid growth of web content can easily exceed the maximum context length of the LLM, how does WebSearcher effectively limit content length?\n- Additionally, could the experiment include more closed-source and open-source LLMs to further validate the effectiveness of the method?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents a clear and logical approach to the problem, with a well-organized visual format that is easy to understand and read. \n- This method provides a novel question-answering retrieval method based on directed acyclic graphs, which makes the RAG more reasonable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a system called MindSearch, designed to emulate human cognitive processes to enhance web information retrieval and integration tasks. By combining large language models (LLMs) with search engines, the system addresses limitations in handling complex queries, fragmented information, and lengthy content through an LLM-based multi-agent framework.\n* WebPlanner: Simulates the cognitive process of multi-step information seeking by breaking down user queries into atomic subproblems, represented as nodes in a graph. The graph is then progressively expanded based on search results from WebSearcher.\n* WebSearcher: Conducts hierarchical information retrieval for each subproblem, using a search engine to gather valuable information for WebPlanner.\n\nThis multi-agent design enables MindSearch to search and integrate information from vast web sources within three minutes, equivalent to saving three hours of manual effort. 
MindSearch demonstrates significant response quality improvements in both closed-set and open-set QA tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method section does not provide enough technical detail. For instance, the design and use of the DAG are not clear.\n- Few baseline methods from the same category are included, and many RAG-based question-answering approaches are left unexamined, such as ChatKBQA and AutoReAct. \n- The backbone was only tested on GPT-4 (closed-source) and InternLM2.5 (open-source). Under this setting, it is hard to tell whether MindSearch will work for all (or at least most) LLMs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024mindsearch,\ntitle={MindSearch: Mimicking Human Minds Elicits Deep {AI} Searcher},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xgtXkyqw1f},\nnote={under review}\n}" }, "abstract": { "value": "Information seeking and integration is a complex cognitive task that consumes enormous time and effort. Inspired by the remarkable progress of Large Language Models, recent works attempt to solve this task by combining LLMs and search engines. However, these methods still obtain unsatisfying performance due to three challenges: (1) complex requests often cannot be accurately and completely retrieved by the search engine once, (2) the corresponding information to be integrated is spread over multiple web pages along with massive noise, and (3) a large number of web pages with long contents may quickly exceed the maximum context length of LLMs. Inspired by the cognitive process when humans solve these problems, we introduce MindSearch to mimic the human mind in web information seeking and integration, which can be instantiated by a simple yet effective LLM-based multi-agent framework. The WebPlanner models the human mind of multi-step information seeking as a dynamic graph construction process: it decomposes the user query into atomic sub-questions as nodes in the graph and progressively extends the graph based on the search result from WebSearcher. Tasked with each sub-question, WebSearcher performs hierarchical information retrieval with search engines and collects valuable information for WebPlanner. The multi-agent design of MindSearch enables the whole framework to seek and integrate information in parallel from larger-scale (e.g., more than 300) web pages in 3 minutes, which is worth 3 hours of human effort. MindSearch demonstrates significant improvement in the response quality in terms of depth and breadth, on both closed-set and open-set QA problems. Besides, responses from MindSearch based on InternLM2.5-7B are preferred by humans over the ChatGPT-Web and Perplexity.ai applications, which implies that MindSearch can already deliver a competitive solution to the proprietary AI search engine." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "language model", "search engine", "multi-agent system" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c119427897b9714f0418071eb00cad13c8849ee5.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MindSearch: Mimicking Human Minds Elicits Deep AI Searcher" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xhtqgW5b93
ToMA: Token Merging with Attention For Diffusion Models
main
Active
Diffusion;Token Merge;Attention
generative models
3;3;5;6;6
5;4;4;2;4
2;2;3;4;3
1;2;2;3;3
2;3;2;2;3
4.6
3.8
2.8
2.2
2.4
-0.662122
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Research integrity issues (e.g., plagiarism, dual submission)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- There are numerous existing token merging approaches that extend beyond their application in diffusion models and generative tasks. The proposed method appears to function as a plug-and-play token merging technique. How does it perform when integrated with baseline models and discriminative tasks? Are the improvements consistently observed across these models and tasks?\n\n- Could the authors provide more detailed information on the implementation of the tile-shaped regions?\n\n- The submodular-based destination selection appears analogous to Farthest Point Sampling (FPS). To my understanding, in most 3D applications, the FPS algorithm is implemented with CUDA to achieve acceptable speed. This step seems to contribute significantly to the computational overhead of the proposed method. Could the authors clarify the distinctions between the submodular approach and FPS, particularly in terms of efficiency?\n\n- In the original ToMeSD paper (applied in SD 1.5), the results indicate a reduction in inference time (s/img). However, in Table 1, even at higher compression rates (0.5 and 0.75), this reduction is not evident. Could the authors provide an explanation for this discrepancy?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- This paper is well-crafted and effectively articulates both the proposed methodology and the corresponding experimental outcomes.\n- The implementation of the approach is methodical and straightforward, which supports practical applicability.\n- The comprehensive implementation details, supplemented by the provided code, significantly bolster the reproducibility of the research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents three significant advancements aimed at enhancing the token merging mechanism in diffusion models for generative tasks: identifying representative merge destinations, optimizing the merging and unmerging processes, and reducing computational complexity. Specifically, the proposed method employs a greedy-based algorithm to determine a representative subset that serves as merge destinations. This is followed by an additional cross-attention operation and matrix multiplication to effectively execute the merging process. During the unmerging phase, the authors leverage the inverse (or transpose) matrix from the merging step, thereby improving the overall efficiency of the unmerging procedure. Moreover, the authors introduce strategies to merge only tokens located within the same local region and to share destination and merge matrices across iterations and layers, further mitigating computational costs. 
When compared to an existing approach (i.e., ToMeSD), the proposed method achieves notable improvements in text-to-image generation tasks across two datasets (GEMRec and ImageNet-1k) evaluated using three metrics (CLIP, DINO, and FID), highlighting its efficacy and substantial contribution to the field." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The title and scope of the paper may lead to potential misunderstandings. While diffusion models have applications beyond generative tasks, the experiments in this work are solely focused on generation. It would be advisable to revise the title to more accurately reflect the scope of the contributions.\n- The experimental evaluation is restricted to text-to-image tasks, which limits the generalizability and perceived practical impact of the proposed approach.\n- The discussion and comparative analysis do not sufficiently engage with related work on token merging, such as CrossGET [1] and TRIPS [2], which diminishes the thoroughness of the literature review.\n- The comparative evaluation is limited to ToMeSD, and there are notable inconsistencies when compared to the results reported in the original paper.\n\n\n\n[1] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers, ICML 2024\n\n[2] TRIPS: Efficient Vision-and-Language Pre-training with Text-Relevant Image Patch Selection, EMNLP 2022" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses part." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.\tThe method uses a submodular function to identify a representative subset of tokens for merging and applies a GPU-efficient vectorized optimization algorithm.\n2.\tThe design of ToMA carefully considers the advantages and limitations of GPU computations.\n3.\tToMA achieves 30%-50% speedups without noticeable sacrifice in image quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "To address the two main challenges in token merging, this paper introduces the TOMA method. TOMA first uses a submodular-based approach to select diverse tokens for merging. It then leverages efficient attention implementations to minimize merge overhead. By abstracting (un-)merging as (inverse) linear transformations, TOMA enables shared computation across layers and further accelerates processing by operating on tokens within local blocks to exploit image locality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This work seems like an enhanced version of ToMeSD, focusing on updated merge rules and additional locality optimizations, but the contributions may not be substantial enough.\n2. 
Regarding experimental results:\n- The paper only tests on the SDXL architecture, limiting generalization claims. As noted in line 372, this method could be extended to SD2 and SD3, so more results on these structures are needed. Actually, using token merging in the DiT structure could theoretically offer greater speedups.\n- The results in Table 1 for ToMeSD are strange, as its inference time is longer than the baseline. Were torch and xformer versions verified to match the official implementation during testing? Without a correct ToMeSD implementation, comparisons may lose significance.\n- FID scores in Figure 5 exceed 25, unusually high for ImageNet.\n- The speedup achieved by ToMA is limited. At a ratio of 0.25, the improvement is just 10%, and while a ratio of 0.75 yields a 20% speedup, it comes with a significant decline in quality metrics.\n- The comparison methods are limited; it would be beneficial to include approaches such as “Token Downsampling for Efficient Generation of High-Resolution Images.”\n3. Some figures and explanations are unclear, e.g., the X-axis in Figure 5." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Refer to Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The use of submodular optimization for token selection effectively reduces information loss during merging, improving quality retention compared to previous approaches.\n\n2. The paper's experiments, which utilize metrics such as CLIP, DINO, and FID on high-quality datasets, demonstrate ToMA's balance between efficiency and image quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Token Merge with Attention (ToMA) to optimize transformer-based diffusion models, addressing inefficiencies in existing token merging methods. By utilizing submodular optimization for token selection, efficient attention mechanisms, and leveraging token locality, ToMA achieves substantial computational speedups with minimal impact on image quality, making it compatible with modern GPU architectures." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The use of locality to limit the scope of attention for computational efficiency, as implemented in ToMA, is not sufficiently novel. Similar approaches have already been explored in methods such as Sparse Transformer[1], DiffRate[2], ToFu (Token Fusion)[3], making it difficult to assess the unique contribution of ToMA.\n\n2. The experimental comparisons are primarily limited to ToMeSD, without benchmarking against other prevalent methods such as Token Pruning, Flash Attention, DiffRate[2], ToFu[3], and FRDiff[4].\n\n3. The paper is lack of qualitative visual analysis. 
Without sufficient visual examples, it is challenging to assess ToMA's performance meaningfully, especially in comparison to other acceleration methods.\n\n[1] Child, R., Gray, S., Radford, A., & Sutskever, I. (2019). Generating Long Sequences with Sparse Transformers. arXiv preprint arXiv:1904.10509.\n\n[2] Chen M, Shao W, Xu P, et al. Diffrate: Differentiable compression rate for efficient vision transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 17164-17174.\n\n[3] Kim, M., Gao, S., Hsu, Y.-C., Shen, Y., & Jin, H. (2023). Token Fusion: Bridging the Gap between Token Pruning and Token Merging. arXiv preprint arXiv:2312.01026.\n\n[4] So J, Lee J, Park E. FRDiff: Feature Reuse for Universal Training-free Acceleration of Diffusion Models[J]. arXiv preprint arXiv:2312.03517, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "Nan" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1.\tI noticed that in the smaller steps of Figure 4, the average token intersections across different layers are significantly different. In these steps, could the sharing of both destinations and attention weights between layers lead to a notable loss in performance?\n\n2.\tHow is the scale of the set of destinations determined? Specifically, how is the size of 𝐷 chosen?\n\n3.\tThe terms \"Dino\" and \"Clip\" mentioned in line 475 should be aligned with the entries in Table 3." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper exhibits several strengths:\n\n1.\tThe motivation and methodology are both reasonable and intuitive.\n\n2.\tThe generation model (SDXL) is significantly accelerated by merging and unmerging tokens before and after attention, along with additional speed-up settings, all without any loss in quantitative performance indicators." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Token Merge with Attention (ToMA) to tackle the issues of limited image quality due to the loss of important tokens and the inefficiency of attention mechanisms. The authors establish ToMA through three major components: a submodular-based token selection method, an efficient attention implementation, and (un-)merging as (inverse-)linear transformations. Based on this design, the paper significantly reduces the inference time of the text-to-image generation model (SDXL)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper exhibits several weaknesses:\n\n1.\tLack of qualitative comparison with ToMeSD.\n\n2.\tThe visual effects of ToMA are underrepresented, and quantitative indicators only partially reflect the quality of generation. 
More samples are needed to substantiate claims about \"the best trade-off between image quality and speed.\"\n\n3.\tCurrent Text-to-Image models (such as Flux and SD3) based on diffusion transformers have achieved new state-of-the-art results. While the paper states that ToMA can be applied to any attention-based T2I model, it is recommended that the authors verify ToMA's performance on the latest T2I models to enhance persuasiveness.\n\n4.\tIn the bottom of Figure 6, ToMA introduces considerable noise compared to the original result. Does this imply that, despite ToMA showing less performance loss in quantitative evaluations, it incurs greater performance loss in terms of visual perception?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "My primary concern is the lack of discussion regarding the DiT model. Could the authors provide additional results or discussions specifically related to DiT image generation models, such as PixelArt-Alpha?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The use of attention-based operations for token merging is well-designed and makes sense. Additionally, the authors' choice to make it an invertible function is highly meaningful, and they thoughtfully consider GPU implementations.\n\n- The authors also discuss sharing destination selections across steps, which indeed reduces computational costs and enhances the practicality of the overall approach.\n\n- The experiments are comprehensive and well-executed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Previous token selection methods have overlooked the relationships between tokens and have not utilized the latest attention implementations, limiting actual speedup.\nThis paper proposes a submodular function-based token selection mechanism and introduces an attention-based approach for merging and unmerging tokens. This design leverages the benefits of modern attention acceleration libraries and is reversible in nature.\nAs a result, the authors' method achieves an optimal trade-off between performance and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The manuscript lacks discussion on the DiT model, focusing only on Stable Diffusion. In the \"Local Region\" section, it would be beneficial to include insights on how this technology could be adapted for DiT-like models. Without convolution layers, it is unclear if the locality is still evident enough to support the use of this method. 
Can you provide a brief analysis or discussion on how the locality assumptions might change for DiT models, and whether any modifications to the proposed method would be needed to accommodate those differences?\n\n- Some figures could be improved for better visualization; e.g., it is a little difficult to differentiate different methods in Figure 5.\n\n- The manuscript contains redundant content, specifically in lines L142-L150 and L151-L155, where identical information is repeated." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an improved token merging algorithm to speed up diffusion." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024toma,\ntitle={To{MA}: Token Merging with Attention For Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xhtqgW5b93},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion models have emerged as leading models for image generation. \nPlug-and-play token merging techniques have recently been introduced to mitigate the heavy computation cost of transformer blocks in diffusion models. \nHowever, existing methods overlook two key factors: 1. they fail to incorporate modern efficient implementations of attention, so the overhead undermines the achieved algorithmic efficiency; 2. the selection of tokens to merge ignores the relations among tokens, limiting the image quality. \nIn this paper, we propose Token Merging with Attention (ToMA) with three major improvements. Firstly, we utilize a submodular-based token selection method to identify diverse tokens as merge destinations, representative of the entire token set. Secondly, we propose attention merge, utilizing the efficient attention implementation, to perform the merge with negligible overhead. We also abstract the (un-)merging as (inverse-)linear transformations, which allows shareable transformations across layers/iterations. Finally, we utilize the image locality to further accelerate the computation by performing all the operations on tokens in local tiles." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion", "Token Merge", "Attention" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/731cdda030b454cf7258c80592cbcd9a846466d2.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/fcdac60247f71d978b4212fd423cabccddba78a2.zip" }, "title": { "value": "ToMA: Token Merging with Attention For Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xi3sDtf8A0
L-MSA: Layer-wise Fine-tuning using the Method of Successive Approximations
main
Active
layer-wise finetuning;parameter-efficient fine-tuning;method of successive approximations
foundation or frontier models, including LLMs
3;3;3;3
4;3;4;4
2;3;3;2
2;2;2;2
3;2;1;2
3
3.75
2.5
2
2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why were methods like LoRA and other recent parameter-efficient fine-tuning techniques not included in the comparison? Would the results hold up against these widely-used benchmarks?\n\n2. How does the method perform on more diverse and real-world tasks outside the provided datasets? Are there specific domains where L-MSA is particularly effective or less so?\n\n3. Given the still significant computational demands, how does L-MSA compare in terms of efficiency gains relative to other lightweight fine-tuning techniques? Is the trade-off between computational cost and performance improvement justified?\n\n4. The method currently selects one layer at a time for fine-tuning. Could this approach be extended to simultaneously fine-tune multiple layers, and if so, how would it impact performance and computational cost?\n\n5. Have the authors considered dynamically reselecting layers during the training process? Would this adapt better to changing training dynamics and potentially improve generalization?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. **Novel Layer Selection Metric:** The paper introduces a new metric for layer selection based on the Method of Successive Approximations (MSA), which is theoretically grounded and offers a fresh perspective on fine-tuning strategies.\n\n2. **Theoretical Foundations:** The authors provide a comprehensive theoretical analysis within the context of deep linear networks, which strengthens the credibility of the proposed approach.\n\n3. **Empirical Validation:** The paper demonstrates the effectiveness of L-MSA across multiple datasets and tasks, showing consistent improvement over several existing layer-wise fine-tuning methods.\n\n4. **Parameter-Efficiency Focus:** By targeting specific layers for fine-tuning, the method aims to reduce computational costs, addressing a key challenge in training large-scale models.\n\n5. **Clear Contributions to Layer-Wise Fine-Tuning:** The research highlights the potential of selectively fine-tuning layers to achieve better performance, contributing to the growing field of parameter-efficient fine-tuning methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel approach for efficient fine-tuning of large-scale models. It addresses the challenge of substantial memory consumption associated with such models by introducing L-MSA, a method that fine-tunes only selected layers based on a new metric derived from the Method of Successive Approximations (MSA). 
This metric guides layer selection and optimizes the fine-tuning of the selected layers, resulting in better model performance with reduced computational costs.\n\nThe paper provides a theoretical analysis within deep linear networks and evaluates L-MSA across various datasets and tasks. It compares the proposed method with other layer-wise fine-tuning techniques, demonstrating its superior performance. The experimental results show that L-MSA consistently identifies the most impactful layers for fine-tuning and optimizes them effectively, outperforming baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Lack of Comparison with State-of-the-Art Methods:** The paper does not compare L-MSA with prominent methods like **LoRA** [Hu et al.], which limits the evaluation of its relative effectiveness and practical impact.\n\n2. **Disjointed Presentation:** The flow of the paper is disrupted by referencing equations and concepts out of order, requiring readers to frequently navigate back and forth, which hampers comprehension and readability.\n\n3. **Inadequate Contextualization of Contributions:** The novelty of the proposed method is not sufficiently contextualized against a broader range of parameter-efficient fine-tuning techniques, making it harder to assess its uniqueness and value.\n\n4. **Generalization Concerns:** The paper acknowledges that the approximated updated loss may not always guarantee strong generalization to test data, which raises questions about the robustness of the approach in diverse real-world scenarios.\n\n5. **Computational Demands:** Despite aiming for parameter efficiency, the method still involves substantial computational overhead due to both forward and backward propagation, which could limit its practicality for very large-scale models.\n\n6. **Limited Scope of Empirical Comparisons:** While the paper evaluates L-MSA across several tasks, the range of comparative baselines is not exhaustive, potentially missing out on broader insights.\n\n[Hu et al., ICLR 2022, LoRA: Low-Rank Adaptation of Large Language Models]" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Why is the Method of Successive Approximations a good choice for layer selection and layer fine-tuning?\n2. What is the goal of the proposed method? Which layer should actually be selected for fine-tuning?\n3. Compared with the baseline fine-tuning methods, what is the main advantage of the proposed method?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The task of parameter-efficient fine-tuning is crucial for practical applications of pretrained models.\n\n2. The authors performed an in-depth analysis of the effectiveness of fine-tuning at the layer level." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores layer-wise fine-tuning strategies for adapting large pretrained models. The authors found that fine-tuning specific layers can lead to varied performance outcomes, and selectively fine-tuning certain layers may enhance results. Building on these insights, they propose a novel layer-wise fine-tuning approach that leverages Pontryagin’s Maximum Principle to guide layer selection for fine-tuning. The effectiveness of this method is demonstrated through transfer learning from ImageNet to CIFAR100." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper's novelty is somewhat limited, as the idea that fine-tuning different layers can lead to varying performance outcomes has already been widely explored in previous research. It is not clear what is the main contribution of the paper. \n\n2. The proposed method, which relies on the Method of Successive Approximation, lacks clear motivation. It’s unclear why this approach would lead to improved layer selection or how it compares favorably to other existing methods. In other words, what are the benefits of the proposed approach?\n\n3. For the theoretical analysis, it is not clear what is the main contribution. The authors should also connect the theoretical analysis to the proposed method and discuss why the proposed method can lead to better layer selection and fine-tuning. \n\n4. The explanation based on PMP is interesting, but the main technical contribution is not clear. It would be helpful if the authors could clarify where the primary novelty lies.\n\nOverall, while the paper presents a reasonable idea, it suffers from poor writing, insufficient motivation and unclear contribution." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- [Suggestion] Some of the contributions listed in the paper need to be merged and toned down. The paper currently lists four contributions. not contributions. The first contribution is closely related with the fourth one. Hence my suggestion to merge them, Moreover, this fourth contribution is related to the empirical validation of the proposed method. In this regard, the proper validation of a proposed method is not a plus but a must for a decent piece of scientific work. Therefore, I would suggest toning down this claim.\n \n- [Suggestion] In Fig. 6, I suggest reporting the loss curves on a per-dataset basis. It might also be informative to discuss any trend observed during the training iterations.\n \n- [Question] In l.379, it is indicated that “…he results illustrate that our L-MSA metric consistently identifies layers associated with improved training loss, effectively pinpointing those that contribute to better training outcomes...“ May you indicate how do you observe this in that figure? From what I can interpret, the results from Fig. 
4 are point estimates at a given training iteration. Therefore, it is hard for me to assess how this figure allows one to observe consistent behavior.\n \n- [Question] In l.408, it is stated that “…Notably, using L-MSA for layer-wise fine-tuning results in performance improvements of up to 20% compared to full fine-tuning”. Could you indicate the source of this observation?\n \n- [Question] In Sec. 4.3 a “No Adaptation” baseline is included. In l.432 it is indicated that this baseline “…provides a reference point for model performance without fine-tuning”. Could you motivate how good a reference this baseline is for datasets like CIFAR-Flip and ImageNet-C, considering the number of output classes is significantly different (CIFAR-Flip) and/or reduced (ImageNet-C)?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- A theoretical analysis for the proposed method has been provided (Sec. 3 & Sec. A.1).\n \n- The empirical evaluation of the proposed method includes a good variety of datasets and competing methods. This allows one to assess the applicability of the proposed method in different contexts.\n \n- The validation of the proposed method includes an ablation analysis that is effective in obtaining insights into how the different components of the proposed method contribute to its performance and how it compares w.r.t. full fine-tuning.\n \n- The paper adequately highlights the limitations of the proposed method (Sec. 5)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for a more targeted fine-tuning process where, instead of updating every single layer/parameter in the architecture, the layer that has the strongest effect on the total loss is selected and updated on each iteration of the fine-tuning process.\n\nTowards this goal, the paper proposes a metric to select the layer to be updated and an algorithm for the fine-tuning of the selected layers.\n\nExperiments on several datasets (CIFAR-100, CIFAR-C, CIFAR-Flip, Living-17 and ImageNet-C) including several related methods (full fine-tuning, LIFT, LISA, surgical Fine-tuning and Auto-RGN) provide evidence on the performance of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The originality of the paper is somewhat reduced. While the method put forward by the paper outperforms (in some cases) the considered baselines, as admitted by the paper (Sec. 4.1), there are already methods in the literature that aim at a more targeted fine-tuning process.\n \n- In its current form the paper does not feel self-contained; several aspects of the paper are delegated to the appendix. For instance, the models considered in the experiments of Sec. 4.2 are not explicitly indicated. In other cases, the protocols and motivations behind the conducted experiments are not clear. Here a proper balance must be achieved to ensure the paper provides sufficient detail to allow the reader to critically analyze the proposed method and its conducted validation.\n \n- While pairing a method with its corresponding theoretical analysis is very desirable, the extent to which such analysis is currently provided (Sec. 3) is just too shallow to be informative. 
Perhaps it could be completely moved to the appendix in order to allocate space to further elaborate on other parts of the paper that are currently not detailed enough.\n \n- Some statements seem to lack supporting evidence. For instance, in several parts of the paper (l.535) statements are made regarding the reduced computational costs of the proposed method. The evaluation section, however, is missing a proper experiment addressing that aspect.\n \n- The improvement put forward by the proposed method is not that pronounced. For instance, in Table 1 it is observed that the proposed method is better in only 2/4 of the considered settings.\n \n- In Fig. 6, averaged results over different datasets are reported. This not only obscures the variations in performance across datasets, but also the behavior observed for the different methods across the considered datasets.\n \n\n- Weak positioning: a good part of the related work is centered around aspects that are not directly related. For instance, Sec. 6.1 discusses large architectures, which has close to no link to the proposed method. Similarly, l.497-502 from Sec. 6.2 relate to prompt-based methods, thus having a relatively weak link with the proposed method. This weak positioning w.r.t. related efforts becomes more evident when we consider that almost none of the related methods considered in the experimental section (Sec. 4) are covered in the related work section. In addition, I would suggest looking at the two references below, which seem to be very related to the proposed method.\n \n - Youngmin Ro, Jin Young Choi, \"AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks\", AAAI 2021\n - Basel Barakat; Qiang Huang, \"Enhancing Transfer Learning Reliability via Block-Wise Fine-Tuning\", ICMLA 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Q1. The other works compared in this paper are all about NLP and LLMs. Why did the authors apply these methods only to image datasets and test them on image datasets, instead of on the NLP datasets used in the other papers?\n- Q2. Does L-MSA introduce additional memory overhead compared to the random layer selection used in other papers? I have not been involved in PEFT, but in theory it seems to require the same memory spikes as full fine-tuning, which runs counter to the motivation of the authors. The authors should provide a detailed analysis of the memory usage of L-MSA compared to full fine-tuning and other PEFT methods, including peak memory usage and average memory consumption.\n- Q3. Does performing MSA on the loss of the network introduce more training time? Especially when the network size increases, it is necessary to calculate and compare the state and co-state variables layer by layer.\n- Q4. In Figure 5, in the ImageNet to CIFAR-100 experiments, it can be clearly observed that Full-Finetuning and Auto-RGN continue to converge, while L-MSA seems to have completed convergence. 
Can you show the results for more epochs?\n\nIf the authors resolve my questions, I will consider raising my rating." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and easy to follow.\n- The experimental results seem to be promising, surpassing full fine-tuning on some of the datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The machine learning community has made remarkable progress with the advent of large-scale models, but the large amount of memory these models consume during training has become a significant obstacle. The paper proposes a fine-tuning method using the Method of Successive Approximations, L-MSA. Experiments on different datasets verify the effectiveness of L-MSA." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- This paper emphasizes its advantages on large-scale models, but the experiments focus on extremely small datasets such as CIFAR, and the method shows the worst performance when fine-tuning on ImageNet-C. The only baseline it seems to outperform, LIFT, is from a paper that has not yet been published through peer review. This makes me worry about its practical prospects.\n- Not enough experiments: the paper is motivated by the fact that large-scale models consume memory, but there is no comparison of memory overhead in the paper." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose L-MSA, a novel layer-wise fine-tuning approach, which encompasses both the criterion for layer selection and the algorithm for fine-tuning the targeted layer." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024lmsa,\ntitle={L-{MSA}: Layer-wise Fine-tuning using the Method of Successive Approximations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xi3sDtf8A0},\nnote={under review}\n}" }, "abstract": { "value": "With the emergence of large-scale models, the machine learning community has witnessed remarkable advancements. However, the substantial memory consumption associated with these models has emerged as a significant obstacle to large-scale training. To mitigate this challenge, an increasing emphasis has been placed on parameter-efficient fine-tuning methodologies, which adapt pre-trained models by fine-tuning only a subset of parameters. We observe that in various scenarios, fine-tuning different layers could lead to varying performance outcomes, and selectively fine-tuning certain layers has the potential to yield favorable performance results. Drawing upon this insight, we propose L-MSA, a novel layer-wise fine-tuning approach that integrates two key components: a metric for layer selection and an algorithm for optimizing the fine-tuning of the selected layers. By leveraging the principles of the Method of Successive Approximations, our method enhances model performance by targeting specific layers based on their unique characteristics and fine-tuning them efficiently. We also provide a theoretical analysis within deep linear networks, establishing a strong foundation for our layer selection criterion. 
Empirical evaluations across various datasets demonstrate that L-MSA identifies layers that yield superior training outcomes and fine-tunes them efficiently, consistently outperforming existing layer-wise fine-tuning methods." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "layer-wise finetuning", "parameter-efficient fine-tuning", "method of successive approximations" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1e54a5d336fb0477b769b6c3fdbbedece45f7f28.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/a867fef986d3459ce644226439a031bbd29c1b76.zip" }, "title": { "value": "L-MSA: Layer-wise Fine-tuning using the Method of Successive Approximations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xiDJaTim3P
Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models
main
Active
Federated learning;prompt learning;vision-language model;mixture of experts
alignment, fairness, safety, privacy, and societal considerations
3;6;6;6
4;3;5;2
2;3;3;3
2;2;3;3
1;2;3;3
5.25
3.5
2.75
2.5
2.25
-0.258199
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Refer to the weakness part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tThe method is novel which allows the client download prompts from others.\n2.\tThe experiments are extensive and show good performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces pFedMoAP that enables effective federated prompt learning for vision-language models like CLIP. The key innovation is allowing clients to download multiple pre-aggregated prompts as fixed non-local experts rather than being restricted to a single globally aggregated model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tIt is very similar to the prompt learning framework. How does your method differ from the classic prompt learning framework if you can directly access prompts from other clients?\n2.\tI noticed that similar work, like pFedPrompt, conducted experiments on large-scale datasets like UCF-101 or ImageNet. Would the scale of the dataset influence the performance?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. In Section 4.1 **Implementation details.**, for CIFAR10&CIFAR100, why the participation rate is 10%? Does it mean only one client will be selected to join the training process in each communication round?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Implementing prompt learning into the distributed environment is an important topic for FL applications due to its efficiency in computation and communication.\n\n2. The proposed learning framework is simple and effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a lightweight federated prompt learning framework to personalize the prompt learning process through the lens of mixture of experts (MoE). An attention-based gating network is also introduced to achieve efficient cross-client knowledge sharing. 
Experiments indicate that the proposed **pFedMoAP** performs better than state-of-the-art methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The quality of the writing is poor. The exposition is somewhat obscure and not concise enough. Besides, there are many typos in the paper, e.g., missing spaces between two adjacent words in Line 27, Line 198, Line 415, and Line 469. \n\n2. The novelty is limited. The **pFedMoAP** seems like a naive combination of PromptFL and MoE. Although the selected non-local experts are updated in each communication round, the intuition behind this is not explained well, which makes the argument unconvincing. \n\n3. It is unclear why the attention-based gate network does not need to be uploaded to the server for aggregation. Please clarify.\n\n4. I think the ablation study of the paper is not sufficient. \n- To study the effectiveness of the MoE-based textual feature enhancement mechanism, the authors should add experiments under different values of $K$ to confirm how performance changes.\n\n- The authors implement a dimension reduction operation in Section 3.3; please add experiments to show whether this affects performance. \n\n- How about a larger number of clients, e.g., 100 or more?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The integration of the MoE method into prompt-based federated learning is an innovative concept.\n\n2. The paper is well-organized and effectively communicates its main contributions. The methodology is clearly explained, particularly regarding the incorporation of the MoE approach in the federated learning context." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Mixture of Experts (MoE) approach for prompt-based federated learning, employing a local attention-based gating network to refine text features for better alignment with local image data. It also highlights the lightweight design of the prompt, exploring a configuration where the prompt is updated directly by other clients rather than relying on an aggregated prompt from the server." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While this approach utilizes inter-client sharing, it could expose prompt updates directly to other clients, which may lead to privacy concerns as prompt updates could be tracked, making the model susceptible to certain attack algorithms. \n\n2. In the related work section in the appendix (Line 783), the authors mentioned that previous work utilized inter-client sharing prior to aggregation. 
However, these works allow prompt aggregation in the server and do not share local prompts with other clients. This sentence is somewhat unclear; please provide a clearer version to avoid misunderstandings. \n\n3. The paper claims that the MoE-based method can leverage prompts from other clients. Please provide experimental evidence to show whether your method’s effectiveness is sensitive to the total number of clients and the value of $K$. \n\n4. Some related works are encouraged to be properly cited in Related work. For example, personalized federated learning [a,b] and prompt-based federated Learning [c]. \n\n[a] Li, et al, FedTP: Federated Learning by Transformer Personalization, TNNLS 2023. \n\n[b] Cai, et al, Fed-CO2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning, NeurIPS 2023. \n\n[c] Pan et al, Federated Learning from Vision-Language Foundation Models: Theoretical Analysis and Method, NeurIPS 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. pFedMoAP proposes a novel framework that personalizes the prompt learning process by allowing clients to download multiple pre-aggregated prompts as fixed non-local experts. This leverages the concept of Mixture of Experts, where a local attention-based gating network is implemented to generate enhanced text features that better align with the local image data on the client side.\n\n2. By facilitating a many-expert scenario (e.g., 10 experts) per client with negligible communication overhead, pFedMoAP achieves efficient and effective personalization. This approach overcomes the limitations of having too many or too few experts, which can lead to high communication costs or suboptimal performance, respectively.\n\n3. The algorithm consistently outperforms existing work across various heterogeneous federated settings, as demonstrated through extensive experiments on 9 datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new paradigm in federated prompt learning for pre-trained Vision-Language Models (VLMs) such as CLIP, challenging the traditional restriction that clients are limited to downloading a single globally aggregated model. This shift is particularly beneficial for lightweight prompts, enabling more flexible and effective knowledge sharing among clients." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Determining which prompt experts should be included in the pool and when to update or replace them requires careful management. If the selection process is not optimal, it could lead to suboptimal performance. 
Additionally, maintaining a diverse and up-to-date pool of experts is crucial but can be complex.\n\n2. The local attention-based gating network, while designed to be efficient, may still introduce optimization challenges. Ensuring that this network converges to a solution that effectively combines local and non-local prompts is not trivial, especially when dealing with heterogeneous data distributions.\n\n2. As the number of clients increases, managing a dynamic pool of prompt experts and maintaining efficient communication between the server and all clients can become more challenging. The complexity of coordinating updates and ensuring that each client receives the most relevant prompts could grow significantly." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose pFedMoAP, a novel personalized federated prompt learning framework for CLIP-like VLMs under data heterogeneity from a Mixture of Experts perspective." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mixture,\ntitle={Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xiDJaTim3P},\nnote={under review}\n}" }, "abstract": { "value": "Prompt learning for pre-trained Vision-Language Models (VLMs) like CLIP has demonstrated potent applicability across diverse downstream tasks. This lightweight approach has quickly gained traction from federated learning (FL) researchers who seek to efficiently adapt VLMs to heterogeneous scenarios. However, current federated prompt learning methods are habitually restricted to the traditional FL paradigm, where the participating clients are generally only allowed to download a single globally aggregated model from the server. While justifiable for training full-sized models under federated settings, in this work, we argue that this paradigm is ill-suited for lightweight prompts. By facilitating the clients to download multiple pre-aggregated prompts as fixed non-local experts, we propose Personalized Federated Mixture of Adaptive Prompts (pFedMoAP), a novel FL framework that personalizes the prompt learning process through the lens of Mixture of Experts (MoE). pFedMoAP implements a local attention-based gating network that learns to generate enhanced text features for better alignment with local image data on the client, benefiting from both local and downloaded non-local adaptive prompt experts. The non-local experts are sparsely selected from a server-maintained pool, fostering collaborative learning across clients. To evaluate the proposed algorithm, we conduct extensive experiments across 9 datasets under various heterogeneous federated settings. The results show that pFedMoAP consistently outperforms the state-of-the-art alternatives, underscoring its efficacy in personalizing prompt learning for CLIP within the federated learning paradigm." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Federated learning", "prompt learning", "vision-language model", "mixture of experts" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4b93ce3fbfa4257d51a27424369e4287a2f6ab04.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6b3cb5025f97bbcdcbdebfc02bd05abc18700b62.zip" }, "title": { "value": "Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xiQNfYl33p
A Generic Framework for Conformal Fairness
main
Active
Fairness;Conformal Prediction;Graph Neural Networks
alignment, fairness, safety, privacy, and societal considerations
5;6;6;6
4;2;2;3
2;3;3;3
2;2;3;3
1;3;2;2
5.75
2.75
2.75
2.5
2
-0.870388
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "The main desideratum is a substantial overhaul of the paper's writing, especially as it pertains to the methodology Section 3. \n\nThere are other minor concerns, such as the exact conformal methods that the current proposed method is tested against. For instance, APS has been superseded in the literature by RAPS, so that should be included in the comparison. Also, importantly, at least a couple of the many recently appearing class-conditional and local conformal methods should also be included in the comparison for the sake of fairness; indeed, these methods are designed to be able to empirically attain partial versions of conditional coverage without receiving explicit constraints (such as fairness ones) as input, so they should provide a stronger baseline than methods like APS." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper's main asset is a reasonably broad evaluation of the proposed methods on a variety of datasets where fairness concerns may necessitate using one of the tabulated conformal fairness metrics. Compared to prior works the experimental section appears more extensive, both in terms of metrics and datasets. Moreover, conformal fairness guarantees were displayed on graph data in addition to the standard supervised tasks. The results indicate that explicitly enforcing fairness according to any of the tabulated fairness criteria is in fact often necessary, as evidenced by vanilla methods not satisfying non-enforced fairness coverage constraints. Furthermore, no drastic deterioration in efficiency guarantees is observed, indicating that the price to pay for fairness conditional coverage may often be acceptable." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The manuscript introduces and studies a general-purpose conformal fairness framework. It consists of an algorithm that can enforce various notions of fairness (such as demographic parity, equalized odds, predictive parity, and several others contained in the ML fairness literature) on the coverage of prediction sets, in the sense of satisfying appropriate conformal guarantees on expected coverage conditional on sensitive labels/attributes. To demonstrate practical performance of the framework given the various fairness notions, experimental evaluation is provided on a variety of supervised learning tasks; besides the standard setting, graph conformal prediction with fairness constraints is also demonstrated in experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "--- The main weakness and bottleneck at this point is the writing of the manuscript. 
In particular, the writing of Section 3, which introduces the objectives, the general algorithm, and the analysis, is currently not acceptable. Indeed, lots of key notions and terms, both on the fairness side and on the conformal side, are not properly and unambiguously introduced and/or are discussed in arbitrary order. \n(Fairness: groups/group collections are never formally defined, nor are requirements on them specified (e.g. non-intersectionality etc); \"filters\" and \"filter functions\" are simply thrown into the presentation. No background on the fairness measures, no elaboration on how they are transformed into conformal analogues compared to original notions, except for an incoherent sentence starting with \"Essentially achieved...\" in line 130. Conformal prediction: auxiliary notions like \"interval widths\", \"label miscoverages\" appear right away in the pseudocode, which also uses clunky notation and for which the textual explanation is as confusing as the pseudocode notation.) This is not to say that one cannot fill in many of the details with enough imagination and enough expertise, but at the moment the writing is hardly structured and accessible enough.\n\n--- The theoretical/methodological side does not offer strong novel contributions; for the most part it simply wraps class-conditional and group-conditional conformal methodology into fairness-specific terminology --- and is currently doing it in a somewhat confusing way, given the currently unsatisfactory presentation as stated above.\n\n--- In addition, the literature background review, where it's claimed that there are \"very few prior efforts\" on fairness and conformal prediction, misses an established line of work on group-conditional fairness guarantees; these works study the enforcement of coverage guarantees (usually of both upper and lower bounds) on rich classes of subpopulations given by possibly arbitrarily overlapping groups.\n\nR. Barber et al: The limits of distribution-free conditional predictive inference [Journal of IMA 2020]\nC. Jung et al: Batch Multivalid Conformal Prediction [ICLR 2023]\nZ. Deng et al: Happymap: A generalized multi-calibration method [ITCS 2023]\nO. Bastani et al: Practical adversarial multivalid conformal prediction [NeurIPS 2022]\nI. Gibbs et al: Conformal Prediction with Conditional Guarantees [2023]" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- How does the framework handle scalability with multiple sensitive attributes, especially computationally?\n\n- Why were existing fairness-aware methods not used as baselines for comparison?\n\n- How practical is the exchangeability assumption for real-world graph data? Any empirical evidence?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Originality: Introduces \"Conformal Fairness,\" extending Conformal Prediction (CP) to address fairness in uncertainty quantification, especially for non-IID data.\n- Quality: Provides strong theoretical backing with rigorous proofs and effective validation through experiments on graph and tabular datasets.\n- Clarity: Clearly defined theoretical concepts and a stepwise algorithm description make the methodology accessible to those with relevant background knowledge. Providing additional background information on key concepts would help readers who are less familiar with the topics to follow more easily\n- Significance: Tackles an important gap by combining fairness with uncertainty quantification. Adaptable to multiple fairness metrics and data types, making it broadly applicable in ethical AI contexts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces \"Conformal Fairness,\" a framework that extends Conformal Prediction (CP) to ensure fairness across sensitive groups. The authors develop a theoretically grounded algorithm to control gaps in coverage between these groups, leveraging the exchangeability assumption instead of the typical IID assumption. This allows the framework to be applied to non-IID data types, including graph data. Experiments on both graph and tabular datasets show that the framework can effectively control fairness gaps while maintaining coverage guarantees." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper lacks a detailed discussion of the fairness-efficiency trade-off. Quantifying acceptable efficiency losses when fairness is improved would make the results more actionable for practitioners balancing both aspects.\n\n- The extension of the exchangeability assumption to real-world data may not always hold. Adding empirical evidence or discussion on when this assumption is valid in practice would make the claims more robust.\n\n-Lack of comparison with all existing fairness-aware methods limits benchmarking. Adding baselines would clarify the framework's effectiveness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I think it would be very helpful for the authors to explain more about their design choices. For example, why choose inverse quantile for miscoverage level? Explain a bit more about the intuitions behind lemma 3.1 and 3.2 would really help the readers." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Conformal prediction is a very interesting area and there are many interesting works in this area. It is a natural question to ask how to achieve fairness for models in this setting. The authors provide a very general framework that can achieve many different definitions of algorithmic fairness. The paper also did many interesting experiments in graph datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework to check and find the suitable threshold for conformal learning. Further, the framework can be easily extended to some constraints, like fairness constraints. The authors specifically studied achieving fairness properties for graph models. Further, the authors studied how to use their framework to check the fairness property of a model. The authors' algorithm first find the right use inverse quantile to check whether a threshold is good." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I personally think the author does not elaborate enough on why they chose this algorithm design. The writing of this paper could be improved." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is well-written with a clear structure.\n\n2. The notion of conformal prediction is common, while its application on fairness is not widely studied to the best of my knowledge. The novelty of this notion is well justified.\n\n3. The conformal fairness metrics in Table 1 are intuitive and well-founded." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a notion of fairness, named $\\textit{conformal fairness}$. In contrast with standard fairness notions (e.g. demographic parity), this notion significantly applies conformal mapping, which the authors clearly defined. The authors developed an algorithm to achieve this type of fairness on graph datas and confirmed with experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the conformal fairness metrics are well-defined in Table 1, they were not carefully discussed in later results (Theorem 3.4). Specifically, Theorem 3.4 states that \"the coverage difference for Predictive Parity is satisfied\". 
In Table 1, the definition of Predictive Parity relies on a conformal prediction $\\mathcal{C}$, but the specific mapping/quantification is not discussed in the theorem statement.\n\n2. This is a continuation of the previous point. Suppose there is a conformal prediction in Theorem 3.4, how good is it, i.e. what shall the value of $\\alpha$ be, as in Equation 1?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Generic Framework for Conformal Fairness},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xiQNfYl33p},\nnote={under review}\n}" }, "abstract": { "value": "Conformal Prediction (CP) is a popular method for uncertainty quantification with machine learning models. While the method provides probabilistic guarantees regarding the coverage of the true label, these guarantees are agnostic to the presence of sensitive attributes within the dataset. In this work, we formalize \\textit{Conformal Fairness}, a notion of fairness using conformal predictors, and provide a theoretically well-founded algorithm and associated framework to control for the gaps in coverage between different sensitive groups. Our framework leverages the exchangeability assumption (implicit to CP) rather than the typical IID assumption, allowing us to apply the notion of Conformal Fairness to data types and tasks that are not IID, such as graph data. Experiments were conducted on graph and tabular datasets to demonstrate that the algorithm can control fairness-related gaps in addition to coverage aligned with theoretical expectations." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Fairness", "Conformal Prediction", "Graph Neural Networks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/4df4abf5fa6f9e591d00bdee1717967002a28b2b.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/808fe5049890b5f355b94384745dcbdfec4f2b00.zip" }, "title": { "value": "A Generic Framework for Conformal Fairness" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xing7dDGh3
Vector-ICL: In-context Learning with Continuous Vector Representations
main
Active
large language models;in-context learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;6;6
4;4;4;3
2;2;4;3
2;2;3;3
2;2;4;3
5
3.75
2.75
2.5
2.75
-0.471405
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How do you see vector-ICL applied in practical scenarios, considering decreasing inference cost and increasing context length of LLMs? Given your experiments show that without task-specific fine-tuning, vector-ICL consistently outperforms tokens-based ICL?\n\n- Did you explore instruction tuning after pre-training? Authors should discuss the tradeoffs between task-specific finetuning and more general approaches like instruction tuning/ human preference training." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The concept of using vectors for in-context learning is a new exploration, and the proposed methods for vector-ICL and evaluations show promising results.\n\n- Using light-weight trainable projectors with simple pre-training on general task is also not very expensive and can be integrated as a part of general LLM pre-training. \n\n- Experiments cover a wide range of tasks and modalities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the feasibility of vector-ICL, that extends the in-context learning capabilities of LLMs to continuous vectors. Authors use light-weight projectors to align embedding space of input text embedding with LLM. To train the projectors, they use general language modeling objective followed by task-specific objectives. Authors experimented with multiple tasks and modalities, e.g., text classification, summarization, time-series classification, fMRI decoding, etc." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Method lacks depth. Specifically, replacing any length text for any complexity task with a single embedding may not be sufficient. Including an ablation where text is replaced with a series of vectors (one vector per sentence/ chunk) would be helpful.\n\n- Currently, the method requires task-specific fine-tuning to outperform token-ICL. I think authors should explore RLHF/ instruction finetuning datasets/ objectives to avoid task-specific finetuning." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Figure 4a: Why does the correlation vary so much between datasets and models of similar size?\n* Figure 4b: What do the axes represent?\n* lines 477-479: Can you expand on the block patterns and how they are explained?\n* How does the finetuning work precisely? Do you finetune with multiple shots already? If so, how many?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The proposed approach is simple to understand and implement.\n* The method is evaluated on a wide range of tasks and datasets, including multiple modalities.\n* Parts of the results suggests that the models perform better with more shots.\n* The paper is relevant to the ICLR audience." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method for in-context learning. By encoding examples into single vector representations via a separate, frozen encoder, and training a simple projection on top of the vector, LLMs are trained to perform ICL from continuous vector representations alone (Vector-ICL). The projection is trained a) via pretraining on unlabeled data and can optionally be b) finetuned on a labeled corpus.\nThe method is evaluated on a range of tasks, including text only tasks like text classification and summarization, as well as multi-modal tasks that utilize encoders for brain scans, graph data, or time-series." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper claims that with finetuning the vector-ICL method outperforms standard ICL. However, The method's baseline is standard-ICL without any finetuning, i.e. the method compares a supervised method (albeit with a weak adapter projection P) with an unsupervised one. A baseline that is obviously missing is when finetuning the base model with standard ICL, perhaps using an adapter that is equally weak. For text classification and summarization, the soft prompting baseline may be adequate, but the reason for its inclusion is never discussed.\n* The method is not well-motivated. In lines 34-38 it states that many data modalities cannot be well represented in natural language, making continuous vector representations necessary for in-context learning. While this is true, many of the experiments on non-textual either don't benefit from multiple shots or need to be trained on downstream data to work (e.g. time-series classification and graph classification in Figure 3, bottom). What's the value of Vector-ICL here? If you need to finetune a model to perform a task, can it still be called in-context learning?\n* The paper structure could be better. The information pertaining to a particular task is scattered all over the paper, such that I found myself scrolling back and forth a lot. For example, I did not find Figure 3 particularly helpful because the information necessary to understand it fully is located far away from it (e.g. what the horizontal bar represents). I think it would be better to focus on fewer tasks in the main body of the paper, but explain these really well. The remainder could go into the appendix." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "q1: How does the proposed approach differ from soft prompt tuning (Li and Liang, 2021)?\n\nq2: In 3.3 it is mentioned that the LLM would be trained with ``conditional generation loss''. However, in Figure 2b that is referred to, the LLM is supposed to be frozen. Could you clarify this point and explain the training process in more detail, including which components are frozen and which are updated during a) pre-training and b) fine-tuning?\n\nq3: For Time Series (l 346), it is mentioned that you use the output of the last time step from Chronos-base. Doesn't this mean that the model is not able to use the full context of the time series but just the final state? What are the trade-offs between using the full context of the time series and just the final state?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "To the best of my knowledge, the proposed method is original (although some connections with soft prompt tuning should be discussed).\nThe paper is for the most part clear and the approach is interesting in the sense that it shows that LLMs can operate in-context on projected vectors. The results seem promising, as the learned projectors allow LLMs to tackle new tasks (graphs, fMRI etc) that are (assumed to be) not possible to tackle without projectors. The experimental part covers a wide range of tasks with different input types (8 tasks, with only 3 being classic text tasks and the proposed method beats its baselines in most cases. Clarity is generally good with some exceptions (see questions)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is about projecting input data into an LLM embedding space, such that the LLM can perform in-context learning on the projections. The paper claims that this approach can improve the performance of the LLM on downstream tasks and opens up new application domains with Graphs, fMRI, and Time Series." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A weakness is that the projectors require pre-training with a language modeling objective and task-specific fine-tuning. It feels this defeats the purpose of in-context learning (i.e. not needing any training data to tackle a new task) to some extent. Authors can reflect if a change of the proposed methods name would be needed here.\n2. Although the paper uses soft prompt tuning as a baseline, the relationship of the proposed approach with soft prompt tuning (Li and Liang, 2021; and follow-up work) is not explicitly discussed, i.e. as both methods learn some artificial token representations that are placed into the prompt.\n3. 
The paper assumes that regular ICL is not applicable to Graphs, fMRI, and Time Series. While arguably not a natural fit, this assumption should be supported by some experiments, e.g. just using the numerical PCA vectors as text, or some flattened representation of the graph edges. As people have even thrown the parameters of another LM at an LM -- just in text format, I could imagine that even those non-text data types could be successfully encoded as text in a more straightforward manner without the need for training artificial tokens.\n4. The experiments on time-series, graph classification and two fMRI tasks do not have a baseline, but only compare the pretrained-only projectors with fine-tuned projectors. It would be interesting to see how the proposed approach compares to some standard baseline of the respective domain, i.e. graph neural networks for graphs.\n\n\nMinor point: In terms of presentation, the results description could directly link to the results, even if repetitive, e.g., Fig 3." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper identifies an interesting research problem and presents a well thought-out exploration. The idea of using embeddings for conditional generation has been explored n various indirect forms (e.g., [1]), but the paper's focus on ICL seems to be a useful contribution.\n\n[1] Text Embeddings Reveal (Almost) As Much As Text (Morris et al., 2023)" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores using embeddings for examples in in-context learning (ICL), which may be more suited in non-textual tasks. The paper proposes a light-weight pretraining and finetuning scheme to obtain the encoder. The paper shows improvements over textual ICL on various non-textual tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed framework is back to encoder-decoder. In this context, previous works like Flan-T5 have thoroughly explored the model's ICL capabilities [2]. While the paper differs since it focuses on a single embedding and non-textual tasks, it's a bit odd to omit this context.\n\n2. Intuitively vector-ICL should not work as well as textual ICL for a lot of tasks. Clearly if the task requires a lot of symbolic data ICL is strictly more powerful than the proposed approach. But the paper seems to mostly show positive results. It'd be good to do some exploration of the cases vector-ICL is worse than textual ICL and by how much. 
\n\n[2] Scaling Instruction-Finetuned Language Models (Chung et al., 2022)" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We discover that large language models can effectively process and in-context learn from continuous representations from various domains, often outperforming regular ICL and domain-specific models across diverse tasks and modalities." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024vectoricl,\ntitle={Vector-{ICL}: In-context Learning with Continuous Vector Representations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xing7dDGh3},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLMs) have shown remarkable in-context learning (ICL) capabilities on textual data. We explore whether these capabilities can be extended to continuous vectors from diverse domains, obtained from black-box pretrained encoders. By aligning input data with an LLM's embedding space through lightweight projectors, we observe that LLMs can effectively process and learn from these projected vectors, which we term Vector-ICL. In particular, we find that pretraining projectors with general language modeling objectives enables Vector-ICL, while task-specific finetuning further enhances performance. In our experiments across various tasks and modalities, including text reconstruction, numerical function regression, text classification, summarization, molecule captioning, time-series classification, graph classification, and fMRI decoding, Vector-ICL often surpasses both few-shot ICL and domain-specific model or tuning. We further conduct analyses and case studies, indicating the potential of LLMs to process vector representations beyond traditional token-based paradigms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language models", "in-context learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/85fc1ddf9c80f4dbb8482bf8a871242ea3d563d8.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Vector-ICL: In-context Learning with Continuous Vector Representations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xiyzCfXTS6
Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design
main
Active
Combinatorial Bayesian Optimization;Game Theory;Gaussian Processes;Protein Design
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;3;6;8
3;3;3;4
2;2;3;3
2;2;2;4
2;3;3;3
5
3.25
2.5
2.5
2.75
0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- ***Challenges in Finding Nash Equilibrium***: Given that finding a Nash Equilibrium is PPAD-complete and challenging even with state-of-the-art solvers, could the authors provide more insight and detailed discussion on optimizing the equilibrium for protein applications, especially in high-dimensional cases? The paper could better illustrate why the benefits of introducing the equilibrium outweigh the additional challenges rather than implying that solvers are guaranteed to find the NE. This clarification would prevent confusion among readers not familiar with the literature, especially since multiple NE solvers are discussed while claiming guarantees of finding local optima.\n\n- ***Price of Anarchy Discussion***: The discussion of the price of anarchy briefly addresses the motivation for the problem formulation but is not sufficiently convincing. Providing more details and insights specific to protein design tasks could improve the presentation and strengthen the argument.\n\np.s. Due to my limited domain expertise, I am not confident in assessing the novelty of the problem formulation in this specific application." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- ***Innovative Formulation***: The paper introduces an interesting formulation by shifting the focus from kernel design for protein structures to an objective that reflects the insight that multiple separate contributing factors exist within protein design tasks.\n\n- ***Sample Efficiency Guarantee***: The algorithm comes with a sample efficiency guarantee, which is valuable for practical scientific experimental design tasks where data collection can be costly and demand a principled optimization solution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel method that frames the protein design task as an optimization problem of finding a Nash Equilibrium (NE) with unknown utility functions. It provides sample efficiency guarantees, and empirical results using multiple NE solvers suggest promising improvements over existing baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- ***Limited Novelty in BO Guarantees***: The Bayesian Optimization (BO) component naturally inherits the guarantees of the Upper Confidence Bound (UCB) algorithm. There doesn't appear to be a significant difference or improvement, especially considering that the method does not address specific challenges unique to protein design.\n- ***Lack of Comparison with High-Dimensional BO Methods***: BO methods tackling protein design typically involve specific treatments for high-dimensional spaces. 
Including comparisons against these methods would strengthen the paper, especially since the ESM embedding allows the direct application of such methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.\tDoes this algorithm work without the assumption $|\mathcal{X}^{(i)}|=d$? That is, in the case where the numbers of possible values of the decision variables are different.\n2.\tWhat is the v function in Algorithm 2? Is that the r function defined in Definition 3.1?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tCombinatorial Bayesian optimization (CBO) is a very important subfield in Bayesian optimization. While Bayesian optimization has been successfully applied to many areas, studying CBO helps us solve even more practical application problems, like protein design.\n2.\tSample complexity guarantees are provided to show the algorithm’s ability to achieve an approximate Nash equilibrium.\n3.\tExperiments on protein design are comprehensive and ablation studies are provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies how to solve the combinatorial Bayesian optimization problem, which has many important potential applications like protein design. The key idea of this work is introducing the Nash equilibrium to Bayesian optimization, where a game is carefully designed for domain variables to play and the decision making is driven by equilibrium-finding algorithms. It’s good to see substantial experiments in protein design in the end." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.\tTo be honest, as an ML researcher, I have limited background in Nash equilibria, and I think it could be beneficial for this paper to provide more background when introducing the Nash equilibrium since this is an ICLR submission. Although locally optimal, why and how does the Nash equilibrium help in the CBO setting? Could you provide a brief comparison between the local optimality of Nash equilibria and global optimality in the context of CBO? Can we design some globally optimal CBO algorithms, or are the authors aware of such algorithms? Also, while it might be hard to theoretically investigate the performance difference between GameOPT and the globally optimal solution, it is feasible to investigate it in experiments where a small finite decision space is used. That would provide valuable insights into the method's effectiveness.\n2.\tI have concerns about the computational efficiency of Step 5 of Algorithm 1. Because the GP is placed on the function f, which is now defined on the combinatorial space, how do you find the top B equilibria according to UCB? 
Could you provide more details on how to implement this step efficiently, discuss the computational complexity of this step compared to the overall algorithm, and explain any approximations or heuristics that may be used if an exact solution is computationally infeasible?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is there any consideration of the balance between exploration and exploitation? How is data efficiency guaranteed?\n2. What is the influence of the batch size B? In what cases may the algorithm perform worse?\n3. Why not compare against baselines using tree-based surrogate models? As far as I know, tree models are a suitable choice for protein design.\n4. How can one show that solving equation (2) is more tractable than optimizing other acquisition functions?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Discrete black-box optimization problems are important, and the proposed game-theoretical solution seems novel.\n2. Protein design is vital in drug discovery and healthcare.\n3. The presentation is clear and the idea is easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies combinatorial Bayesian optimization in discrete search spaces. Based on the theory of cooperative games and the upper confidence bound of Gaussian processes (GPs), a new acquisition function that seeks to find the local equilibria of players is proposed. Based on batch and parallel computation, the proposed method achieves better scalability. The effectiveness and competitiveness are demonstrated in the application of computational protein design." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There is no mechanism to avoid or mitigate getting stuck in local optima. At the same time, as a Bayesian optimization method, the exploration-exploitation balance is not considered. As a preliminary solution, random restarts can be introduced into the search process, similar to what the trust-region Bayesian optimization algorithm does [1], to ensure exploration and global search ability. \n2. Since only local optima are targeted, the regret bound is not insightful enough to show the method's superiority. Currently, the regret bound is similar to that of the original GP-UCB, but only local optima are guaranteed, whereas GP-UCB ensures global search and convergence. In my opinion, assuming convexity in the local region containing equilibria may help improve the theoretical results [2].\n3. The testing benchmark problems are not diverse enough. Some well-known synthetic problems are required for a fair comparison, especially those employed in state-of-the-art work.\n4. The influence of hyper-parameters, especially the batch size B, is not clear.
The baseline algorithms, especially the GP-UCB, are not tailored for batch optimization. It is recommended to consider a smaller B, such as B=1, first. \n5. The literature review is insufficient. More baseline algorithms in combinatorial Bayesian optimization and general optimizers should be considered.\n\n[1] David Eriksson, et al.: Scalable Global Optimization via Local Bayesian Optimization. NeurIPS 2019.\n\n[2] Shuang Li, et al.: \"Why Not Looking backward?\" A Robust Two-Step Method to Automatically Terminate Bayesian Optimization. NeurIPS 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Large-scale combinatorial optimization is an important challenge, though Bayesian optimization (BO) may not be the most typical approach for tackling it.\n- Theoretical analysis is provided for the convergence rate to an $\\epsilon$-Nash equilibrium. However, it does not necessarily equate to finding the global optimum of the objective function, and the results are not significantly different from known results in the existing game-theoretic BO literature." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles scalability challenges in combinatorial black-box optimization tasks, specifically when each dimension contains the same number of discrete points. While this assumption is not universally applicable in combinatorial Bayesian optimization, it is relevant in specific domains, such as protein engineering. The authors introduce GameOpt, a novel approach that leverages the uniform structure of the discrete search space. By reframing the combinatorial optimization problem as an $n$-player cooperative game, the method aims to identify an $\\epsilon$-Nash equilibrium at each iteration. This transformation enables the application of scalable solvers and polynomial-time equilibrium-finding techniques from game theory, offering a computationally efficient strategy for optimizing the acquisition function, particularly GP-UCB." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Clarity**\nThe weakest part of this paper is definitely the clarity, particularly in the math. There are so many uncommon, unexplained, and undefined symbols that it becomes frustrating to read. Detailed below:\n\n**Problem Statement**. What exactly is $x$? I initially understood it as a $d \\times n$-dimensional variable. However, in Line 107, the statement is \"f(x) corresponds to --- the amino acid sequence $x$, and each $x$ can take $20^n$\". I'm lost. Is $x$ meant to be a $d$-dimensional variable or a $d \\times n$-dimensional variable?
I also can’t interpret the meaning of \"each $x$ can take $20^n$\". Shouldn't $d = 20$ in this case? Additionally, the definition of the fitness function $f(x)$ is unclear. Is $f(x)$ defined for each discrete variable, or is it applied to the entire variable set? For example, if $x$ is assumed to be $d$-dimensional, does our objective function take the form $\\prod_{i=1}^n f(x^{(i)})$? Or is $x$ actually $d \\times n$-dimensional, and thus $f(x)$ refers to something else? I’ll proceed under the assumption that $x$ is $d \\times n$-dimensional.\n\n**Section 3**. I understand the reward $r^{(i)}$ here is the UCB defined for $x$ as a $d \\times n$-dimensional input, and each player $i$ corresponds to one of the discrete variables. However, since UCB is defined on the entire $d \\times n$-dimensional space, how can we apply UCB as $r^{(i)}$ then? How do you select the other $n-1$ variables? Or does $r^{(i)}: \\mathcal{X} \\rightarrow \\mathbb{R}$ imply that each player takes a $d \\times n$-dimensional input and $r^{(i)}$ = UCB for all $i$? If that is the case, and all players share the same utility function, why do equilibria arise? Wouldn't this reduce to the same optimal points for all players? Additionally, in Definition 3.1, $r^{(i)}$ takes two arguments, but this is supposed to be UCB. Why does UCB take two arguments? What is arg eq? What exactly does Eq (2) represent? I'm also confused by the explanation on Lines 172-173: \"Intuitively, equilibria are computed by breaking down the complex decision space into individual decision sets.\" Could you precisely explain this process before jumping into the intuition? How is finding a Nash equilibrium equivalent to maximizing UCB? What is the payoff $v$ in Algorithms 2 and 3? I'm lost again. \n\nI'll stop pointing out specific issues here, but I want to note that the same level of confusion persists throughout the remaining sections (although the introduction was quite smooth).\n\n- **Lack of Critical Guarantee**\nIs finding a Nash equilibrium mathematically equivalent to maximizing the objective function $f$? If I understand correctly, this process is more akin to constructing a Pareto frontier, which contains the global maximum, making it more aligned with finding the set of local maxima. How, then, does this algorithm guarantee global convergence? If the goal of this work is local optimization or a heuristic approach, this should be clearly stated in the title or assumptions. Approximations can be valuable, but only if they provide some guarantee that they can recover the original problem under some conditions. Existing work, such as [1], provides such guarantees. I am not surprised that a local optimization algorithm finds local optima (as this work does) faster than a global optimizer (like [1]) in certain experiments, as they target different objectives. Even with these results, I wouldn’t choose this method for global optimization. When the function query is highly expensive, the computational overhead of the acquisition function becomes comparable. Moreover, [1] is not exactly state-of-the-art as it dates to 2022. There are likely more recent studies, such as [2], though performing a literature search is not my role here.\n- **Limited Evaluation to Assess Practicality**\nHeuristic approaches are completely acceptable if they prove useful in real-world applications. However, in that case, we expect the algorithm to be genuinely practical and perform best among existing heuristics.
In the context of protein optimization, chemists would most likely start with deep learning-based approaches, especially diffusion-based methods, which are already well-established. There is a plethora of scalable and sample-efficient work in this area, such as [3, 4]. These methods seem like a more natural choice than using deep learning embeddings as features, as done in this work. Even with embeddings, the GP has no prior data on the objective function, meaning BO must start without a pretrained dataset. Alternatively, why not reduce the dimensionality of the embedding features to make them more tractable for the naïve GP-UCB? This could be achieved by adding just one additional layer to the transformer. In doing so, standard GP-UCB would likely perform just fine, as popular latent BO methods do. Since this paper does not compare its method against these popular or simpler alternatives, I cannot properly evaluate whether it represents a good heuristic. What I gather from this work is that it performs better than outdated methods on the limited tasks provided, but that alone does not demonstrate its superiority or practicality compared to modern approaches.\n- **Limited Novelty**\nThe equation on line 295 appears nearly identical to the existing optimistic game-theoretic approach presented in [5], but it is not cited. Additionally, the idea of treating each discrete variable as a player is not particularly novel. For instance, Shapley value GP [6] and additive kernels [7] can be seen as variants of this concept (although they treat dimensions as players). While the specific combination of game theory, combinatorial optimization, and Bayesian optimization may be novel, it ultimately feels like a straightforward combination of existing approaches.\n- **Missing Limitation**\nI understand that this method leverages the problem structure where each dimension contains the same number of discrete variables, $n$. However, in typical combinatorial Bayesian Optimization, the goal is hyperparameter optimization, where each dimension may have a different number of categorical values, and the space can sometimes be a mixture of continuous and discrete variables. This method is not applicable to such cases. In mixed-variable scenarios, the existence of a Nash equilibrium cannot be guaranteed.\n\n- **Citation**\n- [1] Daulton, Samuel, et al. \"Bayesian optimization over discrete and mixed spaces via probabilistic reparameterization.\" NeurIPS 2022\n- [2] Papenmeier, Leonard, et al., \"Bounce: reliable high-dimensional Bayesian optimization for combinatorial and mixed spaces.\" NeurIPS 2023\n- [3] Gruver, Nate, et al. \"Protein design with guided discrete diffusion.\" NeurIPS 2023\n- [4] Campbell, Andrew, et al. \"Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design.\" arXiv preprint arXiv:2402.04997 (2024).\n- [5] Han, Minbiao, et al., \"No-Regret Learning of Nash Equilibrium for Black-Box Games via Gaussian Processes.\" UAI 2024.\n- [6] Chau, Siu Lun, et al., \"Explaining the uncertain: Stochastic shapley values for gaussian process models.\" NeurIPS 2023\n- [7] Kandasamy, Kirthevasan, et al., \"High dimensional Bayesian optimisation and bandits via additive models.\" ICML 2015."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024optimistic,\ntitle={Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xiyzCfXTS6},\nnote={under review}\n}" }, "abstract": { "value": "Bayesian optimization (BO) is a powerful framework to optimize black-box expensive-to-evaluate functions via sequential interactions. In several important problems (e.g. drug discovery, circuit design, neural architecture search, etc.), though, such functions are defined over large $\\textit{combinatorial and unstructured}$ spaces. This makes existing BO algorithms not feasible due to the intractable maximization of the acquisition function over these domains. To address this issue, we propose $\\textbf{GameOpt}$, a novel game-theoretical approach to combinatorial BO. $\\textbf{GameOpt}$ establishes a cooperative game between the different optimization variables, and selects points that are game $\\textit{equilibria}$ of an upper confidence bound acquisition function. These are stable configurations from which no variable has an incentive to deviate$-$ analog to local optima in continuous domains. Crucially, this allows us to efficiently break down the complexity of the combinatorial domain into individual decision sets, making $\\textbf{GameOpt}$ scalable to large combinatorial spaces. We demonstrate the application of $\\textbf{GameOpt}$ to the challenging $\\textit{protein design}$ problem and validate its performance on four real-world protein datasets. Each protein can take up to $20^{X}$ possible configurations, where $X$ is the length of a protein, making standard BO methods infeasible. Instead, our approach iteratively selects informative protein configurations and very quickly discovers highly active protein variants compared to other baselines." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Combinatorial Bayesian Optimization", "Game Theory", "Gaussian Processes", "Protein Design" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3ae89ed0db94e1278d950a381ce4c10078e49a6b.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Optimistic Games for Combinatorial Bayesian Optimization with Application to Protein Design" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xizpnYNvQq
Revisiting In-context Learning Inference Circuit in Large Language Models
main
Active
In-context Learning; Induction Circuit; Mechanistic Interpretability
interpretability and explainable AI
6;6;6;8
3;3;3;3
3;3;3;3
3;2;2;3
4;2;2;4
6.5
3
3
2.5
3
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the Weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Attempts to explain the inner workings of ICL, based on reasonable assumptions and investigative tools.\n- The findings align with previous work, encouraging readers to accept the claims presented in the paper.\n- Visualized results help readers quickly grasp the core concepts and findings of the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to explain the mechanisms behind in-context learning (ICL) using the inference circuit framework.\n\nAccording to the authors, the ICL process consists of three internal steps:\n1. Summarize: Large language models (LLMs) encode each demonstration within its corresponding forerunner token, $s_i$ .\n2. Semantics Merge: The semantics of each demonstration and its label are combined into the representation of the label $y_i$ .\n3. Feature Retrieval and Copy: LLMs rely on the accumulated labels $y_{1:k}$ to respond accurately to the query $s_q$ , yielding the most appropriate answer.\n\nEach step is empirically validated using methods such as kernel alignment and embedding comparisons. The authors also seek to align their findings with those of prior research, reinforcing the credibility of the arguments presented in this work." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While the proposed framework is logical and reasonable, it remains challenging to argue definitively that the core mechanism of ICL follows the assumptions presented in the paper. As noted in Section 5.2, there are exceptions that do not align well with the proposed framework, raising concerns that the explanations may be superficial and fail to capture the essence of ICL. This is understandable, as fully explaining the inner workings of neural networks is inherently difficult, if not nearly impossible.\n- I am somewhat unclear about the core novelty of this paper. As I understand it, the primary contribution seems to be the attempt to apply the existing inference circuit framework to the ICL of specific LLMs, including LLaMA 3. In Section 2.1, I did not find explanations that clarify why the procedure conducted in this paper is particularly innovative, compelling, or novel. More comprehensive comparisons with prior work employing induction or inference circuits to illustrate the inner workings of ICL would be helpful to underscore the merits and uniqueness of this study.\n- In Section 3.1, sentence embeddings generated by an external encoder (BGE M3) are compared to hidden representations computed by an LLM. 
Since these two representations come from different models, without any modification or fine-tuning, there is a risk that their vector spaces are not aligned or compatible. This raises concerns about whether this experiment is sufficiently reasonable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The authors proposed to use the mutual nearest-neighbor kernel alignment of the intermediate representations of LLMs and sentence embeddings produced by another pre-training model to assess the quality of these representations. This method is novel.\n\n2. Extensive analysis has been performed on all three steps of the proposed framework. Possible explanations have also been provided for many phenomena.\n\n3. The experiments are performed with real-world LLMs and datasets, which makes the insights more likely to be useful in practice.\n\n4. The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a three-step inference circuit to capture the in-context learning (ICL) process in large language models (LLMs):\n\n1. Summarize: Each input (both demonstration and query) is encoded into linear representations in the model's hidden states.\n\n2. Semantics Merge: The encoded representation of each demonstration is merged with its label, creating a joint representation for the label and demonstration.\n\n3. Feature Retrieval and Copy: The model retrieves and copies the label representation most similar to the query's representation, using this merged representation to predict the query's label.\n\nThis circuit explains various ICL phenomena, such as position bias, robustness to noisy labels, and demonstration saturation. Ablation studies show that removing steps in this process significantly reduces performance, supporting the dominance of this circuit in ICL. The paper also identifies some bypass mechanisms that operate in parallel, assisting the inference circuit through residual connections." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The majority of the analysis is based on associations without verifying their strength and whether those effects are causal. For example, Figure 2 right does not look significant enough for me. The peaks highlighted in Figure 5 also look pretty noisy to me.\n\n2. The causal evidence that the authors provided in the ablation study only shows the effect of deleting the hypothesized important components in ICL. What if unimportant components are deleted? Would they have a similar effect? 
Only if the unimportant components have a significantly weaker effect on ICL performance, can we draw a causal conclusion that the proposed three-step process dominates ICL." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N.A." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The findings in this paper are clearly explained. Experimental results and visualizations enhance readability and help the audience follow the study's progression easily.\n\n2. This work is well-connected to existing ICL research, with discussions that compare its findings to prior studies on ICL explainability and demonstration selection.\n\n3. The study uses multiple LLMs, strengthening the generalizability of the findings across different model architectures.\n\n4. The insights provided are thought-provoking and have potential practical implications for ICL applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the mechanisms within large language models (LLMs) that enable in-context learning (ICL) tasks, breaking down the process into three distinct stages: summarization, semantic merging, and feature retrieval/copying. The study employs a variety of experiments across multiple LLMs to validate its findings. Overall, this paper presents valuable insights that can contribute to the field of LLM research, particularly within the ICL community." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. [Section 3.1] The authors used mutual nearest-neighbor kernel alignment to evaluate LLMs' summarization abilities. However, the term “summarize” lacks clarity. Does it refer to encoding capabilities similar to those in BGE?\n\n2. [Section 3.1] Additionally, the kernel alignment metric may not be sufficiently robust, as alignment scores in Figure 2 range only from 0.25 to 0.35, which is not significant enough. Consequently, the finding on “summarization” may hold only to a limited extent.\n\n3. [Section 4.1 – Copying from Text Feature to Label Token] It is unclear whether the copying mechanism is applied solely to label tokens or if it extends to other tokens within the input. Using results from other tokens as a baseline could provide a more nuanced understanding of the copying process.\n\n4. [Figure 5, Right] After layer 40, the classification accuracy drops significantly. The authors did not investigate potential reasons for this decline. Could it be due to the gradual degradation of copied information?\n\n5. The experimental setup in Section 5.1 is insufficiently detailed. For instance, how many attention heads are disconnected at each layer? 
Additionally, the experiments lack certain baselines, such as randomly disconnecting some attention heads to observe the impact on model performance.\n\nDespite these questions and weaknesses, I believe this paper still offers meaningful insights." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The three-stage framework proposed in the paper is quite interesting. The questions I have raised are mentioned in the weaknesses section. Here, I would like to know what inspired you to propose this framework. Each part of the framework consists of very specific ideas—were they derived from repeated trial and error, or were they inspired by something else? Alternatively, are they improvements on a significant prior work?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Originality:** To my knowledge, the three-stage circuit proposed by the authors is a novel contribution.\n- **Quality:** The hypothesis put forward is reasonable, and the experiments are thorough with a well-crafted methodology.\n- **Clarity:** The arguments and evidence presented in the paper are clear, and the experimental descriptions are appropriately detailed.\n- **Significance:** Currently, ICL is one of the most important applications in the LLM field, and understanding the mechanisms behind ICL will greatly aid in enhancing its performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a three-stage ICL circuit hypothesis and provides thorough empirical examinations of the existence and significance of these stages. Within this circuit framework, many phenomena are explained, such as how Forerunner Tokens encode input text representations and the bias in input text encoding towards position. These findings present intriguing insights." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The three-stage ICL framework appears to have implicit applicability conditions, which I believe should be clarified.\n\nFor example, in Fig. 1 on page 2, a few-shot scenario with $ k=2 $ is presented, which indeed fits the three-stage ICL circuit framework. However, in a zero-shot scenario ($ k=0 $), step 1 may still exist, but steps 2 and 3 would not be applicable. In a few-shot scenario with $ k=1 $, steps 1 and 2 might still apply, but step 3 cannot exist.\n\nTherefore, the framework proposed in this paper should be limited to discussions of scenarios where $ k \\geq 2 $. A related question arises: if the focus is restricted to this scenario, what potential issues might emerge?\n\nFurthermore, if we condition on $ k \\geq C $ (where $ C $ is a fixed value), could this value vary depending on the problem type? 
For instance, in tasks like SST-2 and SST-5, which have different label set sizes, might the value of $ C $ differ across these scenarios?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We decompose In-context Learning into 3 operations and measure their operating dynamics to capture many inference phenomena of ICL in Large Language Models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024revisiting,\ntitle={Revisiting In-context Learning Inference Circuit in Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xizpnYNvQq},\nnote={under review}\n}" }, "abstract": { "value": "In-context Learning (ICL) is an emerging few-shot learning paradigm on Language Models (LMs) with inner mechanisms unexplored. There are existing works describing the inner processing of ICL, but they struggle to capture all the inference phenomena in large language models. Therefore, this paper proposes a comprehensive circuit to model the inference dynamics and tries to explain the observed phenomena of ICL. In detail, we divide ICL inference into 3 major operations: (1) Summarize: LMs encode every input text (demonstrations and queries) into linear representations in the hidden states with sufficient information to solve ICL tasks. (2) Semantics Merge: LMs merge the encoded representations of demonstrations with their corresponding label tokens to produce joint representations of labels and demonstrations. (3) Feature Retrieval and Copy: LMs search the joint representations similar to the query representation on a task subspace, and copy the searched representations into the query. Then, language model heads capture these copied label representations to a certain extent and decode them into predicted labels. The proposed inference circuit successfully captures many phenomena observed during the ICL process, making it a comprehensive and practical explanation of the ICL inference process. Moreover, ablation analysis by disabling the proposed steps seriously damages the ICL performance, suggesting the proposed inference circuit is a dominating mechanism. Additionally, we confirm and list some bypass mechanisms that solve ICL tasks in parallel with the proposed circuit." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "In-context Learning; Induction Circuit; Mechanistic Interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3f39e829a743291c790f552ca58b45508f9ace60.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers.
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Revisiting In-context Learning Inference Circuit in Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xjKz6IxgCX
SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations
main
Active
Video Guardrail Model;Safe Foundation Models;Efficient LLMs Inference;LLM Safety;Multimodal Foundation Models
alignment, fairness, safety, privacy, and societal considerations
3;6;6;6
3;3;3;3
1;3;3;3
2;3;3;3
2;3;2;4
5.25
3
2.5
2.75
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The proposed model for video guardrailing, along with the benchmark for evaluating safety policies in existing MLLMs, addresses sensitive content types. Given the nature of the videos and policies (safety categories) evaluated, it is essential to examine the work for potential biases, privacy concerns, potential harms, and legal compliance." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns", "Yes, Privacy, security and safety", "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)", "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- SFT Baseline: Could the authors provide additional context for the \"SFT baseline\" mentioned in Figure 5?\n- Inference Cost: What accounts for the increase in inference cost with additional few-shot examples, as illustrated in Figure 5?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper addresses a critical topic by proposing guardrails for video MLLMs based on defined safety policies, which is timely and important with the rise of MLLMs.\n- It introduces a baseline model built upon the InternVL2-8B backbone and leverages two plug-in modules to (1) improve latency during training and inference, and (2) reduce positional biases related to the policy order.\n- The benchmark provides a comprehensive evaluation of existing MLLMs on video guardrailing tasks, demonstrating the model’s effectiveness across six safety policy categories, covering 30 subtopics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an MLLM-based video guardrail model that takes into account safety policies to provide a multi-label video content output including explanation, considering both the safety policies and the video content. The proposed model comprises two plug-and-play modules to improve latency of the guard rail model and mitigate positional biases by breaking down the safety guidelines. This work also introduces a benchmark for video guardrailing using multi-agent consensus and comparison across existing MLLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Details about the training and testing splits within the benchmark are insufficient, leaving questions about data partitioning.\n- The authors should clarify if any videos were discarded during dataset curation due to multi-agent discussion pipelines not reaching a consensus or human verification disagreements on final explanations. This clarification could shed light on the multi-agent approach's effectiveness in generating explanations that align with human perspectives, especially given video content's subjective nature." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The example in the upper left corner of Figure 3 may have 2 issues.\n\n1. This type of data likely comes from real copyrighted videos, which may involve copyright infringement.\n2. The authors did not blur or mask faces in their examples, which could raise privacy concerns." }, "flag_for_ethics_review": { "value": [ "Yes, Legal compliance (e.g., GDPR, copyright, terms of use)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could the authors design corresponding experiments and proofs to demonstrate that the mechanism producing the algorithm's effects aligns with their claims that the PEPE algorithm can \"allow each policy to be encoded independently and in parallel\" and that \"equivalent positional embedding ensures that different policies are treated without bias\"?\n2. Could the authors provide some quantitative analysis of the annotation quality? Can this pipeline approach human-level annotation quality? Compared to annotation by a single LLM/VLM, what advantages does incorporating Multi-agent Discussion bring?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The authors contribute a very large-scale benchmark for video security\n2. The authors propose the PEPE algorithm, which can mitigate positional bias in the input\n3. The authors propose the PAP algorithm, which maintains high recognition accuracy while reducing inference costs" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "SAFEWATCH is a new video moderation system that efficiently identifies harmful content and explains its decisions. It features two main innovations: PEPE (for parallel policy encoding) and PAP (for selective video analysis), both designed to make the system faster and more accurate. The researchers also developed SAFEWATCH-BENCH, a large dataset containing 2 million videos across six categories of harmful content, which they used to train and test their system." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The working mechanism of the PEPE algorithm lacks detailed theoretical explanation or experimental validation. The authors conduct ablation experiments to prove the effectiveness of the PEPE algorithm, but they don’t provide sufficient proof of its underlying principles. In lines 293-297, the authors claim that the PEPE algorithm can provide independent representations for each policy, which can alleviate the position bias problem in MLLM mentioned in lines 266-269. **However, regarding this claim, there are neither experimental designs nor mathematical proofs to support it. I have doubts about whether the mechanism behind the algorithm truly aligns with the authors' claims.**\n \n \n2. 
There is a lack of explanation regarding the effectiveness of the multi-agent propose-discuss pipeline mentioned in line 105. The authors mention in lines 105-106 that they use a novel pipeline for data annotation, but there is limited discussion about this pipeline. In the pipeline-related content, the authors do not cite any references, and based on my knowledge of related work, this pipeline has not been used in any previous work, making this the first work to employ this pipeline. Given this, **I am uncertain whether this pipeline can provide sufficiently high-quality annotation results**, and the authors have not provided any quality analysis of the annotation results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The authors claim data contributions, but there is unsafe content in SafeWatch-Bench." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns", "Yes, Privacy, security and safety", "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Could you provide some evaluation results for Safety-aware Event Sampling? Currently, its effectiveness or limitations are unclear.\n2. Is SafeWatch-Bench truly a video benchmark? Specifically, does it truly require reasoning across multiple frames for a model to achieve high performance?\n3. It appears that humans do not directly provide annotations for SafeWatch-Bench; instead, annotations are model-generated, with human reviewers checking if re-annotation is necessary. In the caption for Figure 2, you mention that the 1K test set has high-quality annotations. Were these test set annotations created directly by humans, or were they produced via the multi-agent propose-discuss-consensus pipeline?\n4. Could you describe how the preference pairs were curated for the preference post-tuning stage? Additionally, how were the challenging benign examples—those easily identified by humans but likely to mislead guardrail models—selected? A detailed explanation of these curation processes would be helpful.\n5. What is the quality of the synthetic videos generated by the GenAI models? Are they accurately aligned with the unsafe prompts? Was any video filtering applied to filter out misaligned videos?\n6. Could you clarify your model training data recipe? Are the unsafe videos used in stage-1 training and the guardrail tasks in stage-2 training drawn from the instruction-tuning dataset within SafeWatch-Bench? Additionally, how many samples are used for each stage, task, and dataset?\n7. Which layers are tuned in the preference post-tuning stage?\n8. In Table 3 of Appendix A.1, there is a column named \"Temporal location\". What does it mean?\n9. What is the SFT Baseline mentioned in Figure 5 and Table 5?\n10. It was claimed that prior datasets lack detailed descriptions of the videos, suggesting that SafeWatch-Bench offers a detailed description for each video. Is that correct?\n11. There is unsafe content in SafeWatch-Bench. 
Will SafeWatch and SafeWatch-Bench be released? If so, how do you plan to ensure their proper use?\n\nMinor comments:\nLine 749 has placeholder text." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Their model, SafeWatch, outperforms SOTA video guardrails on SafeWatch-Bench by 19.6% and on existing benchmarks by 15.4%, while reducing inference costs by an average of 25%. SafeWatch also demonstrates strong policy-following abilities, outperforming baselines by 20% in zero-shot adaptability to new policies. Additionally, both LLM-as-a-judge and human evaluators confirm the high quality of the explanations provided by SafeWatch.\n2. The design choices are well-founded, following best practices for efficient MLLM construction.\n3. This is an important area of study, with meaningful contributions (if these contributions are reproducible)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors of the paper propose a multimodal large language model (MLLM) called SafeWatch, designed to follow customized safety policies and provide multi-label video guardrail categorical outputs with answer explanations in a zero-shot manner. They also introduce SafeWatch-Bench, a large-scale video guardrail benchmark containing over 2 million videos spanning 6 safety broad categories and covering over 30 finer-grained risk categories to ensure comprehensive coverage of potential safety scenarios.\n\nThe technical contributions include:\n- Model Design: The authors introduce two key plug-and-play modules: Parallel Equivalent Policy Encoding (PEPE) and Policy-Aware Adaptive Pruning (PAP). \n - PEPE mitigates high latency from extensive input contexts and policy positional bias by dividing lengthy safety guidelines into independent chunks encoded in parallel with equal importance. \n - PAP, on the other hand, reduces latency by selecting the most relevant visual tokens for each policy while discarding those with low relevance.\n\n- Data: Each instance in SafeWatch-Bench is annotated with multi-label guardrail categories and detailed explanations. The dataset includes 2 million videos—both real-world and generative from various SOTA models—comprising an instruction-tuning set and a test set of 1K hand-selected, high-quality annotated videos across subcategories.\n\n- Training strategy: The authors fine-tune InternVL2-8B with their modeling changes on this new data via three stages, i.e., multi-task training, adaptive-pruning training, and preference post-tuning. \n - Stage 1: Only PEPE is trained during this stage on a large corpus of unsafe videos, as well as traditional VQA and captioning tasks on normal videos. \n - Stage 2: Both PEPE and PAP are fine-tuned on guardrail tasks. \n - Stage 3: Preference pairs are curated to enable the preference post-tuning." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing Evaluation: The evaluation of the Safety-aware Event Sampling step is absent. This step should be crucial for model performance, as the authors used TransnetV2 to segment videos into safety-aware events, sampling a single frame per event for further MLLM processing.\n2. Dataset Clarity: The dataset’s specifics and its exact use in model training remain unclear. 
For instance, there is no information on the average video length or the typical length of an explanation in SafeWatch-Bench. Additionally, the quality of the SafeWatch-Bench test set is not fully addressed, which is particularly important for an evaluation dataset.\n3. Reproducibility Concerns: Reproducibility in data collection and model training is questionable. For instance, Section 4.2 on \"multi-agent consensus video annotation\" provides a basic idea of the processes but lacks sufficient detail for replication (e.g., missing the prompts used, configurations such as the number of frames used for each model, etc.). Additional issues are noted in the \"Questions\" section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The authors mentioned that Human Verification is used for curating the 2M video benchmark. Can the authors provide a detailed description of the Human Verification procedure?\n\n2. More procedural details should be included in the appendix." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed dataset is novel and the data curation procedure is well-organized.\n2. The proposed Parallel Equivalent Policy Encoding and Policy-Aware Adaptive Pruning can effectively encode the Safety Policy Guidelines and reduce the redundancy of video tokens.\n3. The results are good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a safety-aware video understanding benchmark, including 2M human-verified videos. The unsafe scenarios are separated into 6 classes. The authors also design a pipeline for automatic data generation. For the video understanding model, the authors propose the Parallel Equivalent Policy Encoding and Policy-Aware Adaptive Pruning to encode the Safety Policy Guidelines and reduce the redundancy. The results of the trained model are good compared to both closed- and open-source models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Dataset Construction: \n (1) The release and separation of the dataset is a concern. The authors can only provide links to publicly available sources and annotations, but such links commonly break. I understand that this is something unavoidable, but it will undoubtedly reduce the frequency of use and impact of this dataset.\n (2) Human verification of 2M videos is a huge effort, yet the authors don't provide any details of the procedure.\n\n2. Model training:\n (1) Some of the training procedure is ambiguous; there should be more details about the Preference Post-tuning procedure."
}, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an efficient MLLM-based video guardrail model and a large-scale instruction-tuning benchmark dataset to achieve customized safety policy and transparent explanations." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024safewatch,\ntitle={SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xjKz6IxgCX},\nnote={under review}\n}" }, "abstract": { "value": "With the wide adoption of generative AI and rapid growth of high-quality video generation, video guardrails have become more crucial than ever to ensure safety and security across platforms. Current video guardrails, however, are either overly simplistic, relying on pure classification models trained on simple policies with limited number of unsafe categories, which lack detailed explanations, or prompting multimodal large language models (MLLMs) with long safety guidelines, resulting in inefficient and impractical guardrails for real-world content. To bridge this gap, we propose SAFEWATCH, an efficient MLLM-based video guardrail model designed to follow customized safety policies and provide multi-label video guardrail outputs with content-specific explanations in a zero-shot manner. In particular, unlike traditional guardrails that encode entire policies autoregressive, causing inefficiency and bias, SAFEWATCH uniquely encodes each policy trunk in parallel and eliminates their position bias such that all policies are attended simultaneously with equal importance. In addition, to improve efficiency and accuracy, SafeWatch incorporates a policy-aware visual token pruning algorithm that adaptively selects the most relevant video tokens for each policy, discarding noisy or irrelevant information. This allows for more focused, policy-compliant guardrail with significantly reduced computational overhead. Considering the limitations of existing video guardrail benchmarks, we propose SafeWatch-Bench, a large-scale video guardrail benchmark comprising over 2M videos spanning six safety categories which covers over 30 tasks to ensure a comprehensive coverage of all potential safety scenarios. We have conducted extensive experiments, showing that SafeWatch outperforms all SOTA video guardrails on SafeWatch-Bench by 19.6% and 15.4% on existing benchmarks, while reducing inference cost by 25% on average. SafeWatch also demonstrates strong policy-following abilities and outperforms baselines by 20% in zero-shot adaptability to new policies. Additionally, both LLM-as-a-judge and human evaluators\nconfirm the high quality of the explanations provided by SafeWatch." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Video Guardrail Model", "Safe Foundation Models", "Efficient LLMs Inference", "LLM Safety", "Multimodal Foundation Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a931c7c7320b297943eff52b2db77c595912603b.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SafeWatch: An Efficient Safety-Policy Following Video Guardrail Model with Transparent Explanations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xjornbs7aT
Action Mapping for Reinforcement Learning in Continuous Environments with Constraints
main
Active
Constrained MDPs;continuous action space;deep reinforcement learning
reinforcement learning
3;3;5;6
4;4;3;4
2;1;2;3
1;2;3;3
2;1;2;3
4.25
3.75
2
2.25
2
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The paper's theme is closely related to Safe RL, as mentioned. However, many Safe RL algorithms are not included as baselines for comparison. What is the rationale behind this omission?\n2. While PPO+Replacement has slower learning efficiency, it strictly ensures the satisfaction of constraints. This property is valuable in some online environments, but AM lacks this capability. Are there any proposed methods to address this limitation?\n3. In Figure 4(c), the performance of AM-SAC suddenly increases at 7500 steps. How can this phenomenon be explained?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The motivation for this work is clear: it addresses the SCMDP problem by utilizing a model to learn a feasible action space, effectively converting SCMDP into an MDP and improving the algorithm’s sample efficiency.\n2. The paper provides a step-by-step explanation of related concepts, making it very accessible to readers who may not be familiar with the field.\n3. The description of the action and state spaces for the tasks in the experiments is clear, and the importance of the AM algorithm is effectively illustrated through visualizations at the end of the experimental section." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a novel approach known as action mapping (AM) within the context of deep reinforcement learning (DRL), showcasing its effectiveness, particularly when utilizing approximate feasibility models. Their results suggest that the integration of approximate model knowledge can improve training performance and enable agents to represent multi-modal action distributions, thus enhancing exploration strategies. By applying AM to both PPO and SAC, which represent on-policy and off-policy RL algorithms respectively, the authors provide comparative experimental results. These findings demonstrate that the AM method can transform state-wise constrained Markov Decision Processes (SCMDP) into Markov Decision Processes (MDP), thereby enhancing the sample efficiency of the original algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The absence of accompanying code makes it difficult to replicate the experimental results.\n2. The experimental section appears somewhat limited, as it only tests the method in two environments. This reduces the persuasive power and credibility of the results.\n3. The action mapping approach is not end-to-end, requiring pre-training with trajectory data before its application. This introduces additional costs, which are not adequately discussed in the paper." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The most significant contribution is how action mapping allows agents to express multi-modal action distributions through a simple Gaussian in latent space, improving exploration. Ability to plan with approximate feasibility models is a notable advantage since perfect models are rarely available in practical applications." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the problem of how to efficiently train agents in environments with constraints (like a robotic arm avoiding obstacles or an aircraft maintaining non-holonomic constraints). Traditional DRL approaches struggle with poor sample efficiency and slow convergence in such constrained environments. The authors propose \"action mapping,\" which decouples the learning process into two parts: first training a feasibility policy that learns to generate all feasible actions for any state, then training an objective policy to select optimal actions from this feasible set, effectively transforming a state-wise constrained Markov Decision Process (SCMDP) into an unconstrained MDP. They validate their approach through two experiments - a robotic arm end-effector pose task with perfect feasibility models and a path planning problem with approximate feasibility models - demonstrating superior performance compared to common approaches like action replacement, resampling, and projection." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper omits constraint violation plots for the path planning task, making it impossible to verify claims about performance with approximate feasibility models.\n \n- The feasibility model is a critical component of the proposed architecture, yet the paper lacks essential analysis and ablations of this component. Key questions remain unanswered: How is state space sampling performed during training? What metrics determine sufficient training of the feasibility model? How does the quality/approximation level of the feasibility model impact overall performance? 
Without these analyses, it's difficult to understand the method's robustness and its applicability to scenarios where perfect or near-perfect feasibility models aren't available.\n\nMinor Comments:\n\n- Typo on line 406, “actiosn”" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. **[About Weakness 1]** Isn’t the required setup for the proposed method too strict? Other constrained RL methods estimate the cost function without prior knowledge, obtaining cost information similarly to rewards. In such cases, would the proposed action mapping approach still be applicable?\n\n2. **[About Weakness 2]** Could you provide experimental results in a wider variety of environments?\n\n3. **[About Weakness 3]** Could you also show performance comparisons with other constrained RL methods?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The method for training the feasibility policy is novel, allowing for fewer constraint violations and higher returns compared to other methods.\n2. Experiments were conducted in environments requiring constraints, such as a robotic arm task and a spline-based path planning.\n3. The approach is straightforward and can be combined with any RL algorithm." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an action mapping method that distinguishes between feasibility and objective policies during training. By pretraining the feasibility policy first and then training the objective policy, the approach enables more efficient learning within a reduced action set. Experimental results demonstrate that the proposed method results in fewer constraint violations and achieves higher returns compared to previous action replacement, resampling, and projection methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The assumption that the feasible policy can be pretrained seems overly strict. Pretraining requires prior knowledge of the cost function $C^\\tau(s;\\pi)$ and the feasibility model $G(s,a)$, which may be difficult to assume in general.\n2. The experimental environments appear limited. It would be beneficial to include comparisons in environments like Safety Gym or other constrained RL environments.\n3. There is a lack of baseline algorithms. Currently, the comparisons are limited to variants of action mapping, such as action resampling and projection. Direct comparisons with a wider range of methods, including Lagrangian approaches, would strengthen the evaluation. 
Even if these methods are primarily designed for standard CMDPs rather than SCMDPs, adjusting the constraint thresholds to be stricter would allow for a fair comparison under the SCMDP constraints used in the original experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. For the training in Section 4.1, does it require a ground-truth $g(s,a)$?\n2. What is the bound of the output of $\\pi_f$ and $\\pi_o$?\n3. What are the implementation details of the proposed method, e.g., network structure, hyperparameters?\n4. For Figure 4, why is SAC+Replacement not included in Figure 4c? And where is the constraint violation plot for SAC?\n5. Throughout the paper, it seems that the definitions in Equation 5-7 are not necessary, as in Section 4 and the experiments only the function $g$ is assumed. Can the authors provide more explanation on this point?\n6. How many seeds/trials are used in Figure 4?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The target problem to solve in this work is of great significance to many real-world applications.\n- The related works are introduced and discussed satisfactorily." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a strategy called action mapping for RL in continuous environments with state-wise constraints. The idea is to learn a latent action space and an action mapping model, with which the policy samples a latent action and the latent action is further mapped to a feasible action. The proposed method is evaluated in a robotic arm end-effector pose positioning task and a path planning environment, showing better performance than several existing methods in terms of higher returns and lower constraint violations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Although the authors mentioned the reference [Theile et al., 2024], I found that the content in Section 4 overlaps substantially with the content in Section 3 and Section 4 of [Theile et al., 2024]. For example, Equation 16 in this paper is almost the same as the JS loss in Table 2 of [Theile et al., 2024]. Therefore, the novelty and contribution of this paper are questionable.\n- The whole training process is not clear. Adding pseudocode of the proposed algorithm would help. In particular, the training of the feasibility model is not described clearly enough. In addition, many technical details are missing. Please see my questions below.\n- The idea of action mapping is closely related to the research on action representation learning [1-4]. The illustration in Figure 1 is very similar to the concept presented by Figure 1 in [1].
These related works should be included in the related work section for a detailed discussion.\n\n### Minors\n\n- The symbol $J$ is used inconsistently between Equations 1-2 and Equations 3-4.\n- The legends in Figure 4 are too small.\n\n---\n### Reference\n\n[1] Yash Chandak, Georgios Theocharous, James E. Kostas, Scott M. Jordan, Philip S. Thomas. Learning Action Representations for Reinforcement Learning. ICML 2019: 941-950\n\n[2] Boyan Li, Hongyao Tang, Yan Zheng, Jianye Hao, Pengyi Li, Zhen Wang, Zhaopeng Meng, Li Wang. HyAR: Addressing Discrete-Continuous Action Reinforcement Learning via Hybrid Action Representation. ICLR 2022\n\n[3] Pengjie Gu, Mengchen Zhao, Chen Chen, Dong Li, Jianye Hao, Bo An. Learning Pseudometric-based Action Representations for Offline Reinforcement Learning. ICML 2022: 7902-7918\n\n[4] William F. Whitney, Rajat Agarwal, Kyunghyun Cho, Abhinav Gupta. Dynamics-Aware Embeddings. ICLR 2020" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose action mapping, a method that integrates feasibility models into deep reinforcement learning, significantly improving training efficiency in constrained environments with continuous action spaces." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024action,\ntitle={Action Mapping for Reinforcement Learning in Continuous Environments with Constraints},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xjornbs7aT},\nnote={under review}\n}" }, "abstract": { "value": "Deep reinforcement learning (DRL) has had success across various domains, but applying it to environments with constraints remains challenging due to poor sample efficiency and slow convergence. Recent literature has explored incorporating model knowledge to mitigate these problems, particularly through the use of models that assess the feasibility of proposed actions. However, integrating feasibility models efficiently into DRL pipelines in environments with continuous action spaces is non-trivial. We propose a novel strategy, termed action mapping, that leverages feasibility models to streamline the learning process. By decoupling the learning of feasible actions from policy optimization, action mapping allows DRL agents to focus on selecting the optimal action from a reduced feasible action set. We demonstrate through experiments that action mapping significantly improves training performance in constrained environments with continuous action spaces, especially with imperfect feasibility models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Constrained MDPs", "continuous action space", "deep reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/66001385e4c21c2dc795d322e5db8aa8747b0762.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Action Mapping for Reinforcement Learning in Continuous Environments with Constraints" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xkR3bcswuC
Generative Models: What Do They Know? Do They Know Things? Let's Find Out!
main
Active
Visual knowledge;Generative models;Intrinsic Images
interpretability and explainable AI
5;5;6;6
4;4;3;4
2;2;3;3
3;2;3;2
3;3;4;3
5.5
3.75
2.5
2.5
3.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Their findings reveal that a minimal amount of labeled data, sometimes as few as 250 images, suffices for the effective recovery of these intrinsic images.\n2. The research claims a positive correlation between the generative quality of a model and the accuracy of its recovered intrinsic properties, which is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on investigating the intrinsic knowledge encoded within various generative models, such as GANs, autoregressive models, and diffusion models. The study aims to uncover whether these models inherently capture fundamental scene properties, including Depth, Surface Normals, Albedo, and Shading. Through the use of Low-Rank Adaptation (LoRA), a lightweight technique that introduces minimal learnable parameters, the authors propose a model-agnostic approach to recover these intrinsic features." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Table 1, why do different generative models exhibit varying abilities to capture scene intrinsics without altering the generator head? For instance, even within the same model, such as SG-XL, all scene intrinsics (e.g., Normal, Depth, Albedo, and Shading) are recoverable for the FFHQ dataset, while none of these intrinsics can be captured for ImageNet. Are there any hypotheses regarding which properties of the models or datasets might influence their differing abilities to capture intrinsic features?\n\n2. While the paper asserts that “enhancements in image generation quality correlate positively with intrinsic recovery capabilities”, this claim seems not convincing enough due to the following reasons:\n\na) Figure 2 attempts to validate the claim by showing a relationship between generated image quality (measured by FID) and recovery errors. However, it includes only three generative models (SG-XL, SGv2, and VQGAN), which represent GAN-based and autoregressive models but exclude diffusion models. This limited selection makes it challenging to empirically confirm the claim. It would be better to provide more generative models in Figure 2, including diffusion models as well. Moreover, discussing the technical reasons behind such correlations would strengthen the argument.\n\nb) The claim lacks rigor due to inconsistencies. For instance, Figure 2 suggests that SG-XL outperforms SGv2 and VQVAE in generative quality, yet Table 2 shows SG-XL occasionally underperforming them, such as in Shading (on FFHQ). Moreover, factors beyond the generative model itself, like the dataset, also impact performance. 
For example, SGv2 performs worse in Depth but better in Shading when switching from FFHQ to LSUN Bedroom. Please provide a more in-depth statistical analysis of the correlation between generative quality and intrinsic recovery capabilities across all models and datasets.\n\n3. One of the paper’s objectives is to demonstrate that “tiny new parameters and data are enough for intrinsic recovery”. However, as mentioned in this paper, several existing works (e.g., [1]) have already shown that parameter-efficient adaptation methods, like linear probes, can effectively extract intrinsic features such as depth. This work merely replaces linear probes with LoRA and demonstrates its effectiveness, which feels somewhat incremental. It would also be helpful to discuss this further -- for example, whether there are particular scenarios where LoRA outperforms linear probes, or whether LoRA provides any advantages beyond performance.\n\n[1] Yida Chen, Fernanda Viegas, and Martin Wattenberg. Beyond surface statistics: Scene representations in a latent diffusion model. 2023.\n\n4. The paper notes a lack of ground truth data for certain maps (like albedo), which raises the question of whether using a light physics simulator or tools like Unreal Engine could provide high-quality ground truth labels. This approach could also enable the creation of more complex scenes and lighting conditions for more rigorous model testing. I am curious about the feasibility of incorporating such simulators or tools into this work, and whether this approach could substantially enhance the reliability of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please address the comments in the weakness section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- [S1: Originality] Though the proposed method is very standard, the paper presents a generic approach and demonstrates that the approach recovers intrinsic information across diverse generative models, including GANs, autoregressive models, and diffusion models. \n\n- [S2: Clarity] The motivation, methodology, and results are clearly articulated, enabling a good understanding of the research contributions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the intrinsic information (e.g., normal, depth, albedo, and shading) encoded in various generative models, including GANs, autoregressive models, and diffusion models.
Key findings include (1) Intrinsic information is a byproduct of training generative image models; (2) Low-Rank Adaptation (LoRA) is a generic technique to study the intrinsic information encoded, better than other approaches such as linear probing and fine-tuning; (3) The quality of a generative model and accuracy of its recovered intrinsics (e.g., depth prediction accuracy) are positively correlated. To recover each intrinsic property, a structured prediction module (e.g., depth prediction) has been used to provide pseudo groundtruth during the LoRA optimization. Experiments have been conducted mainly on depth and normal prediction." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- [W1: Deeper understanding] While the paper establishes what intrinsic knowledge is encoded, it doesn't delve into how generative models utilize this knowledge during image generation. \n - [W1.1] For example, the reviewer would like to understand if intrinsic properties emerge with a half-trained generative model (or at different training epochs).\n\n- [W2] The reviewer finds that most intrinsic properties do not contain high-frequency details (e.g., sharp edges in the depth prediction). This could be a shortcoming of using LoRA fine-tuning in the design. Although multi-step diffusion inference has been added in Section 5, the question still remains whether a single-step approach is sufficient to recover high-fidelity intrinsic information.\n - [W2.1] How does multi-step inference apply to other generative models such as StyleGAN?\n - [W2.2] What’s the motivation for applying LoRA to the attention layers of a diffusion model?\n - [W2.3] The ablation studies on applying LoRA to different modules (Appendix B) are interesting. It seems to suggest that LoRA is not successful when applied to up or down blocks. What’s the insight behind the discovery?\n\n- [W3] The reviewer does not fully understand the claim that StyleGAN-XL trained on ImageNet is an exception (Line 365). For example, StyleGAN-XL achieved a much lower FID (Line 452) compared to other generative models, which seems to contradict the claim about StyleGAN-XL’s limited ability to generate realistic images (Line 366). Please clarify this point in the rebuttal.\n\n- [W4] Specific to the comparison between SD-1.X and SD-2.1, how much of the performance difference can be attributed to the improved encoder-decoder? Is it possible to recover the intrinsic properties by applying LoRA fine-tuning to the encoder-decoder alone?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Related to the last weakness, Section 4.4 identifies LoRA as the superior method for intrinsic image recovery, but the underlying reasons remain unclear. Could the authors provide insights into its advantages over alternative methods?\n- The authors demonstrate a positive correlation between image intrinsics and generation quality.
Could they elaborate on the nature of this relationship? Specifically, does better intrinsic understanding lead to improved generation quality, vice versa, or is there a bidirectional relationship between these aspects?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The work raises fundamental questions about the nature of internal representations in generative models and their relationship to generation quality, contributing to our understanding of these architectures.\n- The research demonstrates that world representations in powerful generative models are more concrete and accessible than previously assumed, establishing important connections to explainable AI.\n- The authors present compelling insights about the minimal data requirements for intrinsics recovery, suggesting that the underlying knowledge is already embedded in the model's parameters.\n- The methodology is straightforward yet effective, making the findings easily reproducible and applicable to various generative models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates whether current generative models possess a genuine understanding of the world they're generating, rather than just statistical pattern matching. The authors propose using the generation of image intrinsics (such as depth maps, normal maps, etc.) as a proxy for evaluating the models' physical understanding and explainability. They demonstrate that a simple adaptation method (LoRA) can effectively recover these intrinsic properties and conduct extensive analyses to support their hypothesis about models' world understanding." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper appears to have originated as a study specifically focused on diffusion models, with other generative architectures added later to broaden its scope. While Section 4's experiments and Section 5's generation quality refinements primarily use Stable Diffusion, validating these findings across different architectures would strengthen the paper's claim of developing a general approach.\n- The experiments in Section 4.2 focus solely on normal map recovery when determining optimal rank and dataset size. The authors should clarify whether these optimal settings generalize to other intrinsics or if each intrinsic property requires specific configurations.\n- The paper would benefit from a deeper analysis of why LoRA is particularly effective for intrinsic recovery. While Line 102 suggests that \"intrinsic information is distributed throughout the network,\" this insight isn't fully developed in Section 4.4 or elsewhere. Understanding this mechanism could also inform the development of future recovery methods.\n\nThe paper presents strong empirical findings, though deeper analysis is needed to enhance its impact on interpretability and explainable AI research." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "To AC: I can clearly understand the task, the proposed approach and the experiments in the paper. Overall, I find the paper supports the claims with properly designed experiments, but I cannot evaluate the importance and novelty of the paper at the moment since I don't follow literature on this intrinsic estimation topic. I would evaluate these two aspects later by referring to opinions from other reviewers. So please use this review with proper weight when make decisions." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-written and easy-to-follow\n\n2. The approach is straightforward and effective by using LoRA to adapt pretrained generative models for downstream tasks beyond image generation. The predicted intrinsic results look decent.\n\n3. By using LoRA instead of retraining or finetuning the whole model, the approach requires less data and parameters to perform the image intrinsic estimation task. As an experimental work, the paper mostly supports the claims therein with extensive experiments." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use pretrained generative models extended with trainable LoRA layers as image intrinsic predictors. The proposed approach aims to learn effective intrinsic extractors with as few LoRA parameters and training samples as possible. Extensive experiments are conducted to primarily show that (1) with less LoRA parameters and data samples than state-of-the-art approaches, extracting intrinsic images is still possible (2) it is the prior knowledge in pretrained generative models that helps extract intrinsic image, or in other words, generative models do encode useful intrinsic knowledge though it is not clear how such intrinsic knowledge are used during generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Fig 8 and Fig 9 show that there is a performance peak as the number of LoRA params or training samples vary. Why there is a peak? For the case of fixed number of training samples, with more LoRA params than Rank 8, the performance degrades. Is this due to underfitting that there is no enough data to train larger LoRA well enough? Similarly, for the case of fixed number of LoRA params, is this due to overfitting that more data overfits a relative small LoRA?\n\n2. A relevant question to 1. If you increase the number of LoRA and the number of training data at the same time avoiding overfitting or underfitting, can you outperform existing approaches eventually as shown in Table 3? Furthermore, the paper shows qualitative results for all four intrinsics Surface Normal, Depth, Albedo and Shading throughout the paper, but shows quantitative results on only Surface Normal, Depth in Table 3. Is there is any reason for this practice?\n\n3. The paper overclaims a bit about \"minimal\" requirements on parameter updates and data. Why do you say minimal and in what sense it is minimal? 
Though the approach is able to work with fewer params and data, the quantitative performance doesn't outperform existing approaches, as shown in Table 3.\n\n4. Typo in the Table 3 caption: \"intrinsicsacross\" -> \"intrinsics across\"" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Intrinsic images are encoded across generative models (GAN, Autoregressive and Diffusion). We can recover them using LoRA while using the same decoder head that generates the image." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024generative,\ntitle={Generative Models: What Do They Know? Do They Know Things? Let's Find Out!},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xkR3bcswuC},\nnote={under review}\n}" }, "abstract": { "value": "Generative models excel at mimicking real scenes, suggesting they might inherently encode important intrinsic scene properties. In this paper, we aim to explore the following key questions: (1) What intrinsic knowledge do generative models like Autoregressive models, GANs and Diffusion models encode? (2) Can we establish a general framework to recover intrinsic representations from these models, regardless of their architecture or model type? (3) How minimal can the required learnable parameters and labeled data be to successfully recover this knowledge? (4) Is there a direct link between the quality of a generative model and the accuracy of the recovered scene intrinsics?\n\nOur findings indicate that a small Low-Rank Adaptation (LoRA) can recover intrinsic images---depth, normals, albedo, and shading---across different generators (GAN, Autoregressive, and Diffusion) while using the same decoder head that generates the image. As LoRA is lightweight, we introduce very few learnable parameters (as few as 0.04% of Stable Diffusion model weights for a rank of 2), and we find that as few as 250 labeled images are enough to generate intrinsic images with these LoRA modules. Finally, we also show a positive correlation between the generative model's quality and the accuracy of the recovered intrinsics through control experiments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Visual knowledge", "Generative models", "Intrinsic Images" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/df84f7acd2b4bdab7db09523ee3d18d81698d448.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs.
To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/49ecba92fa656bcdcf4b388713cdfef7335ae689.zip" }, "title": { "value": "Generative Models: What Do They Know? Do They Know Things? Let's Find Out!" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xkgfLXZ4e0
Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)
main
Active
brain encoding;fMRI;visual processing;multimodal instruction-tuned models;language decoder;LLMs;MLLMs
applications to neuroscience & cognitive science
5;6;6;6
4;3;2;4
2;3;3;3
3;3;3;3
3;3;3;3
5.75
3.25
2.75
3
3
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* The paper focuses on the visual areas of the brain. How would the results look if the entire brain was used? A small discussion would suffice. \n* How many parameters are ViT-H and CLIP? Are they comparable?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* Novelty with instruction-tuned MLLMs: Overall, I haven’t seen too much work on brain encoding models with instruction-tuned MLLMs. This work is highly timely.\n* Well-designed and controlled experiment: The paper uses a controlled and well designed experiment for exploring brain fits.\n* Instruction breakdown: I really appreciated Figure 3 and 4. I thought it was pretty interesting to see how different instruction types fit responses in the visual cortex. This was a fairly novel result. \n* I also really liked Figure 6 which focused on shared variance. This was an interesting result that showed which instruction types had a higher degree of shared features which resulted in a higher degree of brain similarity." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies brain-encoding fits of instructions-tuned MLLMs in comparison with traditional methods for designing multimodal embeddings such as via CLIP. To study this question, the paper feeds different types of instructions to the MLLMs in comparison to other baselines like CLIP or vision encoders. The paper finds that instruction-tuned MLLMs have much better fits with visual areas of the brain than vision embedding models and similar performance to CLIP. The authors break down the instructions by type to identify which type of instructions have stronger correlation with visual areas of the brain and do the same with visual concept groups. The authors also include a shared variance experiment that aims to characterize whether shared features were used for each instruction group that were relevant for encoding model performance. \n\nOverall, I liked this paper and would vote for acceptance outside of a few concerns. It answered a relevant question about instruction-tuned MLLMs that haven’t really been explored before." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Comparisons: Overall, I think it would be better to include a baseline with a non-instruction-tuned MLLM that has the same architecture. For example, maybe BLIP-2 instead of InstructBLIP? This would have really explored the role of instruction tuning more thoroughly in comparison. BLIP-2 should be available off the shelf and I believe this would be a salient comparison. I would also be curious about how a language model of similar size would do. \n* Another concern here is that improvement over ViT-H could be due to an increase in parameters. 
See questions. \n* Nit: Could Figure 3 be sharpened? The text is quite blurry and difficult to read. \n\nIncorporation of some of these comparisons would help raise my score/confidence." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns", "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper introduces a novel approach to evaluating the brain alignment of instruction-tuned MLLMs, providing valuable insights into how these models process multimodal information in relation to human brain activity.\n- The findings have implications for understanding how brain activity corresponds to the processing of multimodal information, which could be valuable for cognitive neuroscience and AI research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the alignment of instruction-tuned multimodal Large Language Models (MLLMs) with brain activity, particularly focusing on how these models process visual information when prompted with natural language instructions. The study explores the brain alignment of MLLMs compared to unimodal and non-instruction-tuned multimodal models and assesses the effectiveness of various task-specific instructions on brain alignment. The paper presents a comprehensive analysis of how different instructions from MLLMs capture visual concepts and their correlation with brain activity, using fMRI data from the Natural Scenes Dataset (NSD)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The study relies on the NSD dataset, where subjects passively view images, which may not fully capture brain activity aligned with task-specific instructions. Active task engagement during fMRI scans could provide a more comprehensive evaluation.\n- How do the authors address ethical considerations regarding the use of fMRI data, especially in relation to participant privacy and data security?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "-\tIn Figure 2 it seems like CLIP is far outperforming the randomly initialized network. Why is there no asterisks?\n-\tIs Figure 6 summarizing the results across NSD-General? Why show only for one representative subject rather than the average of all?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "-\tInstruction-tuned multimodal models are an interesting way to investigate task tuning, as they have higher performance than prior fine-tuned models like the taskonomy set\n-\tThe comparison of retinotopic versus category-selective tuning is interesting, and may yield novel insights into high-level vision.\n-\tVariance partitioning allows a more fine-grained look into the contribution of different task tuning" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors test the ability of instruction tuned multimodal LLMs to match human visual cortex responses to static scenes in the NSD. They compare instruction tuned models to one standard multimodal model (CLIP) and a unimodal vision model.\n\nThe findings reveal that all multimodal models match visual cortex responses better in both low- and high-level visual regions. There does not seem to be a difference between CLIP and instruction-tuned models in any region. Within the instruction tuned models, there seem to be differences between tasks that best explain different parts of visual cortex, with lower-level tasks (e.g., color) better describing retinotopic areas and higher-level tasks (e.g., image captioning) better explaining higher-level tasks. Variance partitioning reveals that there is shared variance between many pairs of tasks, but some tasks like food labeling and scene recognition are more unique.\n\nIt is difficult to draw strong comparisons about these models compared to unimodal vision models though, because of the many differences (architecture, training data, task). As others have pointed out in prior work, advantages of multimodal models in visual cortex significantly decrease when you consider more balanced pairs (e.g., SLIP family models in Wang et al, and Conwell et al) and it seems likely that many of these differences would also diminish in more controlled modeled comparisons. If there are matched vision transformers trained on the same dataset, including them would strengthen the claims of the paper. \n\nThe task comparisons are interesting, but it would help for the authors to spell out what is at stake in these investigations. Does better fit of one type of instruction mean tell us what tasks to train neural networks on for improving human-aligned AI, or does it tell us something about the tuning properties of visual cortex. It would help to clarify this, particularly in the paper introduction and discussion." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-\tWhile the instruction model-brain comparisons are very interesting, it is not entirely clear what is at stake. 
Is the central question spelled out in the intro (lines 83-85) important for better human-alignment of AI, or in order to reveal tuning properties of the human brain? If the latter, the authors should flesh this out, and state some limits of this model comparison approach (see below). The primary goal of the study should be clarified in the introduction.\n-\tThe authors make a distinction between the advantage of multimodal models in high- versus low-level visual cortex. The results look very qualitatively similar in early versus late ROIs in Figure 2 (and also compared to other category-selective regions in Figure 10) so if this is a major point, it should be backed up with a statistical analysis (e.g., a non-parametric ANOVA, or a permutation-based comparison of the vision-multimodal difference in both regions).\n-\tPrior work has shown that when architecture and training set are matched (e.g., the SLIP family models) advantages of multimodal models largely go away (e.g., Wang et al 2023, Conwell et al., 2022). It seems likely that the advantage of the multimodal models over unimodal vision models in this paper is similarly due to different/richer training data, rather than multimodality itself.\n-\tThe distinction between the different instructions/tasks could be made clearer. It seems like the tasks vary from low- to high-level, and many of the paper's claims suggest this distinction. It would help to explicitly order the instructions and make this distinction clear early in the paper (e.g., Table 1). Without a priori labels for low- versus high-level tasks, some of the conclusions seem somewhat circular (e.g., a task that best explains retinotopic cortex is low-level). This ordering could also help make the results in Figure 3 clearer.\n-\tIt is difficult to see trends that are constant across the two different instruction-tuned models in Figures 3 and 11. InstructBLIP seems to show the trends highlighted in the paper, while mPLUG-Owl looks quite random. Perhaps the re-ordering of the tasks/color bar suggested above will help. Alternatively, perhaps small differences across the models are being magnified in the winner-take-all plot. Either way, the authors should address these discrepancies.\n-\tAs a small point, the ROI labels in Figure 2 could be more intuitive. I believe “whole brain” refers to the NSDGeneral mask? If so, the label should clarify that it is only visual cortex, not the whole brain. pRF-Visual and FLOC-PLACES would be clearer as “retinotopic early visual cortex” and “scene-selective” or something similar.\n-\tIt is unclear what Figure 5 is adding to the paper. Layerwise comparisons in different transformer networks are somewhat confusing and also seem dependent on the structure of encoders/decoders. The authors should consider moving this to the supplement. If it is important to the main findings, the authors should clarify how the layerwise comparisons relate to the main text, and how they should be interpreted in light of any architectural differences across models (e.g., encoder vs. decoder layers).\n-\tThe variance partitioning is a major strength of the paper, but it seems to only investigate shared variance between pairs of tasks. The more important questions seem to be how much (if any) unique variance is explained by models tuned on any one task, and what these results can tell us about the brain."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the CLIP baseline work?\n - L235 states “we input both image and ground truth caption pairs” — are the CLIP image and CLIP text embeddings simply concatenated? Are you using the final pooled output from CLIP, or some intermediate layer? If you are using an intermediate layer, how do you pool across image patches / tokens?\n- In L29, what does “effectively capture” mean?\n - Does this mean that the MLLM embedding correlates with the “expected” brain region that is known to do a certain type of processing? This is not obvious from the discussion in L365.\n- Below are a few minor comments.\n - L280: typo; “random **initialization** of the 3 MLLMs”\n- Below is a comment on the limitations section, although this note is not important for my rating and I leave this to the discretion of the authors.\n - I wish the paper would also briefly discuss the limitations of fMRI itself, and how it is not exactly synonymous with “processing in the brain.” Specifically, fMRI is imprecise, as it “measures a surrogate signal” where the “spatial specificity and temporal response” is constrained [1].\n - Nevertheless, fMRI is the most available / accessible brain signal to study, and it is still interesting to study the relation between machine learning models and fMRI.\n\nReferences\n\n[1] Logothetis, N. What we can do and what we cannot do with fMRI. *Nature* 453, 869–878 (2008)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The central question the paper studies (the alignment of MLLMs and fMRI) is novel, as MLLMs have not been studied in prior work. The paper is also interesting because it presents some insights into the internals of MLLMs, for example, the fact that image captioning overlaps highly with other instructions.\n- The presentation of the paper is good. The experiments are well-motivated and interpreted with precise language. The paper is also thorough in its setup and appropriately references prior work.\n - The ablations relating specific instructions or model layers to brain regions (Sec. 6.2) were interesting, even if for some instructions the association was inconclusive.\n - Ablations, such as the baseline “cross-subject prediction accuracy” (L174) and “variance partitioning approach” (L262), are motivated by prior work. The usage of a ridge regression based model (L242) and Pearson Correlation as a metric (L252) for brain alignment is also standard practice." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the correlation between multimodal LLMs (MLLMs) and fMRI from the Natural Scenes Dataset (NSD). 
Specifically, they feed images and instructions to the MLLMs, cache the embeddings, and fit a linear model to map from MLLM embeddings to fMRI. They find that MLLMs exhibit higher brain alignment than vision-only models and CLIP. They also analyze the correlation of specific instructions and specific MLLM layers with brain regions. Finally, they do a variance partitioning analysis to quantify the overlap between different pairs of instructions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The experiments could include additional ablations for the input to the MLLM.\n - For example, similar to the setup in Sec. 6.1, one could ablate feeding only the image or only the instruction to the MLLM, which reduce to “vision-only” or “LLM-only” baselines. This could help control for model size / other model statistics, i.e., ViT-H might also perform worse because it is a smaller model." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024correlating,\ntitle={Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xkgfLXZ4e0},\nnote={under review}\n}" }, "abstract": { "value": "Transformer-based language models, though not explicitly trained to mimic brain recordings, have demonstrated surprising alignment with brain activity. Progress in these models—through increased size, instruction-tuning, and multimodality—has led to better representational alignment with neural data. Recently, a new class of instruction-tuned multimodal LLMs (MLLMs) have emerged, showing remarkable zero-shot capabilities in open-ended multimodal vision tasks. However, it is unknown whether MLLMs, when prompted with natural instructions, lead to better brain alignment and effectively capture instruction-specific representations. To address this, we first investigate the brain alignment, i.e., measuring the degree of predictivity of neural visual activity using text output response embeddings from MLLMs as participants engage in watching natural scenes. Experiments with 10 different instructions (like image captioning, visual question answering, etc.) show that MLLMs exhibit significantly better brain alignment than vision-only models and perform comparably to non-instruction-tuned multimodal models like CLIP. We also find that while these MLLMs are effective at generating high-quality responses suitable to the task-specific instructions, not all instructions are relevant for brain alignment. Further, by varying instructions, we make the MLLMs encode instruction-specific visual concepts related to the input image. This analysis shows that MLLMs effectively capture count-related and recognition-related concepts, demonstrating strong alignment with brain activity. Notably, the majority of the explained variance of the brain encoding models is shared between MLLM embeddings of image captioning and other instructions. These results indicate that enhancing MLLMs' ability to capture more task-specific information could allow for better differentiation between various types of instructions, and hence improve their precision in predicting brain responses." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "brain encoding", "fMRI", "visual processing", "multimodal instruction-tuned models", "language decoder", "LLMs", "MLLMs" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5ab0da392e0f68af6635e8ac452c88cdf4adccbc.pdf" }, "presentation": null, "primary_area": { "value": "applications to neuroscience & cognitive science" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Correlating instruction-tuning (in multimodal models) with vision-language processing (in the brain)" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xlbXRJ2XCP
MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks
main
Active
Graph neural networks;graph pooling;graph coarsening;maxcut
learning on graphs and other geometries & topologies
3;5;5;6
4;2;3;4
3;3;3;2
1;3;2;3
3;3;3;3
4.75
3.25
2.75
2.25
3
-0.207514
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Introducing the concept of maxcut into attributed graph pooling is novel and meaningful, and it is worth noting the proposed method is feature-aware and differentiable.\n- The background and related work are introduced in detail and accompanied by illustrations, which greatly aid in understanding the proposed method.\n- It is encouraging that the proposed method was tested on three tasks, including maxcut partition, graph classification, and node classification." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces an innovative method for computing the MAXCUT in attributed graphs which is fully differentiable, enabling joint optimization of the MAXCUT with other objectives. Leveraging the resulting MAXCUT partition, the authors develop a hierarchical graph pooling layer tailored for Graph Neural Networks. The authors claim that the pooling layer is sparse, differentiable, and effective for downstream tasks on heterophilic graphs, addressing a key challenge in graph learning by providing a versatile and adaptable pooling strategy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The authors are advised to theoretically and empirically analyze the complexity of MaxCutPool, showing whether it introduces additional computational overhead to GNNs.\n- It is also recommended that the authors provide the code to facilitate a better understanding of the proposed method and to ensure the reproducibility of the experiments.\n- The authors claim that MaxCutPool is particularly effective for heterophilic graphs, but no detailed analysis is provided. What is the underlying interaction between MaxCutPool and heterophilic graphs? Additionally, considering that traditional message passing (including GIN) is designed for homophilic graphs and is regarded as a low-pass filter, could this conflict with MaxCutPool and potentially lead to poorer results?\n- My primary concern is with the experimental section. In graph classification tasks, the proposed approach does not demonstrate satisfactory performance and even falls below the no-pooling baseline on nearly half of the datasets. In Table 3, the authors mark the performance of the no-pooling baseline in gray, even when it ranks highest, which is confusing.\n- Is MaxCutPool-NL learning $\\mathbf{s}$ solely with task loss after removing $\\mathcal{L}_{cut}$? Does this imply that the core concept of MaxCut has been removed? 
If so, it still appears to be comparable.\n- Although the work emphasizes its advantages on heterophilic graphs, it does not perform better than other pooling methods on the GCB-H and MUTAG datasets, which have the highest levels of heterophily.\n- Compared to graph classification, the performance on node classification is satisfactory. However, it seems that datasets are not the commonly adopted ones for node classification (such as Planetoid or OGB) and exhibit very low levels of heterophily.\n\nThe method and idea are very interesting, but the experiment does not sufficiently demonstrate their effectiveness and necessity. I will consider increasing the score once this issue is addressed." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* The method uses HetMP, which can be interpreted as a Laplacian sharpening kernel that enhances differentiation between signals across nodes, thereby reducing smoothness. This approach performs well on heterophilic graphs. However, in Figures 2(b-c), the steps seem to be based on a homophily assumption—where neighboring nodes tend to belong to the same cluster. Could you provide further clarification on this?\n* Beyond Equation (4), how does the proposed method perform when using other graph kernels or filters, such as high-pass filters? Additionally, how does the method perform on homophilic graphs?\n* Can the proposed method scale to graphs of arbitrary size, or are there practical limitations on the scalability threshold?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The writing is generally reader-friendly, featuring detailed illustrations such as figures and hyperparameters. Theoretical analyses are also provided.\n- The proposed method is easy to understand yet remains effective, performing well in experiments.\n- This work includes comprehensive experiments covering MaxCut partition computation, graph classification, and node classification, which lend credibility to the conclusions presented in the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel GNN-based approach for solving the MaxCut problem in attributed graphs. The proposed method exhibits good performance across multiple experiments, particularly when evaluated on the newly introduced heterophilic graph benchmark dataset for graph classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Please refer to the questions below." 
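A note on the HetMP question above: since the reviewer reads the operator as a Laplacian-sharpening kernel, one generic form of such a high-pass propagation step is sketched below. This is only an assumption about the operator family under discussion, not the paper's actual HetMP layer; the delta parameter stands in for the sharpening strength that a later review mentions.

```python
# Hedged illustration (not the paper's exact HetMP): a generic Laplacian-sharpening step.
# With L = I - D^{-1}A, the update H' = (I + delta * L) H amplifies the difference between
# a node's features and its neighbourhood mean, i.e. it acts as a high-pass filter.
import torch

def sharpening_propagation(H, A, delta=2.0):
    """H: (n, d) node features, A: (n, n) dense adjacency, delta: sharpening strength."""
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    neigh_mean = (A @ H) / deg                 # D^{-1} A H
    return H + delta * (H - neigh_mean)        # (I + delta * L) H
```

Larger delta pushes neighbouring representations further apart, which is why such operators are associated with heterophilic rather than homophilic settings.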
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In experiment 4.1, what were the features used for GNN? Were these learnable vertex embeddings?\n2. How does maxcutpool scale with the number of vertices in a graph?\n3. What is the intuition as to why learning the max cut operator provides a good pooling operator? Is it just that this is a high-frequency projection?\n4. In experiment 4.3, could you provide the tuned baseline for a GNN with no pooling?\n5. With $\\delta$ set to 2 in your node classification experiments, your propagation operator is already tuned towards heterophilic settings, and the stated intuition for why maxcutpool works is because it is projecting out high frequency components. Could you provide a node-classification results as a function of delta? Given that these datasets are all heterophilic, it's hard to tease out whether the improvements are due to the pooling or the propagation operator.\n6. How was hyperparameter tuning performed?\n7. Can this work be extended to link prediction in any meaningful way? Would it be possible to view super-node membership as a labeling trick?\n8. Does the unpooling operator involve any normalization?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is clearly presented\n2. The experiments are well defined and clearly motivated\n3. The authors make significant efforts to contextualize their work within the rest of the literature.\n4. The algorithm is clearly outlined in enough detail to reimplement" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Graph neural networks are constructed out of a few relatively simple building blocks -- A propagation operator, a message function, and a pooling function. These three components have received significant attention over the last few years, with both heuristic and graph theoretic approaches to improve the performance of GNNs on a variety of graph ML tasks. In this work, the authors present a novel pooling operator termed MaxCutPool, that is inspired by the classic graph theory problem of finding the max cut. They provide justification for the value of this pooling operator through robust experiments to find significant performance improvements." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is well written, but not without weaknesses. In particular:\n\n1. The authors do not discuss the computational cost of running their pooling operators in comparison to others. While potentially less relevant for graph classification tasks, it is highly relevant for node classification tasks. Please include some discussion, at least in the appendix, of this.\n2. 
The authors do not justify _why_ it is expected that the max cut should provide a better pooling operator.\n3. The results in experiment 4.2 indicate only mild performance gains, and the presented method frequently underperforms the case with no pooling. It was hard for me to understand from the text what the intuition for this is.\n4. The experiments in section 4.3 are hard to understand without a no-pooling option." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* l.289: Is CON always defined using the assignment matrix $\mathbf{S}$ based on nearest neighbor aggregation?\n* How well does the differentiable MaxCut approximate MaxCut compared to other algorithms, in terms of approximation quality and time complexity? \n* Is the differentiable MaxCut required, or does this method work with other algorithms for MaxCut?\n* How important is the nearest neighbor aggregation?\n* How do the other methods perform with the same level of hyperparameter tuning?\n* Why is there no reproducible implementation provided?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper proposes an intuitive pooling method. It is very nicely written and clearly describes the current state-of-the-art for graph pooling. I understood almost all of the details." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel graph pooling method based on the MaxCut problem. As MaxCut assigns high scores to disconnected nodes, this paper motivates its connection to one-every-K pooling methods and score-based methods. They formulate a differentiable form of MaxCut which they integrate into a pooling layer that utilizes heterophilic message-passing layers. Several experiments comparing MaxCutPool to other pooling methods are conducted." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* W1: Auxiliary loss: As with other top-k pooling approaches, the selection is a discrete action and thus not differentiable. Optimizing $\mathcal{L}_{cut}$ may not lead to an s that is well-suited for the downstream task. Claiming that the proposed pooling operator is fully differentiable is thus misleading.\n* W2: This paper proposes multiple parts without thoroughly analysing each of them:\n * i) Gradient descent for combinatorial problems is typically not preferred, as the solution can be arbitrarily bad due to local minima. As this paper proposes to use gradient descent for MAXCUT, there needs to be a more detailed comparison with existing methods, ideally in terms of time/space complexity, expected performance and guarantees. \n * ii) It also remains unclear whether this gradient-based approach is better for downstream tasks. 
The only experiment that compares the proposed MaxCutPool with existing MAXCUT algorithms is in Table 2, which shows slight improvements. To me it is not convincing to use this gradient-based approach vs. traditional algorithms within MaxCutPool (for concreteness, a toy sketch of such a gradient-based relaxation is included after this record).\n * iii) While the nearest neighbor aggregation is proposed as an algorithm for many pooling methods, it is only evaluated in combination with MaxCutPool. As the difference is rather small, it is unclear whether this step is needed. If the authors decide to propose such an algorithm, a more detailed evaluation would be needed. \n* W3: The overall contribution is quite limited. As pointed out by the authors, many algorithms for graph pooling exist, and the theoretical and empirical benefits of MaxCutPool do not convince me.\n* W4 Experiments: \n * i) There is no reproducible implementation provided.\n * ii) Graph classification: There seems to be a large hyperparameter optimization for MaxCutPool, while \"all [other] pooling layers were used with the default hyperparameters\". This does not seem to be a fair comparison. Especially when only splitting into train and validation sets, the search space needs to be of similar size.\n * iii) Heterophilic tasks are only defined for node classification, as heterophily is typically defined as having mostly different labels between adjacent nodes. As node labels do not exist for graph-level tasks, it is unclear what heterophilic graph classification means. The authors also did not define what heterophilic graphs are.\n \nAdditional minor points:\nl.156: Sharpening propagation operators cannot learn any kind of gradients, but sharp gradients.\nl.425: \"This is the first known instance of a non-expressive pooler passing the expressiveness test provided by this dataset, serving as a counterexample to Theorem 1 in Bianchi & Lachi (2023)\". To my understanding, Theorem 1 provides a sufficient condition, not a necessary one." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A GNN-based approach for computing MAXCUT on attributed graphs, used to implement graph pooling." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024maxcutpool,\ntitle={MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xlbXRJ2XCP},\nnote={under review}\n}" }, "abstract": { "value": "We propose a novel approach to compute the MAXCUT in attributed graphs, i.e., graphs with features associated with nodes and edges. Our approach is robust to the underlying graph topology and is fully differentiable, making it possible to find solutions that jointly optimize the MAXCUT along with other objectives.\nBased on the obtained MAXCUT partition, we implement a hierarchical graph pooling layer for Graph Neural Networks, which is sparse, differentiable, and particularly suitable for downstream tasks on heterophilic graphs." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph neural networks", "graph pooling", "graph coarsening", "maxcut" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5632c3e0767e0bcf16ebe248b6f8a188dd180538.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
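To ground the gradient-based MaxCut discussion in the reviews above (referenced in W2), a minimal generic continuous relaxation is sketched below: soft node scores s = tanh(z) in (-1, 1) are trained to maximize the relaxed cut (1/4) * sum_ij A_ij (1 - s_i s_j). This is a hypothetical sketch of the general family of approaches, not MaxCutPool's actual loss, scoring network, or pooling layer.

```python
# Hedged sketch: a generic differentiable MaxCut relaxation (not MaxCutPool's exact loss).
# Soft scores s_i = tanh(z_i) lie in (-1, 1); the relaxed cut value is
# 0.25 * sum_ij A_ij * (1 - s_i * s_j), so minimizing its negative pushes adjacent
# nodes towards opposite sides of the cut.
import torch

def relaxed_cut_loss(z, A):
    s = torch.tanh(z)                                   # (n,) soft partition scores
    cut = 0.25 * (A * (1.0 - torch.outer(s, s))).sum()  # relaxed cut value
    return -cut                                         # minimize the negative cut

def solve_maxcut(A, steps=500, lr=0.05):
    z = torch.randn(A.shape[0], requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        relaxed_cut_loss(z, A).backward()
        opt.step()
    return torch.tanh(z) > 0                            # hard partition by sign
```

As the reviews note, such first-order schemes can converge to poor local optima, which is why comparisons against classical MaxCut heuristics (e.g., SDP-based rounding or local search) in terms of cut quality and runtime are a reasonable request.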
xljPZuprBA
Exploring Edge Probability Graph Models Beyond Edge Independency: Concepts, Analyses, and Algorithms
main
Active
Random graph models;edge dependency;triangle density;subgraph densities;tractability;variability
learning on graphs and other geometries & topologies
3;5;5;6
4;3;3;4
3;3;2;3
2;2;3;3
2;2;2;3
4.75
3.5
2.75
2.5
2.25
-0.229416
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "-/-" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper builds on a solid theoretical foundation of edge-dependent random graph models, extending existing work to improve clustering and variability.\n- The supplementary materials provide sufficient details for understanding and reproducing the experiments.\n- The paper demonstrates clustering and variability across different real-world datasets by fitting the model to them.\n- The paper offers a _potentially_ useful tool for generating random graphs with enhanced clustering and variability by fitting model parameters to specific datasets.\n- The authors provide a thorough reproducibility package, including all experimental workflows, parameters, and scripts, though this package could benefit from more comprehensive documentation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a generative random graph model that aims to balance realism of the generated graphs with computational efficiency of the generation process, seeking to increase clustering and variability in graphs generated by traditional models like Erdős-Rényi and stochastic block models. To achieve this, the paper uses an edge-dependent graph model (EPGM), which they use to create controlled dependencies among edges, and proposes a tractable algorithm for graph generation. The study includes a theoretical framework, practical algorithms, and experimental validation to demonstrate the generation of graphs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Validation**: The term \"desirable\" graph properties---namely, realistic structures, patterns, and variability---is vague, highly domain-dependent, and lacks formalization in the paper, making it challenging to assess whether the proposed model achieves these goals. A formal definition and visual examples earlier in the text could help clarify these aims.\n- **Limited Exploration of Desirable Properties**: The paper does not fully explore the binding approach’s effects on other important network properties, such as modularity, conductance, and resilience, which are also highly desirable in certain domains.\n- **Dependence on Parameter Fitting**: The model generates graphs after fitting model parameters to real-world networks, rather than directly from scratch. It is unclear whether this model could lend itself to straightforward manual parameterization to generate synthetic graphs independently, a feature offered by models like the Lancichinetti-Fortunato-Radicchi (LFR) benchmark graph generator. 
This suggests that the graph generator has a narrow practical use.\n- **Scalability**: It is unclear if the model can efficiently generate realistic, extremely large graphs, given that the experimental results focus on relatively small networks.\n- **Unclear Tractability**: The explanation of the binding schemes' tractability could be more detailed and intuitive, especially as it seems that binding is computationally slower than edge-independent methods, which could hinder its scalability to larger graphs.\n- **Comparison with Established Benchmarks**: The paper does not compare with the Lancichinetti–Fortunato–Radicchi (LFR) benchmark data generator, which also generates graphs with **power-law degree distributions, variability, and high clustering**. Given that the LFR model shares similar motivations and goals with the proposed method, a comparison here would add necessary context. Additionally, faster and more efficient follow-up models to LFR align well with the goals of this study, particularly in terms of tractability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "(a) A similar effort can be found in exchangeable network models, in view of relaxing conditional independence among edges. See, for example:\n\nHarry Crane and Walter Dempsey. Edge exchangeable models for interaction networks. Journal of the American Statistical Association, 113(523):1311–1326, 2018. PMID: 30467447.\n\nWeichi Wu, Sofia Olhede, and Patrick Wolfe. Tractably modeling dependence in networks beyond exchangeability. Bernoulli, 31(1):584–608, 2025.\n\nPlease discuss this line of literature in more detail.\n\t\t\n(b) The organization of the paper can be improved. For example, Definition 3.1 and Theorem 3.3 seem to take up too much space without demonstrating any clear importance. By contrast, the key definition of binding is placed in the appendix. \n\n(c) The authors focus on a specific subclass (good subsets) of EPGM in Section 4; however, why are the considered subsets important? The paper can be improved if more motivation beyond feasibility is given. \n\n(d) The authors should compare and discuss more general models, such as the graphon model and the exponential random graph model mentioned in their Section 3.2, in addition to the four specific models in Section 5.4. These models exhibit edge dependence and are able to produce many triangles (especially the exponential random graph). It would be beneficial if the authors could provide a discussion on these models. \n\n(e) Theorem 5.7 provides the probability result for 3-motifs only, and the proof uses an enumeration method that could be difficult to extend to higher-order motifs, which can sometimes be restrictive in practice. A way to alleviate this is to provide code in the package for motifs 4, 5, and 6, as discussed on page 21. 
Also, please discuss how the closed-form result varies with changes in \( p \), \( g \), and \( R \), so that readers can better understand the results. Is it possible to consider several motifs together? \n\n(f) Besides triangles, other network motifs, such as transitivity, can be important. Could the binding methods extend to 'transitivity' and 'triangles' simultaneously? Also, please add some references on the importance of triangles, since the paper focuses on this quantity. \n\n(g) The authors conduct fittings in their simulations; however, the discussion of fitting in the main text is quite limited. If possible, the authors could provide some discussion on the properties of fitting, such as consistency.\n\n(h) The results in Table 1 are a little confusing. The caption states ``The statistics are averaged over 100 random trails\". But the numbers of triangles are obtained from the tractability results. The dataset is given, so what does a `random trial' mean? For edge-independent models, densities of triangles can be computed tractably, too. On the other hand, the results in Table 1 might not be very meaningful. First, the results show that a parameter of a model is close to the corresponding real-world data. However, what really matters is that the proposed method can produce a graph with a similar property with high probability, which can be evaluated using the simulated mean squared error $(\Delta_{\text{true}} - \Delta_{\text{generated}})^2$. Second, the comparison is not meaningful. For instance, in the case of the ER model, the authors estimate the connection probability matrix using the ER graph, then optimize their model parameters (number of triangles) based on a loss function that relates to the triangles, and subsequently compare the number of triangles from their model with those from the ER graph. Then the results are surely better, since the authors are optimizing the number of triangles (with respect to the observed number of triangles) over a large class of models that includes the ER graph. On the other hand, the ER model, or any other edge-independent model, is not necessarily a good fit for the real dataset, beyond the discrepancy in triangle counts. Therefore, improving the triangle counts relative to such possibly inappropriate models could be less significant in practice. The authors could consider comparing with other models that take triangles into account, such as exponential random graphs and others.\n\t\t\n(i) Please discuss in detail the differences between LOCLBDG and PARABDG, as these two methods are distinct (as evidenced by the results in Figure 3). It would be beneficial to explain under which scenario one should choose LOCLBDG (or PARABDG)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors propose a general binding algorithm, as well as local and parallel binding algorithms, to generate networks with high variability and clustering. They also present closed-form results concerning triangles and discuss time complexities." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the concept of EPGM, which is wider than the class of edge-independent graph models. The authors propose a general binding algorithm, as well as local and parallel binding algorithms, to generate networks with high variability and clustering. 
They\nalso present closed-form results concerning triangles and discuss time complexities. The authors conduct simulation studies to evaluate their models. Overall, the idea of binding is interesting." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Only provides results for triangles\n2. Focus on a specific subclass of EPGM\n3. Simulation results are not very convincing" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Two kinds of questions are asked above, regarding 1) the modeling side of binding, and 2) possible guarantees that binding alleviates the dense subgraph count vs. overlap issue that motivates the work.\n\n### Typos / minor\n- Lines 3-5 in Algorithm 2: Is this simply a for-loop from $1$ to $R$? That might be slightly clearer if so. \n- Line 371 \"EGPMs\" -> \"EPGMs\"" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The central problem that is tackled, alleviating limitations of edge-independent / inhomogeneous ER graphs, is well-motivated in the text.\n- The proposed concept of binding can be applied to augment several different edge-independent random graph models.\n- Experiments demonstrate the effectiveness of binding at matching triangle counts on four well-known kinds of RGMs and several well-known graph datasets.\n- There is theoretical work guaranteeing the tractability of the sampling algorithms.\n- The organization of the paper is generally logical and clear." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the concept of binding to upgrade edge-independent random graph models, which are RGMs in which each edge is sampled independently. Binding allows for sampling a graph with given marginal probabilities for each edge, but with some dependence between the edges, allowing for more realistic graph distributions in terms of certain statistics, like a high clustering coefficient. The idea is to partition all possible edges, then to sample a single uniform random variable for each group in the partition, and add edges to the sampled graph if the random variable is less than the edge's marginal probability. This adds edge dependence in the sense that if $e$ and $e'$ are in the same group of the partition, and the marginal probability of the former edge is higher, then the sampling of the latter guarantees the sampling of the former. The paper then proposes a scheme for finding a partition at sampling time, as well as a parallelized scheme which relaxes the edge partition to independent edge groupings (a minimal code sketch of this sampling step is included after this record). 
Finally, the paper presents theoretical validation of the tractability of their methods, showing that the samples are generated in quadratic time; and empirical validation of the increased quality of samples when using binding, showing that upon fitting parameters of their model to match the input graph's triangle count, samples indeed match the triangle count closely, while not harming how well the node degree and node pair distance distributions are matched." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some writing could be clearer. For example, at the end of page 1, the authors say they will explore EPGMs, but it is not clear at that point what it is, or whether it is a previously-proposed concept versus something being proposed in this paper. More natural language descriptions of math, e.g., Property 4.4, would facilitate reading.\n- Given that binding is perhaps the main concept of this work, its formal definition should be in the main paper, and there should probably be some natural language description of the idea of general binding along with Algorithm 1 (e.g., as given in the summary above). There could also be more description of the modeling implications of binding: Does binding reflect some natural process that gives rise to real-world graphs? On the other hand, could the model assumption in binding be limiting in some way?\n- At least in the main paper, the theoretical work gives guarantees about the tractability of the sampling algorithms, but there is no proof that binding can alleviate modeling issues beyond Proposition 5.2, which is fairly vague - \"binding produces higher or\nequal subgraph densities\", but how much higher? E.g., does binding solve the motivating issue in Theorem 3.3 that edge-independent models cannot generate many triangles with low overlap?\n- There could be more discussion of binding" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "None" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My questions are mostly about how \"realistic\" this random process is:\n\n1. What are some types of real-world network properties you believe *cannot* be captured by the binding processes you outlined in the paper?\nCan it capture realistic community structure faithfully?\n\n2. One thing missing from the paper, as outlined in the weaknesses part, is an experimental part showing the this specific model is useful for downstream applications (and not just as proxy for estimating quantities that are practically useful in downstream applications, such as triangle density). I would imagine that one way to instantiate it is by pre-training some graph neural network on this kind of graphs and show that this pretraining helps improve the performance on some applied benchmark. Do you think it will be possible/realistic to design such an experiment? In what context do you think this random model could come up useful?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- An elegant, novel and flexible method to generate random graphs. Although the generated graphs from, say, the local binding model do not yet seem fully realistic to me, this is a good step in the right direction and the basic binding primitive is natural and analyzable.\n\n- The paper is well written in general. The mathematical statements are clear, intuitive, interesting, and believable (I have not fully checked the proofs).\n\n- The topic of \"realistic\" random graph models is very important and this paper certainly advances the literature (even if by a small step) on this topic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new configurable way to generate random graphs with dependencies between the edges, in a fine-grained way that make it possible to control parameters such as the number of triangles, clustering coefficients, etc. \n\nThe simplest and most famous random graph models, such as Erdos-Renyi, have edges appear independently of each other (in Erdos-Renyi for example the probability of each edge to appear is $p$, independently of all others). This kind of behavior makes analysis easier, but on the other hand it provably cannot capture properties of realistic graphs, such as a large number of triangles/motifs, and the tendancy of realistic graphs to be relatively clustered.\n\nMore modern random graph models, that are better at capturing real world characteristics, tend to have dependencies between the edges. One example is random geometric graphs, where points are randomly located on a high-dimensional sphere. But still, most of these models are not very flexible, they are controlled by some global parameters but it may be difficult/impossible for the designer to build a graph that matches specific desired properties.\n\nThis paper provides several variants of a new and simple method, called binding, to generate random graphs with high variability (i.e., that do not match a single specific pattern) but also being realistic (having high triangle density, high clusterability, etc). The core idea is simple, and stems from the following question: what is the random graph model with all edges appearing with probability $p$ (possibly dependently), and, say, the largest expected number of triangles? It turns out that this is the distribution which is equal to a complete graph with probability $p$, and empty graph otherwise. Binding is a similar process, picking the same \"threshold\" to groups of random nodes in order to create small cliques / dense graphs. The proposed model allows to work with predetermined edge probabilities, not necessarily uniform.\n\nThe authors propose a couple of different binding methods, one that is sequential and another one that is parallelizable and easier to analyze. They prove several theoretical results: for example, that it is possible to efficiently match a target expected number of triangles by a suitable optimization process. The authors also run some experiments demonstrating the verstality of this method in practice, showing ability to generate diverse and realistic degree distributions or obtain a large triangle coefficient. Another experiment shows that the running time is decently fast, especially for the parallelizable version." 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- An elegant, novel and flexible method to generate random graphs. Although the generated graphs from, say, the local binding model do not yet seem fully realistic to me, this is a good step in the right direction and the basic binding primitive is natural and analyzable.\n\n- The paper is well written in general. The mathematical statements are clear, intuitive, interesting, and believable (I have not fully checked the proofs).\n\n- The topic of \"realistic\" random graph models is very important and this paper certainly advances the literature (even if by a small step) on this topic." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new configurable way to generate random graphs with dependencies between the edges, in a fine-grained way that makes it possible to control parameters such as the number of triangles, clustering coefficients, etc. \n\nThe simplest and most famous random graph models, such as Erdos-Renyi, have edges appear independently of each other (in Erdos-Renyi, for example, the probability of each edge appearing is $p$, independently of all others). This kind of behavior makes analysis easier, but on the other hand it provably cannot capture properties of realistic graphs, such as a large number of triangles/motifs, and the tendency of realistic graphs to be relatively clustered.\n\nMore modern random graph models, which are better at capturing real-world characteristics, tend to have dependencies between the edges. One example is random geometric graphs, where points are randomly located on a high-dimensional sphere. But still, most of these models are not very flexible: they are controlled by some global parameters, but it may be difficult/impossible for the designer to build a graph that matches specific desired properties.\n\nThis paper provides several variants of a new and simple method, called binding, to generate random graphs with high variability (i.e., that do not match a single specific pattern) that are also realistic (having high triangle density, high clusterability, etc.). The core idea is simple, and stems from the following question: what is the random graph model with all edges appearing with probability $p$ (possibly dependently), and, say, the largest expected number of triangles? It turns out that this is the distribution which is equal to a complete graph with probability $p$, and an empty graph otherwise. Binding is a similar process, applying the same \"threshold\" to groups of random nodes in order to create small cliques / dense graphs. The proposed model allows working with predetermined edge probabilities, not necessarily uniform.\n\nThe authors propose a couple of different binding methods, one that is sequential and another one that is parallelizable and easier to analyze. They prove several theoretical results: for example, that it is possible to efficiently match a target expected number of triangles by a suitable optimization process. The authors also run some experiments demonstrating the versatility of this method in practice, showing the ability to generate diverse and realistic degree distributions or obtain a large triangle coefficient. Another experiment shows that the running time is decently fast, especially for the parallelizable version."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/103eb1afb82ca41bc9e79104d539f16a1b0ff899.pdf" }, "presentation": null, "primary_area": { "value": "learning on graphs and other geometries & topologies" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6667cb63203da5a8e910bf5f6c37ae2d5135f98e.zip" }, "title": { "value": "Exploring Edge Probability Graph Models Beyond Edge Independency: Concepts, Analyses, and Algorithms" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
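To make the binding mechanism summarized in the reviews above concrete (see the pointer in the reviewer summary), here is a minimal sampling sketch of that description: edges are grouped, one uniform variate is drawn per group, and an edge is realized whenever its marginal probability is at least that variate. This is a hypothetical illustration of the general idea only; the random grouping and function names are assumptions, not the paper's LOCLBDG or PARABDG algorithms.

```python
# Hedged sketch of "binding" as described in the reviews: one shared uniform per edge
# group, so edges in a group are realized together whenever their marginal probabilities
# allow it. Marginals are preserved; the grouping passed in is only an illustrative choice,
# not the paper's partitioning scheme.
import numpy as np

def sample_with_binding(P, groups, rng=None):
    """P: (n, n) symmetric matrix of marginal edge probabilities.
    groups: dict mapping a group id to a list of (i, j) node pairs (a partition of pairs)."""
    rng = np.random.default_rng(rng)
    A = np.zeros_like(P, dtype=int)
    for pairs in groups.values():
        u = rng.uniform()                  # one shared threshold per group
        for i, j in pairs:
            if P[i, j] >= u:               # edge kept iff its marginal is at least the shared uniform
                A[i, j] = A[j, i] = 1
    return A
```

With every pair in its own group this reduces to ordinary edge-independent sampling; with larger groups, high-probability edges tend to appear together, which is how positive dependence (and hence more triangles) arises while each marginal P[i, j] is preserved.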
xlrpVyMIwz
Positional Encoder Graph Quantile Neural Networks for Geographic Data
main
Active
Graph Neural Networks (GNNs); Quantile regression; Geospatial data; Uncertainty quantification; Calibration; Model recalibration.
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
1;3;5;6
4;4;3;4
2;2;3;3
2;1;2;3
2;2;3;3
3.75
3.75
2.5
2
2.5
-0.375823
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please elaborate on the limitation of the proposed method (e.g., computational cost).\n- Is the proposed method robust against noise?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- PE-GQNN combines quantile prediction and distribution recalibration in a single model (not two-stage models), enhancing both predictive accuracy and calibration efficiency.\n- By limiting GNN operations to specific features and introducing target values near the output layer, the model effectively prevents data leakage and improves computational efficiency.\n- The authors use pinball loss for quantile regression, which allows one to provide a regularization effect, improving prediction reliability across diverse quantile levels.\n- Extensive experiments on multiple real-world datasets demonstrate PE-GQNN’s superior performance in both predictive accuracy and uncertainty estimation compared to existing methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a novel model for spatial data prediction. PE-GQNN integrates Positional Encoder Graph Neural Networks (PE-GNN) with Quantile Neural Networks (QNN) to improve predictive accuracy and quantify uncertainty. Experiments on real-world datasets show that PE-GQNN consistently outperforms existing methods in predictive accuracy and uncertainty calibration." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The proposed method is practically useful, but the way PE-GNN and QNN are combined is somewhat straightforward.\n- There are several innovations in the architecture, but they are all practical techniques, not theoretically sophisticated.\n- There is no discussion of computational cost for high-dimensional data.\n- There is no discussion of the shortcomings of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The Figure 4(a) does not have legend. What does the 9 curves represent? It is unclear to me how does these 9 curves represent 10 samples.\n2. 
Instead of using MADECP, what is the performance when using a commonly adopted interval, for example, the 95% confidence interval?\n3. Real-world geographic data are usually collected in a temporal manner, so how is the data separated into the different train/val/test segments? It will certainly lead to data leakage if the split does not use only historical data for training.\n4. The sensors producing the geographic data will also be removed/replaced/re-deployed/defective from time to time in real-world applications. Is the proposed method also robust in these cases?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The framework is easy to understand and follow.\n2. The final experimental results indicate some effectiveness of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a distribution-free uncertainty quantification framework by integrating PE-GNNs and Quantile Neural Networks for geographic data." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The term \"fully nonparametric\" could be misleading. It would be more accurate to use \"distribution-free.\"\n\n2. The novelty of the proposed method is limited. Quantile regression has already been proposed and is widely used in distribution-free uncertainty quantification. For example:\n - \"Single-model uncertainties for deep learning.\" Advances in Neural Information Processing Systems 32 (2019).\n - \"Image-to-image regression with distribution-free uncertainty quantification and applications in imaging.\" International Conference on Machine Learning. PMLR, 2022.\n\n Additionally, using KNN-Graph with geographic coordinates has been proposed and widely applied in different domains:\n - \"Dynamic graph CNN for learning on point clouds.\" ACM Transactions on Graphics (TOG) 38.5 (2019): 1-12.\n - \"Spatiotemporal graph convolutional networks for earthquake source characterization.\" Journal of Geophysical Research: Solid Earth 127.11 (2022): e2022JB024401.\n\n3. PE-GNN does not claim to be SOTA. So which baselines are considered SOTA in this submission? PE-GNN is better than GNN but not SOTA. At the very least, there are more available models applying GNNs to earthquake data and traffic data." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What does ST stand for in the last sentence of the 1st paragraph of Section 2?\n\nIn Figure 4(a), why does the predicted density of PE-GQSAGE look like a Gaussian distribution? The proposed method outputs predicted quantiles at the given tau. So, the prediction by the proposed method does not necessarily have a Gaussian shape."
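Several of the comments above hinge on what the quantile outputs are (predictions at a given tau rather than a parametric density), so the standard pinball (quantile) loss used to train quantile neural networks is sketched below. This is a generic formulation, not the paper's implementation; variable names are illustrative.

```python
# Hedged sketch: the standard pinball (quantile) loss for quantile regression.
# For target y and prediction q at level tau in (0, 1), the loss is
#   tau * (y - q)        if y >= q
#   (tau - 1) * (y - q)  otherwise,
# whose minimizer is the conditional tau-quantile of y given x.
import torch

def pinball_loss(y, q, tau):
    diff = y - q
    return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))
```

Because the minimizer at level tau is the conditional tau-quantile, training across many tau values yields a distribution-free predictive quantile function; nothing in the objective forces the implied density to look Gaussian, which is the point behind the Figure 4(a) question.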
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "Proposed some techniques for quantifying uncertainty in spatial regression.\nExperiments with three datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a graph neural network-based method for uncertainty quantification in spatial regression." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The technical contribution of this paper is limited since the proposed method is a combination of Positional Encoder Graph Neural Networks and quantile regression. The proposed method introduces some techniques, such as the use of response variables y and quantile parameters tau in neural networks. However, the novelty of these techniques is incremental.\n\nThe experimental results are not convincing. There have been proposed many quantile regression methods in neural networks; i.e., outputting variance in the last layer, the use of sparse Gaussian processes at the last layer. These methods can be easily combined with graph neural networks. The comparison with such existing methods can demonstrate the effectiveness of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper has not ethics concerns founded." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It is better to add some simple examples to illustrate the integration process and novelty in the process on how to design this new graph. \n\n2. The paper shall demonstrate the complexity change for this new PE-GQNN graph." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "There are several strengths demonstrated in the paper:\n1. The paper introduces the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a new approach that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework, requiring minimal assumptions about the predictive distributions.\n2. The paper has demonstrated the results on three datasets: California Housing, Air Temperature, and 3Droad with 6 different approaches including the proposed PE-GQNN." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors proposed the Positional Encoder Graph Quantile Neural Network (PE-GQNN) as a new framework to enhance predictive modeling for geographic data. The major contributions of this paper are listed as the following: The empirical results showed the capability of PEGQNN to achieve lower MSE, MAE, and MPE compared to traditional GNN and PE-GNN. 
Also, PE-GQNN demonstrated substantial improvements in predictive accuracy and uncertainty quantification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The weaknesses of this paper are listed as the following:\n\n1.\tThe innovation of this paper seems incremental. Positional Encoder Graph Quantile Neural Network (PE-GQNN) is just a simple combination of PE-GNN with Quantile regression model.\n\nFirst, the paper shall illustrate in detail the challenges in the integration process. Normally a good integration will include some short cuts to reduce the total cost while comparing with the cost of simple addition of several algorithms together directly. Please try to add some \"novel points\" or new ideas to demonstrate your merits in integration.\n\nSecond, it is better to add some simple examples to illustrate the integration process and novelty in the process on how to design this new graph. Please illustrate which cost you saved compared the cost that you simply integrate several different phases from literature.\n\nThird, will the new integrated framework achieve higher performance compared simply by adding several phases together? What other advantages do you have for the new framework?\n\n\n2.\tThe paper shall demonstrate the complexity change for this new PE-GQNN graph. \n\nIs this new graph a simple integration? \nHow much is the increase of the total complexity? \nIs there any new approach to lower the total complexity?\n\nPlease provide a specific complexity analysis comparing PE-GQNN to PE-GNN, including time and space complexity.\n\n\n3.\tFor the experiments, the following should be addressed. \n\nFirst, the paper presented experimental results obtained from three datasets: California Housing, Air Temperature, and 3Droad. It seems that the paper lacks discussion about whether other kinds of datasets are suitable for this new approach. Also, how much the total cost changes to implement this new approach.\n\nSecond, please discuss the generalizability of their approach beyond the three datasets used. Also, please address what characteristics of a dataset make it suitable for PE-GQNN.\n\nThird, additionally, it would provide valuable practical insights if the authors can demonstrate a comparison of implementation costs between PE-GQNN and existing methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose PE-GQNN, which combines GNNs, quantile loss, and recalibration to improve uncertainty quantification and predictive accuracy in spatial data, outperforming current methods without increasing computational complexity." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024positional,\ntitle={Positional Encoder Graph Quantile Neural Networks for Geographic Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xlrpVyMIwz},\nnote={under review}\n}" }, "abstract": { "value": "Positional Encoder Graph Neural Networks (PE-GNNs) are a leading approach for modeling continuous spatial data. However, they often fail to produce calibrated predictive distributions, limiting their effectiveness for uncertainty quantification. 
We introduce the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework, requiring minimal assumptions about the predictive distributions. We propose a new network architecture that, when combined with a quantile-based loss function, yields accurate and reliable probabilistic models without increasing computational complexity. Our approach provides a flexible, robust framework for conditional density estimation, applicable beyond spatial data contexts. We further introduce a structured method for incorporating a KNN predictor into the model while avoiding data leakage through the GNN layer operation. Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Graph Neural Networks (GNNs); Quantile regression; Geospatial data; Uncertainty quantification; Calibration; Model recalibration." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6ebb1a286fcba9b1b6ca861b716ffb25fcb94659.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Positional Encoder Graph Quantile Neural Networks for Geographic Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xlxDTVAbNM
Lowering Data Diversity can Accelerate Training: Case Studies in Synthetic Tasks
main
Active
synthetic tasks;data diversity;curriculum learning;data filtering;learning plateaus;batch gradients
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;3;5;5
4;4;4;3
2;1;3;3
1;1;2;2
1;2;3;3
3.5
3.75
2.25
1.5
2.25
-0.522233
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Fig 5, is there a reason why $\\alpha=1$ performed best in all cases?\n- It would be interesting to have some theoretical justification for why the interventions, especially the power law sampling, lead to increased training speed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is overall clearly written.\n- The observations are indeed interesting and surprising especially since the interventions improve training without discriminating between datapoints." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors conduct a controlled set of experiments on three synthetic settings. They observe the loss plateau and find that by biasing the training distribution away from the test distribution to reduce data diversity at the start of training, one can accelerate training. Results from various simple and counterintuitive data interventions interventions suggest that simpler data filtering techniques can match or outperform complex optimization methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- From a loss landscape perspective, the initial speedup seen upon reducing data diversity suggests that the parameters are getting stuck in a local suboptimal minimum that may lie near the initialization. However, numerous methods have been developed to avoid getting stuck in such minima such as adding regularization, introducing learning rate schedulers, etc. Therefore, while reducing data diversity may seem to help speed up, it may not contribute to finding a better generalization solution at all. \n- The paper claims that a purely data-driven approach can have the same speedup as optimization methods; however, low similarity and high variance are also often associated with sharp minima in the loss landscapes. Therefore, the authors could consider comparing methods like sharpness-aware minimization, momentum, etc. in this context. Moreover, PCGrad is designed for training across multiple tasks, that are expected to have potential interference. However, I wonder how scalable it would be in a single-task setup for increasing model size, even though it does improve training speed. \n- While the authors mention that their synthetic settings are simple, I think finding the \"optimal\" amount of biasing needed for initial speedup can be quite challenging on real data, as it may require an extensive search over hyperparameters. I wonder if the authors could suggest any potential strategies for finding the optimal amount of biasing in real-world scenarios." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "# High-level questions\n1. Section 2.2. There is a notion of \"task\" in linear regression and fact memorization problems. What is a task in the sparse parity problem?\n2. Section 2.2. Why study 3-layer MLP here but transformers for the other two problems?\n3. Line 165. What is the $i^th$ token $d_i$? If these are the elements $s, r, n_1, ..., n_{k_noise}$, it doesn't make sense to have the transformer predict noise token $n_t$ given $s, r, n_1, ..., n_{t-1}$ since the noise tokens are drawn independently. If I'm reading this wrong, what is the tokenization procedure here?\n\n# Low-level questions\n1. Line 32: \"followed by abrupt learning to low loss\". What does this question mean? Are the authors pointing to the \"grokking\" line of work [1]?\n2. Line 93. Typo: $x$ is in $\\mathbb{R}^d$ but $(x, y)$ is not.\n3. Equation 3. What is $\\tau$?\n4. Line 132. What is $\\mathcal{D}$ here? Uniform distribution over the $d$-dimensional hypercube?\n5. Line 162. What is the ICL-LR setting? It is likely the 1st problem, but this abbreviation is introduced here for the first time.\n\n[1] Power, A., Burda, Y., Edwards, H., Babuschkin, I., & Misra, V. (2022). Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents experiments on different setups, involving both transformers and MLPs that are used in practice. There has been recent interest in using transformers/MLPs to learn sparse parities [1, 2] and in-context linear regression [3, 4], which the paper studies. The general problem in the paper: of accelerating training is of interest to the ML community. The paper takes approaches this problem from the lens of data curation goes back to Curriculum Learning [5].\n\n[1] Barak, B., Edelman, B., Goel, S., Kakade, S., Malach, E., & Zhang, C. (2022). Hidden progress in deep learning: Sgd learns parities near the computational limit. Advances in Neural Information Processing Systems, 35, 21750-21764.\n\n[2] Edelman, B. L., Goel, S., Kakade, S., & Zhang, C. (2022, June). Inductive biases and variable creation in self-attention mechanisms. In International Conference on Machine Learning (pp. 5793-5831). PMLR.\n\n[3] Garg, S., Tsipras, D., Liang, P. S., & Valiant, G. (2022). What can transformers learn in-context? a case study of simple function classes. Advances in Neural Information Processing Systems, 35, 30583-30598.\n\n[4] Zhang, R., Frei, S., & Bartlett, P. L. (2023). Trained transformers learn linear models in-context. arXiv preprint arXiv:2306.09927.\n\n[5] Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009, June). Curriculum learning. 
In Proceedings of the 26th annual international conference on machine learning (pp. 41-48)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper demonstrates with experiments that lowering training data diversity accelerates training. In 3 experiment setups with synthetic data---in-context linear regression in transformers, learning sparse parity functions with MLP, and fact memorization in transformers---the authors argue that a variety of interventions can accelerate training (e.g. interventions on the gradient updates or on the training dataset)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "# Motivation\nThe paper aims to accelerate training for the sake of optimization only, but that is not the central goal in learning. In fact, research in ML optimization techniques serves to fulfill the **central goal of generalizing to the test distribution**. Moreover, the paper frequently finds in its experiments that faster training (using the methods they present) often leads to worse generalization, so I doubt the benefits of the proposed methods.\n\n**In fact, the authors mention this exact severe limitation in line 52.** I'd find insights into generalization more helpful than just focusing on training optimization.\n\n## On reducing data diversity to optimize faster\nIt is not shown that accelerating training by reducing data diversity leads to increased generalization to the test distribution. I think that would be a good argument for \"reduce data diversity -> thus train faster -> thus generalize to test distribution better\". As an example, consider the setting where we want to learn the target function $f(x) = x^2$ given a finite number of training samples $x_1, ..., x_n$. We can forget about the target (test distribution) and instead sample from the function $g(x) = 1$, _just for the sake of low data diversity_. Optimizing on samples to learn $g$ is faster, but this is far from the goal of learning the target $f$.\n\nOn another note, decreasing the number of training tasks/samples to reduce training data diversity should intuitively lead to overfitting. Figure 4 shows this exact phenomenon, so I'm not convinced that accelerated training with the paper's data curation methods is actually helping in learning.\n\n# Techniques\nIn Section 3, the paper argues that biasing batch gradients can accelerate training. Isn't this technique the same as SGD with momentum or an adaptive gradient method like AdaGrad or Adam? I'm not sure this is a novel contribution.\n\nFinally, the paper does not conduct data-diversity experiments on real-world problems such as image classification (even on well-studied datasets like CIFAR or MNIST) and language generation. I appreciate that the authors mention this limitation in Sections 6 and 7, but I believe the current version of the manuscript is severely lacking with just synthetic data experiments.\n\n# Writing\nThe paper needs numerous citations in the introduction to substantiate claims. Several typos and clarifications are needed:\n1. Citations for lines 41-44.\n2. Line 96. It's verbose to call the task $f_w$ when we can simply write $w^T x + \epsilon$ as used in Line 93. Lines 96-105 can be written in 3-5 lines at most.\n3. Line 297. The paper conflates \"faster optimization\" with \"learning\". 
The model trains faster, but does not generalize (or \"learn\") better or faster.\n\nMoreover, the statement \"low data diversity accelerates training\" is repeated throughout the paper (paraphrased in many ways), making the paper more verbose than necessary. I understand that it is the paper's main argument, but it is overly used in my view." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. There is a wide body of work on generalization through a theoretical lens, typically taking advantage of both model complexity and train/test distribution similarity. How does this work fit into this literature?\n\n2. Does this happen in more conventional language modeling (or non language modeling tasks)?\n\n3. Why does this effect happen at all? I don't see any justification of this phenomenon, only a few demonstrations of it." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The authors take a stance that, to my knowledge, has not been studied before in machine learning. If this were a well understood, real phenomenon, I think it could be impactful." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper argues that convergence using SGD can be more efficiently obtained, in some cases, by using a training data distribution that has less in common with the test distribution. This argument is of course counterintuitive, and it disagrees with a wide body of fundamental research in the field around how an ideal training dataset is one that has more in common with the general distribution, not less." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Overall I found this paper’s argument unconvincing, as they experiment only on three synthetic settings and provide no arguments as to why this effect might occur. Of course, this effect cannot be shown forever, and eventually this skewing of the training distribution will have a negative effect on test performance. When and why does this happen, and what can be gleaned from this work about more conventional model training? \n\nAs the authors write themselves in the limitations section, they do not have an understanding of when or why this occurs, and do very limited evaluation and show preliminary results. I realize this section was written to get ahead of criticism they anticipate to this effect, but the work seems very incomplete with these questions left unanswered. The future work in the discussion section, for example, is very much in-scope, and should be part of the paper the authors are writing.\n\nAt this time, I feel that the presented investigation is not thorough enough to meet the bar for publication in ICLR." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Your approach and optimization algorithms can achieve the same goal, so how would you convince others to use your method instead of an optimization algorithm? \n\nWould combining your approach with other optimization algorithms lead to more significant performance improvements?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper presents a clear experimental setup, demonstrating how intentional data biases can speed up learning in synthetic tasks.\n\nThe use of visuals makes its findings easier to understand.\n\nThis research is useful in situations where training efficiency is a priority." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper examines methods to accelerate training by reducing data diversity, focusing on synthetic tasks like in-context linear regression, sparse parity, and fact memorization. Traditional approaches typically improve training by adjusting optimization algorithms to mitigate plateaus; however, this study found that simple interventions in data sampling, such as reducing task diversity or sampling with non-uniform distributions, can achieve similar benefits. These findings provide insights into data filtering and curriculum learning approaches, suggesting that less diverse but strategically chosen training data could enhance model efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The application scope is narrow to meet real-world needs, and it lacks an explanation for the observed phenomena." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Reducing data diversity can speed up training in a variety of synthetic settings." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024lowering,\ntitle={Lowering Data Diversity can Accelerate Training: Case Studies in Synthetic Tasks},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xlxDTVAbNM},\nnote={under review}\n}" }, "abstract": { "value": "We identify a loss plateau at the start of training in the three synthetic settings of in-context linear regression, sparse parity, and fact memorization. While careful tweaks to the optimization algorithm can mitigate these plateaus, we find that a simpler orthogonal approach of *lowering the data diversity*, and in doing so, biasing the training distribution *away* from the test distribution, counter-intuitively also speeds up training. This connection between data diversity and training speed holds for three different diversity-*reducing* interventions across our varied synthetic settings. 
Our findings offer a new perspective on data filtering and curriculum learning for training machine learning models." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "synthetic tasks", "data diversity", "curriculum learning", "data filtering", "learning plateaus", "batch gradients" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e87ef6894e9f1c1925402adf3cbd2645006bab84.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Lowering Data Diversity can Accelerate Training: Case Studies in Synthetic Tasks" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xlxGsX1pc7
U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs
main
Active
Large Language Models (LLMs);Mathematical Reasoning;Benchmarking;University-Level Mathematics;Multimodal;Automatic Evaluation;Solution Assessment
datasets and benchmarks
5;5;5;6
4;3;4;3
2;2;2;2
2;2;2;3
2;2;3;3
5.25
3.5
2
2.25
2.5
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.It is recommended to expand both the U-MATH datasets size and the number of subjects.\n\n2.It is recommended to expand the µ-MATH datasets size.\n\n3.In Table 4, you only use accuracy to present the results. Since the study involves math problems, which are more complex than simple classification tasks, could you consider adding additional evaluation metrics like perplexity or WinoGrande ACC (to assess whether ambiguous problems are correctly identified)? This would give readers a clearer picture of how well the models truly understand and respond to university-level math questions. For more details, you might refer to examples in this paper: https://proceedings.mlr.press/v235/dao24a.html." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is well-organized, providing a clear outline of the datasets, experimental setup, and evaluation metrics. The authors explain each component in a structured manner, making it accessible to readers.\n\n2. The datasets include a range of mathematical subjects and problem types, which reflects an effort to cover diverse aspects of mathematical reasoning, though the depth and breadth could still be improved.\n\n3. The introduction of U-MATH and µ-MATH provides additional benchmarks for evaluating LLMs in mathematical tasks, which may offer a reference point for similar studies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the U-Math datasets, based on university-level mathematics, addressing the issues of insufficient thematic diversity and a lack of visual information question types in current datasets for evaluating the mathematical abilities of large language models. The U-Math datasets was tested on several large language models, revealing that the highest accuracy for text-based tasks was only 53%, while the highest accuracy for visual tasks was only 30%." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the U-MATH datasets consists of 1,125 samples and covers six subjects, the sample size is still too small. Evaluating the mathematical abilities of large models using a limited amount of data is not sufficiently convincing.\n\n2. Although the 340 samples in the µ-MATH datasets have been carefully selected to provide a challenging test, a larger sample size could enhance the representativeness of the evaluation, especially across different topics and problem types." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What measures have been taken to mitigate potential biases introduced by using LLMs as judges for solution correctness?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "S1. The inclusion of university-level problems offers a significant advancement over existing datasets that mainly focus on elementary or high school-level tasks.\n\nS2: By integrating visual tasks alongside traditional textual ones, the dataset challenges LLMs to interpret and reason across multimodal formats.\n\nS3: µ-MATH introduces a novel approach to evaluate LLMs' ability to assess solutions, addressing biases and limitations in current evaluation practices." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces U-MATH, a comprehensive university-level mathematical benchmark designed to evaluate the performance of Large Language Models (LLMs) in solving advanced mathematical problems. The dataset consists of 1,125 problems sourced from university coursework, covering six core topics such as Algebra, Calculus (Differential and Integral), Multivariable Calculus, Sequences, and Series, with approximately 20% of the tasks involving visual components. To complement the U-MATH dataset, the authors also present µ-MATH, a meta-evaluation set for assessing the accuracy and reliability of LLM-based evaluators." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: The reliance on LLMs as judges (e.g., GPT-4o) to evaluate free-form answers could introduce biases and inconsistencies, particularly since LLMs may struggle with complex derivations or nuanced interpretations of mathematical expressions.\n\nW2: The µ-MATH set includes LLM-generated solutions, which may limit the diversity and challenge of evaluation due to inherent model tendencies or training biases. This could result in less rigorous meta-evaluation as models may overfit to known patterns or heuristics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I don't have further questions." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.U-MATH Benchmark: This is a publicly available dataset of university-level math problems, covering six topics: Pre-Calculus, Algebra, Differential Calculus, Integral Calculus, Multivariable Calculus, and Sequences & Series. A unique aspect of this dataset is its inclusion of open-ended questions that require LLMs to perform multi-step reasoning.\n2.µ-MATH Meta-Evaluation Benchmark: This benchmark is specifically designed to test LLMs’ ability to assess the correctness of mathematical solutions. It contains 340 questions selected from U-MATH, accompanied by LLM-generated answers manually labeled as correct or incorrect, aimed at evaluating the capacity of LLMs to act as “judges.”\n3.Model Comparison: The paper compares the performance of various LLMs, including general-purpose models, specialized math models, and multimodal models, demonstrating the significant challenges LLMs still face in both text and visual tasks. For instance, the highest accuracy for text-based questions is 53%, while performance on visual questions is even lower, with an accuracy of only 30%.\n4.Challenges for LLMs as Math Judges: LLMs perform poorly when evaluating mathematical solutions, with the best-performing LLM judge achieving an F1 score of only 76% on µ-MATH, indicating that there is still room for improvement in this task." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduced a new benchmark dataset, U-MATH, designed to evaluate large language models (LLMs) on university-level math problems. The proposed U-MATH benchmark includes 1,125 college-level math problems collected from real educational materials, covering six core mathematical subjects, with 20% of the problems involving image understanding. Additionally, the paper introduces a meta-evaluation dataset named µ-MATH, aimed at assessing the ability of LLMs to judge the correctness of mathematical solutions." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The U-MATH dataset introduced in the paper supplements the current math datasets by addressing college-level gaps, while the µ-MATH meta-evaluation dataset enables assessment of large models’ ability to evaluate college-level math solutions. However, aside from knowing that this training set focuses on university mathematics and includes six subjects, we lack information about the dataset’s question diversity, difficulty, reasoning steps required to solve the problems, and other aspects. Additionally, the dataset’s size may be insufficient.\n2.The paper mentions that the dataset has been released but does not provide an access link, so I have no direct way to review the dataset.\n3.The experiments in the paper provide valuable insights into the capabilities of current text-based and multimodal LLMs in solving university-level math problems.\n4.The paper states that U-MATH aims to promote further research and improve LLMs' ability to handle complex math problems. How is \"complex\" defined here? Does it refer to higher-grade, more challenging (for humans) knowledge, or does it mean problems requiring more and deeper reasoning steps?" 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why are there no examples of problems that require visual input?\n\nThe accuracy when using LLM as a judge is not provided, especially for higher mathematics problems where answers may be in different forms but are actually equivalent, indicating that it is easier to make mistakes compared to comparing a single form of answer." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper demonstrates a high-quality collection of problems that are well-balanced across six core mathematical subjects. This ensures a comprehensive evaluation of LLMs across different areas of mathematics.\nThe problems sourced from actual teaching materials add a layer of authenticity and practical relevance to the benchmark, ensuring that the skills assessed are applicable to real-world academic standards.\nThe creation of µ-MATH for meta-evaluation is an innovative approach to assessing the ability of LLMs to evaluate mathematical solutions. This adds another layer of complexity and originality to the benchmarking process, focusing not just on problem-solving but also on the assessment capabilities of the models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces U-MATH, a novel benchmark designed to evaluate the mathematical reasoning capabilities of Large Language Models (LLMs) at the university level. It comprises 1,125 unpublished, open-ended problems sourced from actual teaching materials, balanced across six core mathematical subjects, with 20% of the problems requiring image understanding. Additionally, the paper presents µ-MATH, a meta-evaluation dataset aimed at assessing the ability of LLMs to evaluate free-form mathematical solutions. The experiments conducted reveal significant challenges in advanced mathematical reasoning and visual problem-solving, with the best-performing models achieving only 53% accuracy on text-based tasks and 30% on visual problems. The paper also highlights the difficulty LLMs face in assessing solutions, with the highest µ-MATH F1-score being 76%, indicating room for improvement in LLMs’ evaluation capabilities. The datasets and evaluation code are open-sourced to facilitate further research." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While the inclusion of visual elements in 20% of the problems is a step forward, the remaining 80% are text-based. 
The paper could benefit from expanding the visual problem set to better assess and train LLMs in multimodal mathematical reasoning, which is increasingly important for real-world applications.\n\nThe paper focuses on university-level mathematics, but it is unclear how well the findings generalize to other levels or types of mathematical reasoning. Future work could explore the transferability of the models trained on U-MATH to other mathematical domains." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "U-MATH, a challenging university-level math benchmark with both textual and visual problems, and additional μ-MATH benchmark to evaluate solution assessment capabilities." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024umath,\ntitle={U-{MATH}: A University-Level Benchmark for Evaluating Mathematical Skills in {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xlxGsX1pc7},\nnote={under review}\n}" }, "abstract": { "value": "The current evaluation of mathematical skills in LLMs is limited, as existing benchmarks are relatively small, primarily focus on elementary and high-school problems, or lack diversity in topics. Additionally, the inclusion of visual elements in tasks remains largely under-explored. \n \nTo address these gaps, we introduce **U-MATH**, a novel benchmark of 1,125 unpublished open-ended university-level problems sourced from teaching materials. It is balanced across six core subjects, with 20\\% of problems requiring image understanding. Given the open-ended nature of U-MATH problems, we employ an LLM to judge the correctness of generated solutions. To this end, we release **$\\boldsymbol\\mu$-MATH**, an additional dataset to evaluate the LLMs' capabilities in assessing solutions.\n\nThe evaluation of general domain, math-specific, and multimodal LLMs highlights the challenges presented by U-MATH. Our findings reveal that LLMs achieve a maximum accuracy of only 53\\% on text-based tasks, with even lower 30\\% on visual problems. The solution assessment proves challenging for LLMs, with the best LLM judge having an F1-score of 76\\% on $\\mu$-MATH.\n \nWe open-source U-MATH, $\\mu$-MATH, and evaluation code on GitHub." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models (LLMs)", "Mathematical Reasoning", "Benchmarking", "University-Level Mathematics", "Multimodal", "Automatic Evaluation", "Solution Assessment" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5b892a9eaff28812a984c7732d30f2fb644b7533.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xmgvF0sLIn
Elucidating the Design Space of Text-to-Audio Models
main
Active
audio generation;text-to-audio;synthetic data;diffusion;flow matching
applications to computer vision, audio, language, and other modalities
3;5;5;6
4;5;4;5
2;4;3;4
2;3;2;4
3;4;3;4
4.75
4.5
3.25
2.75
3.5
0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. “We initialize the final projection layer of DiT to output zeros.” Do you mean zero-initialize all the weight and bias in the projection layer? Or do you use gated operation? More explanations are welcomed.\n2. Please consider adding the result of Make-an-Audio and AudioLDM in Table 2 and Table 3, as they are also very important baselines to compare with.\n3. In Table 6, the ETTA trained on AudioCaps shows 3.00 FAD on the audiocaps evaluation set. This is significantly lower than the current state-of-the-art. It would be helpful if the author could explain this result.\n4. Line 510: Please say more about the loss divergence issue? It seems contradictory as the evaluation metrics still look normal. More explanations would be helpful." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The author proposed a synthetic data generation pipeline and created the first million-size dataset with high audio-text correlation.\n2. The author has implemented a comprehensive list of objective evaluation metrics and shows interesting results during comparison.\n3. The author mentioned they will open-source their code for reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper empirically studies the way to improve the current text-to-audio generation system, including creating a new synthetic audio-caption paired dataset, an improved architecture, and other related settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As there are a large number of evaluation metrics used. It would be helpful to explain what each evaluation metrics focus on. For example, there are three types of Frechet Distance - what are the differences?\n2. It is not clear how ETTA perform without pretraining on the large-scale datasets. For example, what would the result look like if the ETTA model is trained on AudioCaps from scratch (I assume this is a common setup)?\n3. Lack of subjective evaluation. As the author mentioned, there is not yet an effective objective evaluation metric for the TTA task. Out of the seven main conclusions in the paper, only one is backed by the subjective evaluation. This can make the conclusion less convincing. \n4. The architectural improvement seems a bit marginal. The improvement of ETTA seems to come from the new synthetic dataset mostly." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses#2." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The contribution of the dataset: Experiments demonstrate that AF-Synthetic significantly improves performance.\n2. It provides a practical guide for hyperparameter tuning in the field of TTA.\n3. Each experimental conclusion is highlighted with a purple-bordered box, which makes the paper very reader-friendly and pleasant to navigate." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The primary focus of this paper is not to explore novel model designs, but rather to provide a comprehensive understanding of the current paradigms in TTA models. It seeks to identify critical factors that contribute to performance improvements and to evaluate scalability concerning data and model size. Additionally, the paper introduces AF-Synthetic, the first million-size synthetic caption dataset with strong audio correlations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As the authors themselves mentioned, the paper doesn't propose a novel method. Moreover, as a paper based on extensive experiments, while it offers valuable conclusions, it lacks an innovative insight or discovery that stands out. Therefore, I believe this paper might be a better fit for the Dataset & Benchmark track. \n2. I would also suggest that the authors include the following three points: \n - Could the authors further analyze the \"potentially mode-collapsed mode\" part? In theory, the FD metric should be able to measure sample diversity. \n - Since the authors have already tried logit-normal *t-sampling*, could they also test the *Min-SNR Weighting Strategy*? \n - Regarding the Auto-guidance section, it might be worth experimenting with removing CFG (Classifier-Free Guidance) for the first 40% of steps and then adding CFG for the remaining 60% of steps to see if this improves the FD score." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What can be the reason behind mode-collapsed models consistently produce good scores across multiple metrics, while the backbone for these metrics are different?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The work aims to push the performance of diffusion-based text-to-audio models and show better results than open source models. The improvements come from a better dataset and an extensive evaluation of design/hyperparameter choices built on top of SOTA models. I appreciate the amount of work put into creating the dataset and try many different hyperparameters and design choices, and the commitment to open source the code that appears to push the field ahead." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on pushing the performance of diffusion-based text-to-audio models by introducing a new synthetic dataset and revisiting design choices. \n- New synthetic dataset (AF-Synthetic): The authors improve the pipeline from AF-AudioSet to generate 1.35M captions using audio samples from multiple datasets. The result is a large dataset with high CLAP score, while existing datasets are either small or have simple to no filtering method. The authors also point out that AF-Synthetic captions are different than existing datasets.\n- The model is built upon Audio-VAE (from stable-audio-tools), Diffusion Transfromer (Peebles & Xie, 2023) with some changes in the architecture.\n- The authors compare across datasets and metrics to discover the optimal choice for the number of function evaluations and classifier-free guidance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Since this is a purely empirical paper, my main concerns are mostly about evaluation and results.\n\n- Since the dataset is curated using CLAP scores, I find the CLAP score not a reliable indicator, while FAD and KL use old models to extract features.\n- There are no subjective scores in the results except for Table 9 which compares the final model with others.\n- The improvements in ETTA-DiT, OT-CFM and t-sampling are not consistent across metrics.\n- Using AF-AudioSet and AF-Synthetic yields similar results in AudioCaps and MusicCaps despite much larger size (Table 6-7)\n\nOverall, I find that a few conclusions in the paper are not very helpful (e.g. increasing model size improves the performance), especially when we have inconsistent metrics and lack of subjective scores. While it's good to see many hyperparameters being assessed, the contribution is lack of novelty since it does not propose new techniques or reveal surprising findings. The lack of reliable metrics, which authors also admit in the final section, also weaken claims and conclusions." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethic concerns" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. For the comparison of the proposed AF-Synthetic dataset, the paper indicates that both AF-AudioSet and Sound-VECaps have around 161K samples remaining after filtering with a CLAP score of 0.45. Since both datasets are primarily developed from AudioSet, is there a significant overlap in the subset that achieves a higher CLAP score? If so, could this raise concerns regarding the reliability of using the CLAP score as a filtering metric?\n2. What are the key differences between AF-AudioSet and the proposed AF-Synthetic, apart from the fact that the latter generates ten captions per sample and selects the one with the highest CLAP score?\n3. What distinguishes the AdaLN layer in the proposed ETTA-DiT structure from the one used in the original DiT model mentioned in a previous section?\n4. How does the ETTA model perform with improvements limited to the ETTA-DiT component (e.g., when trained solely on AudioCaps)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper introduces a novel filtering pipeline to select the best captions using the CLAP score, developing one of the largest high-quality audio-language datasets. The proposed ETTA generation model is trained across a wide range of diverse audio datasets and achieves competitive scores on both AudioCaps and MusicCaps, demonstrating state-of-the-art performance in both text-to-audio and text-to-music tasks.\n\nExternal ablation studies investigate the system's performance across different model sizes and training/sampling strategies. Subjective evaluations highlight significant improvements in the proposed ETTA system over baseline models, proving its enhanced ability to generate complex and imaginative captions.\n\nOverall, this paper presents interesting work in the field of audio generation. The authors first introduce a large-scale dataset and then present a state-of-the-art generation system trained using this dataset. Various experiments demonstrate the effectiveness of different methods or modules within the system, concluding with an analysis of the ETTA system's limitations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a 1.35M high-quality audio-caption dataset utilizing the state-of-the-art (SoTA) audio-language model, Audio Flamingo, referred to as AF-Synthetic. Additionally, the paper introduces a text-to-audio system based on the Latent Diffusion Model (LDM) with a Diffusion Transformer (DiT) backbone. Experimental results demonstrate that the proposed Elucidated Text-To-Audio (ETTA) system achieves SoTA performance across multiple metrics on both the AudioCaps and MusicCaps datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors claim the effectiveness of the proposed dataset by demonstrating the SoTA performance of the ETTA system trained on AF-Synthetic. However, the paper lacks sufficient experiments directly comparing the dataset's contribution, such as showing how models like Tango or AudioLDM perform when trained with AF-Synthetic.\n2. All the metrics used for evaluating the text-to-audio system are objective, which is generally sufficient but may not always reflect real-world performance. Therefore, including subjective evaluations like the Mean Opinion Score (MOS) would be beneficial, particularly for comparing models that achieve top performance across some metrics.\n3. The authors attempt to compare the performance between different baseline models. While ablation studies already demonstrate the effectiveness of the proposed ETTA strategies, the paper lacks experiments where the proposed model is trained on the same dataset (e.g., AudioCaps) to clearly illustrate the improvements contributed solely by the system itself.\n4. The ETTA model mainly builds on the backbone of the Stable Audio system. Beyond the training and sampling strategies (Flow Matching and ODE solvers, which are standard improvements in current LDM systems), the key enhancement appears to be the use of Adaptive Layer Normalization (AdaLN) within the DiT structure and the T5-base model for text embedding. These techniques are already implemented in the original DiT paper [1] and other baseline models. As a result, the model seems more like an engineering application with limited novel contributions.\n\nOverall, this is an interesting paper with great effort, I am willing to change the score if the author can solve the questions and fulfil the following experiments:\n1. Train baseline models like Tango or AudioLDM on AF-Synthetic and compare to their original performance.\n2. Train ETTA on other datasets like AudioCaps or TangoPromptBank and compare performance to when trained on AF-Synthetic. These experiments would help separate the contributions of the model architecture from the dataset.\n3. Develop human evaluations on a subset of generated samples, comparing ETTA to top baseline models, such as Mean Opinion Score (MOS) or other equivalent subjective ratings on audio quality, text relevance and so on.\n4. More clearly articulate the novel aspects of their model architecture compared to existing work like DiT and Stable Audio. \n\n[1]. Peebles, William, and Saining Xie. \"Scalable diffusion models with transformers.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We elucidate the design space of text-to-audio and present ETTA with state-of-the-art result and improved abilities to generate creative audio." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024elucidating,\ntitle={Elucidating the Design Space of Text-to-Audio Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xmgvF0sLIn},\nnote={under review}\n}" }, "abstract": { "value": "Recent years have seen significant progress in Text-To-Audio (TTA) synthesis, enabling users to enrich their creative workflows with synthetic audio generated from natural language prompts. 
Despite this progress, the effects of data, model architecture, training objective functions, and sampling strategies on target benchmarks are not well understood. With the purpose of providing a holistic understanding of the design space of TTA models, we setup a large-scale empirical experiment focused on diffusion and flow matching models. Our contributions include: 1) AF-Synthetic, a large dataset of high quality synthetic captions obtained from an audio understanding model; 2) a systematic comparison of different architectural, training, and inference design choices for TTA models; 3) an analysis of sampling methods and their Pareto curves with respect to generation quality and inference speed. We leverage the knowledge obtained from this extensive analysis to propose our best model dubbed Elucidated Text-To-Audio (ETTA). When evaluated on AudioCaps and MusicCaps, ETTA provides improvements over the baselines trained on publicly available data, while being competitive with models trained on proprietary data. Finally, we show ETTA's improved ability to generate creative audio following complex and imaginative captions — a task that is more challenging than current benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "audio generation", "text-to-audio", "synthetic data", "diffusion", "flow matching" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/79f2fd64cae74742cebec0806882a9b061294cb8.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Elucidating the Design Space of Text-to-Audio Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xnF2U0ro7b
Feature-Based Online Bilateral Trade
main
Active
bilateral trade;online learning;contextual bandits
reinforcement learning
6;6;8;8
3;4;3;3
2;3;4;4
2;3;3;3
3;3;3;2
7
3.25
3.25
2.75
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) I am wondering why the stronger bounds from Contextual Search (Liu et al) were not used in place of the Feature Based Pricing. It seems many of the ideas would carry over and you would achieve improved regret guarantees.\n\n2) It would be useful to know what new ideas are introduced for noisy setting and how much was already known in other settings." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The authors propose a very reasonable contextual model of online bilateral trade. I find the new model to be well motivated and combines two natural areas of study namely bilateral trade and online contextual regret minimization. The algorithms themselves seem interesting and are fairly natural. \n\nThe reduction from the two bit strong budget balanced case to the one bit global budget balanced case is perhaps the most interesting to me. Essentially it is a general recipe where by one can exploit explore-or-commit algorithms and perform the explorations in such a way that we can always get feedback about either the buyer or seller. However, we may lose regret compared to other party, and thus we need to ensure that there is sufficient budget to do this. This is done by measuring the average profit the 2 bit algorithm can learn and then appropriately setting the parameters to balance out the findings." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper tackles the bilateral trade problem in an online setting where there is an additional context present. At each time step $t$, a buyer and a seller arrive with private values $b_t$ and costs $s_t$. The algorithm must provide a pair of prices $(p,q)$ such that a sale happens if the buyer's value $b_t$ is less than $q$ and the seller's value is above $p$. The Gain from this trade assuming a sale happens is $(b_t - s_t)$. The authors consider the problem of maximizing the gains from trade assuming either two bit feedback where we find out if $\\mathbb{1}[b_t \\leq q] $ and if $\\mathbb{1}[s_t \\geq p]$. They also consider the model where we have only one bit feedback where we know the product of these two bits of feedback. The original problem was already studied by Cesa-bianchi et al. This problem considers the setting where the buyer and seller have a hidden vector of preferences $\\theta_b, \\theta_s$ and their private values are generated from a shared context $b_t = x_t^T \\theta_b$ and $s_t = x_t^T \\theta_s$ . They consider where there may be some noise that is added as well as the budget balanced setting where the prices offered to both parties must be the same ($p=q$). \n\n\nThe main results:\n\n1) In the two feedback setting with strong budget balance $p=q$ at each time step when there is no noise in the setting. 
Here the authors use a natural modification of the feature-based toolification.\n2) In the two-bit feedback model with noise, where the noise is i.i.d., coming from distributions with bounded support and densities. Finally, they devise an algorithm following the explore-or-commit framework where the authors decompose the gain in terms \n3) They also study the one-bit feedback problem where you only find out if a sale happens or not. To get good bounds for this model, they assume they have a good regret bound for the strongly-balanced two-bit feedback setting and then use that in a black-box manner. However, the new bounds only have a global budget balance guarantee." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the algorithms are natural and interesting, I am unable to distinguish where the new ideas are and how much of the paper is applying known tools to a new setting. I would appreciate more explanation on what the new ideas are in both the two-bit setting and the one-bit setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you elaborate a bit more about the comparison with the related work, \"A contextual online learning theory of brokerage. arXiv preprint arXiv:2407.01566, 2024\"? The setting is very similar; however, it seems the valuations of the two traders in their paper share the same expected value.\n\n2. For the one-bit feedback model, if we want to maintain per-round budget balance, is it still learnable?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and analyzes a very interesting theoretical problem. The authors did a good job of describing the problem and how the algorithm handles the challenges. \n\nThe theoretical guarantee of the paper is sound. The authors provide a complete story for the setting with the two-bit feedback model. In addition, the reduction from one-bit to two-bit by sacrificing the budget balance constraint is very interesting and elegant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors investigate the online contextual bilateral trade problem, where the valuations of two traders are modeled by different (unknown) linear functions. The authors focus on two different feedback models: (1) the two-bit feedback model, where the learner can observe the binary feedback of both traders; (2) the one-bit feedback model, where the learner only learns the binary information of whether the trade happens or not. For (1), the authors propose an online learning algorithm to set the trading price at each round (that satisfies the strong budget balance constraint), which achieves $O(T^{2/3})$ regret. The authors also show a matching lower bound. 
For (2), the authors provide a reduction from the one-bit feedback model to the two-bit feedback model by relaxing per-round budget balance to global budget balance and show that the algorithm used in (1) can be applied to (2) and still achieves a sublinear regret guarantee." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "There is no matching lower bound for the one-bit feedback setting. I also have some questions regarding this setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. How do you check whether the strong budget balance condition holds or not?\n\n2. Could you please conduct numerical experiments to show the real performance? Also, what is the computational complexity of your algorithms?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper derives strong $O(\\log T)$ regret, though under stronger conditions.\n\n2. The paper derives an $O(T^{2/3})$ regret upper bound for their algorithm and shows that there exists a matching lower bound." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the bilateral trade model, which involves the challenge of enabling transactions between a seller and a buyer who both hold private valuations for the item. This paper considers specifically the online scenario, where at each step there is a fresh seller and buyer entering the system and the pricing decisions for both parties must be made immediately without prior knowledge of their valuations. The paper further restricts attention to the contextual setting where the private valuations for the seller and buyer are linear functions of a context. A two-bit feedback model is considered where, for both the seller and the buyer, it can be observed whether the posted price has exceeded their value or not. By further assuming a strong budget balance between the buyer and the seller, the paper is able to derive an $O(\\log T)$ regret. Without the budget balance conditions, the paper achieves an $O(T^{2/3})$ regret, which is minimax optimal. The paper further discusses the one-bit feedback and shows the potential to obtain a sub-linear regret." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The main results of the paper rely on the two-bit feedback setting, where both the seller and the buyer reveal to the decision maker whether they want to sell the product or buy the product. This is quite a strong condition, and the paper would benefit from a more detailed discussion on whether this condition holds in reality.\n\n2. Though the theoretical guarantee is provided, there are no numerical experiments in the paper showing the empirical performance. 
Also, the computational complexity of the proposed algorithms has not been discussed.\n\n3. The algorithmic idea and the proof technique mainly build upon the previous work of Cohen et al. (2020), and it has not been discussed which part of the proof is novel.\n\n4. The $O(\\log T)$ regret depends on some strong conditions that are hard to justify in practice.\n\n5. The paper is overall theoretical, and it is not clear how to apply their algorithm in practice." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Two-bit feedback:\n- Do we gain anything by removing the budget balance constraint in the two-bit feedback case (simplifying the algorithm maybe)?\n- Can the authors provide more intuition on how combining the ETC and Scouting strategies works? (maybe along the lines of: with $O(T^\\beta)$ exploration regret we reduce the 'Range of $\\Delta_t$' = $O(T^{-\\alpha})$ and then the Scouting results in $O(T^{2/3})$ regret)\n- As we do not rely on the exact reward feedback, will approximately linear reward functions work? \n\n\nOne-bit feedback:\n- In the one-bit feedback case, is the knowledge of $\\alpha$ essential? \n- Can the authors discuss if the $\\alpha$ dependency is a side effect of selecting the specific strategy of collecting the profit in the first phase? Can we improve/remove such dependency by adaptively collecting the budget or by leveraging an improved exploration strategy?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- This work initiates the feature-based online bilateral trade problem.\n- With budget balance and two-bit feedback, they establish a tight $O(T^{2/3})$ regret bound by combining Scouting and Explore-or-Commit strategies. \n- They extend the results to the one-bit feedback setup while maintaining the regret guarantees under the budget balance constraint." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors study the feature-based online bilateral trade problem. Here, the buyer's valuations are given by a linear function. The seller's valuation is similar. In each round, the buyer and seller see a new context that, along with their private parameter vector, determines the parameterized part of the reward. The noisy version of the problem adds an i.i.d. random variable (not necessarily zero mean) to the parameterized reward. The authors study the problem in both 2-bit and 1-bit feedback under strong and global budget balance constraints, respectively. For the deterministic version, they adopt the existing EllipsoidPricing policy and show a log(T) regret bound. For the noisy version, they propose an Explore-or-Commit algorithm that achieves an $O(T^{3/4})$ regret, which is further improved to $O(T^{2/3})$ (which matches the lower bound). 
Some tradeoff between budget and regret is established for 1-bit feedback case." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- See questions for more discussions around improving the paper, and my own curiosity." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024featurebased,\ntitle={Feature-Based Online Bilateral Trade},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xnF2U0ro7b},\nnote={under review}\n}" }, "abstract": { "value": "Bilateral trade models the problem of facilitating trades between a seller and a buyer having private valuations for the item being sold. In the online version of the problem, the learner faces a new seller and buyer at each time step, and has to post a price for each of the two parties without any knowledge of their valuations. We consider a scenario where, at each time step, before posting prices the learner observes a context vector containing information about the features of the item for sale. The valuations of both the seller and the buyer follow an unknown linear function of the context. In this setting, the learner could leverage previous transactions in an attempt to estimate private valuations. We characterize the regret regimes of different settings, taking as a baseline the best context-dependent prices in hindsight. First, in the setting in which the learner has two-bit feedback and strong budget balance constraints, we propose an algorithm with $O(\\log T)$ regret. Then, we study the same set-up with noisy valuations, providing a tight $\\widetilde O(T^{2/3})$ regret upper bound. Finally, we show that loosening budget balance constraints allows the learner to operate under more restrictive feedback. Specifically, we show how to address the one-bit, global budget balance setting through a reduction from the two-bit, strong budget balance setup. This established a fundamental trade-off between the quality of the feedback and the strictness of the budget constraints." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "bilateral trade", "online learning", "contextual bandits" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ab27327a82ec2f4bb196b7ad51c9d9b9723eeefa.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Feature-Based Online Bilateral Trade" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xnWikQRJBR
M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification
main
Active
Contrastive learning;multimodal learning;representation learning;multimodal classification
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;5;5;5
4;3;2;3
2;3;3;2
2;2;2;2
3;3;3;2
4.5
3
2.5
2
2.75
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see the Weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1.) M3CoL's use of mixup-based contrastive learning to capture shared relations in multimodal data, offering a new perspective on multimodal representation learning.\n2.) The theoretical analysis of M3CoL, including contrastive loss and the integration of unimodal and fusion modules, contributes to the theoretical understanding of multimodal learning.\n3.) The paper is well written, with clear explanations of the methodology, experiments, and results, making it accessible to readers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces M3CoL, a novel multimodal learning approach that leverages mixup contrastive learning to capture nuanced shared relations across modalities, going beyond traditional pairwise associations. The key contribution is a mixup-based contrastive loss function that aligns mixed samples from one modality with corresponding samples from others. The work highlights the importance of learning shared relations for robust multimodal learning and has implications for future research." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.) The paper does not deeply address how M3CoL scales with very large datasets, which could be a limitation given the increasing size of real-world datasets.\n2.) There's a potential risk of overfitting with mixup, especially in early training stages. More analysis on balancing generalization and overfitting would be valuable.\n3.)M3CoL's effectiveness relies heavily on the quality of mixed samples. Discussion on how data quality variations across modalities might affect performance is lacking." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could the authors kindly discuss the following related work ([6] being concurrrent): \n\n[4] Wang, Teng, et al. \"Vlmixer: Unpaired vision-language pre-training via cross-modal cutmix.\" International Conference on Machine Learning. PMLR, 2022.\n\n[5] Georgiou, Efthymios, Yannis Avrithis, and Alexandros Potamianos. 
\"PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis.\" arXiv preprint arXiv:2312.12334 (2023).\n\n[6] Bafghi, Reza Akbarian, et al. \"Mixing Natural and Synthetic Images for Robust Self-Supervised Representations.\" arXiv preprint arXiv:2406.12368 (2024)." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Clarity: The paper is well-structured, with clear explanations of the methodology, including detailed descriptions of the Mixup-based contrastive loss and the unimodal and fusion modules. \n\nSignificance: M3CoL advances multimodal classification by addressing the limitations of traditional contrastive methods, offering improved generalization across domains. Its contributions are valuable for future research in multimodal learning, especially nuanced multimodal relationships like medical datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces M3CoL to capture complex shared relationships in multimodal data by aligning mixed samples from one modality with corresponding samples from others. This method leverages a Mixup-based contrastive loss with controlled mixup factor, extending beyond typical pairwise associations. A SoftClip-based loss is also adopted to enable many-to-many relationships between the two modalities. M3CoL also incorporates a novel multimodal learning framework that integrates unimodal prediction modules and a fusion module to improve classification. Experimental results show that M3CoL outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, and achieves comparable performance on Food-101." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Originality: Incorporating Mixup in contrastive learning is not new [1-3], even in a multimodal setting ([4-6], see Questions.) The reviewer would truly appreciate the authors’ further discussions on [4-6].\n\nSignificance: datasets, especially the non-medical datasets, are relatively small. The effectiveness of the method is yet to be seen from larger, real-world datasets. Since this method is relatively straightforward, larger-scale experiments will improve the significance of the submission.\n\n[1] Zhao, Tianhao, et al. \"MixIR: Mixing Input and Representations for Contrastive Learning.\" IEEE Transactions on Neural Networks and Learning Systems (2024).\n\n[2] Liu, Zixuan, et al. \"ChiMera: Learning with noisy labels by contrasting mixed-up augmentations.\" arXiv preprint arXiv:2310.05183 (2023).\n\n[3] Bandara, Wele Gedara Chaminda, Celso M. De Melo, and Vishal M. Patel. \"Guarding Barlow Twins Against Overfitting with Mixed Samples.\" arXiv preprint arXiv:2312.02151 (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
The ACC for the Body section in N24News has not been provided.\n2. Samples from modality 1 (x_i^1,x_j^1) and modality 2 (x_i^2,x_k^2), along with their respective mixed data, are fed into encoders to generate embeddings. How were samples j and k selected, and why can’t they both be j?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.\tM3CoL uses a smart technique to find and learn common patterns across different data types. It’s like having a tool that can spot similarities in things that might not look alike, making it good at understanding complex data relationships.\n2.\tThe experiments and analysis are extensive, involving multiple datasets with various types of data and analyses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This study introduces M3CoL, a deep multimodal learning method for capturing complex relationships in real-world data. M3CoL captures shared multimodal relationships by employing a contrastive loss based on mixed samples and introduces a fusion module for multimodal classification tasks to provide supplementary supervision." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reasons for the training sample selection strategy are not explained, and some experimental results are incomplete." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses for related questions." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Innovative way to perform contrastive learning: The use of Mixup in a contrastive learning setting for multimodal data is quite novel and is experimentally shown to have positive effects.\n- Experiments regarding attention maps between text and image regions provide a good illustration of the effectiveness of the alignment process." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces M3CoL (Multimodal Mixup Contrastive Learning), a method aimed at capturing shared, non-pairwise relationships within multimodal data. The framework includes a mixup-based contrastive loss to align mixed samples across modalities, facilitating more robust representations for multimodal classification tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **The motivation of the manuscript is not strong**. The process of aligning Positive couplets and Negative couplets in a pairwise manner does not necessarily ignore the shared relational information that exists between samples. There are lines of contrastive learning work (e.g. 
[1]) which align representations of sample within the same class together. Why does the mixup can better improve the performance compared to these approaches?\n- **The rationale of using MixUp technique is not well stated**. Is there any reason behind the choice of MixUp as a way to combine samples? Additional ablation studies can be provided to strengthen the choice empirically.\n- Beside the idea of MixUp contrastive learning strategy, **the rationale of applying unimodal downstream loss** is also short of explanation. While it does show improvement via Ablation study, why is it the case that it can indeed help the overall system?\n\n[1] Zhang, Shu, et al. \"Use all the labels: A hierarchical multi-label contrastive learning framework.\" *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce a novel Mixup-based contrastive learning method to capture shared relations inherent in real-world multimodal data, improving SOTA multimodal classification performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024mcol,\ntitle={M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xnWikQRJBR},\nnote={under review}\n}" }, "abstract": { "value": "Deep multimodal learning has shown remarkable success by leveraging contrastive learning to capture explicit one-to-one relations across modalities. However, real-world data often exhibits shared relations beyond simple pairwise associations. We propose M3CoL, a Multimodal Mixup Contrastive Learning approach to capture nuanced shared relations inherent in multimodal data. Our key contribution is a Mixup-based contrastive loss that learns robust representations by aligning mixed samples from one modality with their corresponding samples from other modalities thereby capturing shared relations between them. For multimodal classification tasks, we introduce a framework that integrates a fusion module with unimodal prediction modules for auxiliary supervision during training, complemented by our proposed Mixup-based contrastive loss. Through extensive experiments on diverse datasets (N24News, ROSMAP, BRCA, and Food-101), we demonstrate that M3CoL effectively captures shared multimodal relations and generalizes across domains. It outperforms state-of-the-art methods on N24News, ROSMAP, and BRCA, while achieving comparable performance on Food-101. Our work highlights the significance of learning shared relations for robust multimodal learning, opening up promising avenues for future research." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Contrastive learning", "multimodal learning", "representation learning", "mutlimodal classification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/dac38f94862f5cae02c7c3a68b780a449efb89c9.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/7053615acf5266eeebda2f88a61381cd978175f8.zip" }, "title": { "value": "M3CoL: Harnessing Shared Relations via Multimodal Mixup Contrastive Learning for Multimodal Classification" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xnssGv9rpW
SymmCD: Symmetry-Preserving Crystal Generation with Diffusion Models
main
Active
Crystals;Symmetry;Materials;Diffusion;Generative Models;Equivariance
applications to physical sciences (physics, chemistry, biology, etc.)
6;8;8
3;3;4
3;3;3
2;3;4
4;3;3
7.333333
3.333333
3
3
3.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please consider the questions raised in the Weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "•\tThe manuscript's structure and clarity are excellent overall.\n\n•\tThe manuscript includes a well-written and comprehensive introduction, with a clear and well-developed motivation for the crystal generation problem as an application of diffusion models.\n\n•\tThe method is well-formalized and understandable even to non-experts in crystal generation.\n\n•\tExperimental tasks and evaluation: The authors assess their method and the baselines on relevant additional tasks, such as S.U.N. structure prediction and other proxy metrics, which highlight the proposed method's strengths." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The submission, \"Symmetry-Preserving Crystal Generation with Diffusion Models,\" proposes a method for generating single-crystal structures with precise symmetric properties. The authors use asymmetric units and site symmetry representation, followed by a diffusion model for generation. This method explicitly addresses the generation of crystals with respect to their symmetry group.\nThe method performs on par with existing approaches but has a lower computational footprint." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tIntroduction: From my perspective, the problem of generating symmetric crystals is closely related to other structure generation tasks in general representation learning. For instance, in biological applications, such as neuron structure generation or vascular structure generation, it would be beneficial if the authors discussed the relation to other domains in structure generation and the types of methods that have been developed. For example, I see certain similarities to diffusion methods in molecule generation [https://ieeexplore.ieee.org/abstract/document/10419041] or graph generation [https://arxiv.org/abs/2209.14734], which are partly mentioned in the methods since they are used; however, a discussion of how these applications relate to the context of representation learning would be valuable.\n\n•\tReproducibility: I did not find a link to an anonymous repository or source code in OpenReview, hindering the evaluation of reproducibility for this submission.\n\n\n•\tExperimentation: There are only minor performance gains (if any) compared to the state of the art. What are the practical uses of crystal symmetry generation in academia or industry? 
Is the computational gain truly relevant, considering the regular applications and scenarios in which crystal symmetry generation methods are used?\n\n•\tExperimentation: \"We withhold 20% of the dataset as a validation set, and 20% as a test set\" (Line 377). The experimental setup suggests that the authors do not use a form of cross-validation or cross-testing. Is there a specific reason for this choice? Given that the authors describe their computational efficiency as a strength, extensive cross-validation across experiments would seem reasonable.\n\n•\tExperimentation: Hyperparameter Selection (Section E.2). The authors briefly describe their final hyperparameters: “These hyperparameters were chosen using a sweep” (Line 919). Without code availability and the validation issues mentioned earlier, this appears to be a limited experimental description. What was the hyperparameter search space/budget? How were the hyperparameters for the four baselines tuned exactly? The results show very small differences in performance, so a fair description of hyperparameter search is crucial for reproducibility.\n\n\nI am not an expert in crystal generation and potentially some of my questions are atypical in the field, I am curious to hear the authors and other reviewer comments and willing to change my rating accordingly." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 2, why is CDVAE bold instead of SymmCD (10 SGs) for Validity Comp.? I understand an argument could be made that comparing the 10 SGs version to the other methods might not be entirely appropriate, but then I would consider dropping this version from tables 1 and 2, and only discuss it in the context of table 3, where the S.U.N. shines. Please clarify the logic behind inclusion of SymmCD (10 SGs) for Table 1 and Table 2.\n2. The evaluation presented in Table 3 involves random subsampling of 10% of the generated crystals, followed by two predictive models to evaluate stability and S.U.N. properties. At the same time, the SymmCD shows only a marginal improvement compared to DiffCSP and DiffCSP++. Are these results statistically significant? Please provide details on the robustness of this evaluation.\n3. Which model appears in the Table 4 as Conventional Unit Cell? Please provide a citation and clarify how this model was included in other comparisons as well.\n4. Is it possible to provide a link to the anonymized repository reproducing experiments?" 
}, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The developed method for vectorizing crystalline structures, which explicitly accounts for both the spatial symmetry of the crystal and the point symmetry of the orbits, is to my knowledge the first of its kind, therefore unique, and holds a great promise for application in crystal structure prediction (CSP) for both inorganic and organic crystals.\n- The article is well-structured and clearly conveys information, allowing individuals unfamiliar with this field to understand the crystallographic features of the problem with some investment of time.\n- I believe this work could be highlighted at the conference as a fine example of how rational design of vector representation can influence the overall effectiveness of the developed deep learning model." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present the generative model SymmCD, which allows for the generation of datasets of crystalline structures of non-molecular crystals while explicitly considering symmetry. The results obtained exhibit both high symmetry diversity and a significant percentage of thermodynamically stable structures, making SymmCD a solid choice for crystal structure prediction systems or virtual screening of crystalline materials." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The major weaknesses of the paper lie in the discussion of the obtained empirical results. Addressing these will significantly enhance the presentation of the work accomplished:\n\n1. Please indicate in the introduction that the initial focus is on non-molecular/inorganic crystals.\n\n2. Please mention in the conclusion remarks that your method of structural representation seems to be well-suited for molecular crystals as well. For the latter, the presence of intrinsic point symmetry and its interaction with the point symmetry of orbitals is one of the key factors determining the crystal structure.\n\n3. In your conclusions, when you state \"go beyond single crystals, and consider generating multi-component crystals and alloys,\" please clarify what you mean. \"Single crystal\" is a broad term contrasting with polycrystalline materials and does not directly relate to crystalline structure. A multi-component crystal refers to a crystal composed of multiple chemical substances; for instance, this includes pharmaceutical co-crystals. Clearly, your approach should be applicable to these systems.\n\n4. Please consider rewriting conclusions to emphasize advantages (applicability to molecular crystals, including co-crystals) rather than deficiencies (inapplicability to non-crystalline systems), but also, to provide a deeper discussion of the limitations of SymmCD along with practical implications for the actual industrial problems." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Since I am not a researcher in this field, I don’t know much about the specific background, so I am not very clear about the process shown in Figure 3. Figure 3 and the training pipeline section could benefit from additional annotations to improve readability for those unfamiliar with diffusion models in this context." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Innovation in Representation: The paper introduces a physically motivated representation based on crystallographic symmetry, using a binary matrix to encode symmetry, which addresses data fragmentation and enables generalization across symmetry groups.\n2. Computational Efficiency: By focusing on asymmetric units rather than full crystal structures, the model demonstrates significant improvements in memory usage and training speed, an aspect well-supported by experimental evidence.\n3. Diversity and Validity of Generated Structures: SymmCD shows impressive results in generating diverse, valid, and symmetry-conforming crystal structures across multiple symmetry groups, even those that are less common in training data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a novel diffusion-based generative model, SymmCD, for symmetry-preserving crystal generation. The proposed approach explicitly incorporates crystallographic symmetry into the generative process, using a unique representation that decomposes crystals into asymmetric units and symmetry transformations. This design enhances both computational efficiency and the diversity of generated crystal structures, addressing some limitations of existing models in terms of symmetry and structural validity.Overall, the paper presents a strong contribution to the field of crystal generation. By explicitly incorporating symmetry into a generative diffusion framework, SymmCD addresses critical limitations of prior methods and provides a promising tool for materials discovery." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Comprehensive Evaluation of Generated Crystal Properties: While the model’s ability to generate symmetric and diverse crystals is demonstrated, additional quantitative evaluations of properties such as thermodynamic and mechanical stability would further solidify the model’s applicability to real-world scenarios. Metrics that reflect physical applicability, such as structural stability under various conditions, could significantly strengthen the evaluation section.\n2.Efficiency on Larger Datasets: SymmCD’s efficient crystal representation is highlighted as a key advantage. 
However, a more comprehensive analysis of its computational efficiency on larger datasets, or under different hardware setups, could provide a more complete understanding of its scalability and practical utility in materials science applications.\n3.Clarification of the Binary Symmetry Encoding: The binary matrix representation for symmetry is an intriguing solution to data fragmentation, yet further explanation on why this approach outperforms traditional encodings in practical settings would be beneficial. Additional details in the architecture and experimental sections could clarify how the representation is effectively utilized in training.\n4.It may be helpful to provide a clearer explanation of the training algorithm, particularly in how the diffusion and denoising processes maintain symmetry." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024symmcd,\ntitle={Symm{CD}: Symmetry-Preserving Crystal Generation with Diffusion Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xnssGv9rpW},\nnote={under review}\n}" }, "abstract": { "value": "Generating novel crystalline materials has potential to lead to advancements in fields such as electronics, energy storage, and catalysis. The defining characteristic of crystals is their symmetry, which plays a central role in determining their physical properties. However, existing crystal generation methods either fail to generate materials that display the symmetries of real-world crystals, or simply replicate the symmetry information from examples in a database. To address this limitation, we propose SymmCD, a novel diffusion-based generative model that explicitly incorporates crystallographic symmetry into the generative process. We decompose crystals into two components and learn their joint distribution through diffusion: 1) the asymmetric unit, the smallest subset of the crystal which can generate the whole crystal through symmetry transformations, and; 2) the symmetry transformations needed to be applied to each atom in the asymmetric unit. We also use a novel and interpretable representation for these transformations, enabling generalization across different crystallographic symmetry groups. We showcase the competitive performance of SymmCD on a subset of the Materials Project, obtaining diverse and valid crystals with realistic symmetries and predicted properties." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Crystals", "Symmetry", "Materials", "Diffusion", "Generative Models", "Equivariance" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/f5e5567b698efbd38b904e2f61f1708573b9f77d.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SymmCD: Symmetry-Preserving Crystal Generation with Diffusion Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xoIeVdFO7U
Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning
main
Active
unsupervised learning;reinforcement learning;mutual information;successor feature
reinforcement learning
5;8;8;8
3;3;3;2
4;4;3;4
1;4;3;4
4;4;3;4
7.25
2.75
3.75
3
3.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Can the authors elaborate on the assumptions made in the theoretical analysis, and how they might affect the generalizability of the results? It is not clear from the main text if these assumptions are sound and/or restrictive in any significant way." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper provides a thorough theoretical analysis of METRA, reinterpreting it within the mutual information skill learning (MISL) framework. This helps demystify the method and connects it to well-established concepts like contrastive learning and information bottlenecks.\n\nThe presentation is clear and to the point. The writing is excellent, I did not find typos or mistakes.\n\nThe paper includes extensive empirical evaluations, comparing CSF with existing methods across various tasks. This robust experimental setup strengthens the validity of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper critiques a recent method (METRA), which optimizes a Wasserstein distance for skill learning, and argues that its benefits can be explained within the existing framework of mutual information skill learning (MISL). The authors propose a new MISL method called Contrastive Successor Features (CSF), which retains METRA's performance with fewer complexities (namely fewer hyperparameters but same performance).\nThe paper highlights connections between skill learning, contrastive representation learning, and successor features, and provides insights through ablation studies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I appreciate that the paper is presented as an improvement on METRA, I'd have enjoyed more a reading that was presenting a new method that is then shown to be equivalent to METRA under certain conditions.\n\nGiven that the presented method performs are par with METRA, it would also be nice to show where (if anywhere) one fails when the other succeeds. Perhaps partially observed MPDs, more interactive objects or discrete actions spaces would be key in identifying where exactly both methods stand." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Are there any theoretical insights supporting the assertion that parameterization is key for contrastive successor features? Did you experiment with different kernel functions?\n\n- Any insights or rough ideas on how to further scale the framework to more complicated cases, perhaps through large-scale pre-training or using foundation models to replace the representation learning component, would be beneficial to include in the main paper" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **[Technical soundness and novelty]** The technical soundness is robust; this work provides a thorough in-depth analysis of the METRA method, finding approximate equivalences with contrastive objectives and the information bottleneck. The analysis leads to a novel method that simplifies METRA, and I found no technical flaws; the method is both novel and solid.\n\n\n- **[Evaluation]** The empirical evaluation effectively validates the hypotheses and theoretical analysis, enhancing the overall persuasiveness of the work.\n\n- **[Presentation]** The presentation is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work makes two major contributions. First, it establishes an approximate equivalence that bridges the representation objectives of the state-of-the-art method METRA with contrastive loss, specifically similar to InfoNCE. It shows that the actor objective in METRA is equivalent to the information bottleneck of $I(S, S'; Z) - I(S, S'; \\phi(S') - \\phi(S))$ (lower bounded). Essentially, it uses mutual-information-based skill discovery to elucidate METRA. Building on the analytical framework, the authors propose contrastive successor features to simplify METRA, employing contrastive objectives for representation learning and successor features for policy learning. Results indicate that the proposed method is empirically competitive with METRA. \n\nOverall, I like the analytical framework that unifies METRA with the mutual-information-based skill discovery method, and the theoretical foundation appears solid. Additionally, the results support the hypotheses and propositions, suggesting that the proposed method is even more flexible than METRA. Given these strengths, I would recommend an accept in this initial review." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **[About performances]** A significant question arises especially in the Quadruped experiments, where performance still shows room for improvement compared to METRA. Given that the proposed framework has a similar objective function, fewer hyperparameters, and avoids complex min-max optimization, why does the empirical performance (or at least the rate of convergence) not exceed that of METRA? Any discussion on this would be beneficial.\n- **[About demonstrations]** It would be advantageous to include demonstrations or other forms of visualization for the skills learned, as I did not find this in the appendix code." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "The statement line 278 does not hold for all cited methods (like DIAYN), as the \"omitted\" is sometimes a useless constant. This is actually illustrated by Eq 2 and 3.\n\nI also find the argument that the method removes 5 hyper-parameters compared to METRA to be bad faith: \"(1) the ϵ slack variable, (2) the norm constraint value in Eq. 4, (3) the dual gradient descent learning rate, (4) the dual gradient descent optimizer, and (5) the choice of discrete or continuous skills z.\" The choice between discrete/continuous skills is actually a positive thing, the norm constraint value is not a hyper-parameter, and the Lagrange multiplier seems (as far as I understand) to use the same optimizer/learning rate as the rest of the parameters in the original paper. Could you clarify which hyper-parameter actually require tuning and how many hyper-parameters you introduce ?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "The paper is easy to follow and well-written, the analysis is sound and the experiments are relevant." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a new approach for learning different locomotor skills without supervision. The paper explains relation between SOTA method (METRA) with mutual information maximization between skills and state transitions, an objective shared by most related works. Following the analysis, it proposes modifications to METRA to make it explicitly maximize such a mutual information. Experiments show that the new approach matches METRA.\n\nOverall, the paper is easy to follow and well-written, the analysis is sound and the experiments are relevant. The main limitation is the novelty of the final approach, which is, in the end, very close to METRA and CIC. It also does not show strong improvements.\n\nI'm balanced but a lean towards borderline reject, I did not find sufficiently strong arguments justifying the proposed approach, compared to METRA (see Questions)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The final model is very close to previous work and do not present substantial improvements on the different environment compared to METRA. This is overall acknowledged by the authors." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Where did the \"fixed coefficient ξ = 5\" come from/does it have any interpretation? How sensitive is CSR to this hyperparameter?\n- There is a note that there are many possible parameterisations for the optimisation of a lower bound on I(S, S′; Z), but CSR (and METRA) use (ϕ(s′)−ϕ(s))⊤z; an ablation study shows that this is a crucial choice. Can the authors provide a further comment on the importance of the temporal distance metric?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper is incredibly well-written. The message/purpose is clear. There is an extensive literature survey, and the authors' discussion of the relevant material and methodology is informative. Unlike some papers that resort to mathematics unnecessarily, all components seem necessary, and are meaningfully explained in text. The authors perform a relatively large set of experiments to both show the performance of their algorithm, but also to back up other claims (such as the properties of representations learned). I also commend the authors for the extended information in the appendices, and also for providing code (I checked these briefly but not extensively)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a novel self-supervised skill learning algorithm in the RL setting. Their work is motivated by recent work (METRA) that suggests moving away from the typical MI setting, which they analyse, and show that it could be reinterpreted in the familiar MI setting. In doing so, they create a simplified version of METRA, CSF, which achieves the same performance as METRA. The authors combine both theoretical and empirical evidence to support their claims." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not believe there are any substantive weaknesses of this work, but there are some questions the authors could address. As acknowledged by the authors in their own Limitations section, it is unclear how well these algorithms scale beyond the relatively simple MuJoCo benchmarks, but the authors have performed a significant amount of experiments on these domains." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Through careful analysis of a prior method, we develop a new method called Contrastive Successor Features (CSF) that illustrates mutual information skill learning can be made highly effective." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024can,\ntitle={Can a {MISL} Fly? Analysis and Ingredients for Mutual Information Skill Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xoIeVdFO7U},\nnote={under review}\n}" }, "abstract": { "value": "Self-supervised learning has the potential of lifting several of the key challenges in reinforcement learning today, such as exploration, representation learning, and reward design. 
Recent work (METRA) has effectively argued that moving away from mutual information and instead optimizing a certain Wasserstein distance is important for good performance. In this paper, we argue that the benefits seen in that paper can largely be explained within the existing framework of mutual information skill learning (MISL).\nOur analysis suggests a new MISL method (contrastive successor features) that retains the excellent performance of METRA with fewer moving parts, and highlights connections between skill learning, contrastive representation learning, and successor features. Finally, through careful ablation studies, we provide further insight into some of the key ingredients for both our method and METRA." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "unsupervised learning", "reinforcement learning", "mutual information", "successor feature" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9aeac8e98461c231f3b6dde06cadb6f9e1ef90c0.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0c83260a1572afc8e568a8feea3ac51bb51fbde8.zip" }, "title": { "value": "Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xoUUCS9IGl
PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses
main
Active
generative models;drug design;benchmarks
applications to physical sciences (physics, chemistry, biology, etc.)
3;5;5;6
4;5;4;3
2;3;3;3
1;3;3;2
2;4;3;3
4.75
4
2.75
2.25
3
-0.324443
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. We know that the size of generated molecules has a significant impact on the docking score. For instance, in pocket2mol, as the sampled molecules increase in size, the Vina score tends to improve. It would be valuable to experimentally verify whether these new metrics are influenced by molecular size. Intuitively, the number of clashes is likely related to molecule size. A robust metric should ideally be unbiased concerning molecule size.\n2. In the interaction analysis, it was observed that methods like diffsbdd, ligan, and decompdiff show a reduction in the number of hydrogen bond donors and acceptors after redocking. This is somewhat unexpected. What could be the possible reasons? Could this indicate that SBDD models are more sensitive to hydrogen bonding than redocking software?\n3. Why does pocket2mol achieve similar performance to CrossDocked in terms of strain energy, despite not showing significant advantages in other metrics?What could be the possible reasons?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper effectively highlights a critical issue in the current SBDD field, the evaluation metrics. The authors propose a two-tier approach to assessing SBDD-generated molecules: evaluating the intrinsic quality of the generated molecules and examining their structural conformations. For the latter, they introduce new metrics to address current limitations. While, from an application standpoint, the primary requirement is for SBDD models to produce effective small molecules capable of interacting with the binding pocket, supervising the quality of generated 3D conformations could offer valuable insights for further model refinement." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new set of metrics for evaluating SBDD molecular generation tasks. Four new metrics are introduced from the perspective of physical constraints, including redocked RMSD, steric clashes, interaction profile, and strain energy. In the experimental section, these metrics are tested on several important existing SBDD methods, and recommendations are provided based on the results." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. On the rationale of the strain energy metric: The energy change should ideally be observed as a whole, considering both the protein and the small molecule before and after binding, rather than focusing solely on the strain energy of the small molecule. \"..., generally speaking, lower strain energy results in more favourable binding interactions and potentially more effective therapeutics.\" This assumption is inaccurate. 
When evaluating whether a protein and a small molecule are likely to bind and form a complex, the energy change of the protein cannot be neglected. Furthermore, as the authors pointed out, the generated poses are often problematic. The strain energy introduced by the authors may be influenced more by the generated pose itself than by the intended evaluation of the binding affinity or stability of the protein-ligand complex.\n2. Regarding interaction fingerprinting, the authors present a distribution analysis of interaction fingerprints for several current models, noting significant deviations from the CrossDocked benchmark. However, the sheer number of hydrogen bond donors and acceptors does not determine the quality of the generated molecules. For instance, an excess of hydrogen bond donors and acceptors may reduce the selectivity of the small molecule, while selectivity is crucial in real-world pharmaceutical applications.\n3. These metrics cannot directly guide drug discovery in real-world scenarios. Regarding redocked RMSD, the importance of this metric depends on how we define the function of SBDD models and what we expect for them. From a practical application perspective, the role of an SBDD model is to provide potential candidate molecules, without necessarily ensuring the accuracy of their binding conformations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In terms of agreement with docking scoring functions, is minimizing the RMSD with docking software considered optimal? As far as I know, docking software often has its own biases and errors. Is it possible that a method could actually yield conformations that are closer to the true structure but perform worse according to this metric?\n2. Regarding the strain energy metric, should we only consider the strain energy of the individual molecule, or do we also need to account for the strain energy associated with conformational changes in the protein target?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The current standard evaluation metrics for SBDD tasks are insufficient for adequately assessing the quality of models, which is a consensus within the community. This paper makes a valuable attempt to propose new evaluation metrics by incorporating insights from biophysical knowledge.\n2. The evaluation and analysis of current methods highlight several issues present in existing deep learning approaches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes new evaluation metrics for SBDD task, including redocked RMSD, steric clashes, interaction profile, and strain energy. Additionally, it conducts tests and analyses on several methods using these newly introduced metrics." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although the proposed evaluation metrics have been comprehensively assessed on several existing methods, to my knowledge, some of the more advanced SBDD methods developed in the past two years have not been included. \n2. Regarding interaction fingerprints, I agree that it is meaningful and interpretable to observe the interactions present in the generated complex conformations. However, the analysis and conclusions in this section are not sufficiently clear or thorough. For instance, I still do not understand how to evaluate a method in relation to the distribution of interaction counts. Is a higher number better, or is it more favorable for the distribution to be closer to that of the test set?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Innovative Concept: The paper introduces interesting concepts related to antibody design.\n2. Potential Applications: The work has the potential for real-world applications in drug design, which is a valuable area of research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an approach to antibody design and optimization. However, the evaluation lacks comprehensiveness and fails to engage with several recent state-of-the-art methods in the field. This omission significantly impacts the robustness of the claims made regarding the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. As a benchmark, the evaluation in this article is far from comprehensive. Many recent works on antibody design and optimization are missing from the comparison, including GraphBP[1], VoxBind [2], MolCraft [3], and D3FG [4]. To the best of my knowledge, [2] and [3] are state-of-the-art methods, and all of the codes for these methods are open-sourced which I have tested. The lack of discussion and comparison with such a substantial body of related work is a major weakness.\n\n2. Additionally, I believe the evaluation method in this article is far from comprehensive. First, aside from interactions, evaluating the molecular topology and structure in both 2D and 3D dimensions is essential. 
If the generated molecule is merely in a low-energy state (as described by DiffBP, where larger molecules can produce more interactions, thus lowering affinity) but significantly deviates from real drug data in terms of structure and chemical functional groups, can we truly consider that the generated molecule has an advantage?\nHere, I have listed several methods and their examples in evaluating 3D geometric properties and 2D structural properties, referred to the recently proposed benchmark paper [5], none of which have been considered in this article. \n| Method | Substructure | Geometry |\n|--------------|---------------------|---------------------------|\n| LIGAN | Figure. S6 | Figure. S7, S8, S9 |\n| POCKET2MOL | Table. 2 | Table. 3; Figure. 4 |\n| GRAPHBP | - | Table. 2; Figure. 5 |\n| TARGETDIFF | Table. 2 | Table. 1; Figure. 2 |\n| DIFFBP | Table. 3 | - |\n| DIFFSBDD | - | Figure. 8, 9 |\n| FLAG | Table. 3 | Table. 2; Figure. 4 |\n| D3FG | Table. 1, 3; Figure. 3 | Table. 2 |\n| DECOMPDiff | - | Table. 1, 2; Figure. 3 |\n| MOLCRAFT | Table. 1 | Table. 2 |\n| VOXBIND | - | Figure. 7 |\n\n3. The conclusions in this paper also have significant issues. For example, in line 376, it states, “Interestingly, DiffSBDD and TargetDiff, which are considered state-of-the-art based on mean docking score evaluations.” However, DiffSBDD performs poorly in terms of the Vina score compared to other methods, indicating that its generated initial conformations are quite unstable. Yet, its final redocked energy is very low. Could this be due to the fact that the molecules generated by DiffSBDD have a higher molecular weight than those generated by other methods? If so, is this comparison truly fair? The Vina score is an essential metric for evaluating the quality of the generated initial conformations, and it should not be overlooked. Overall, the conclusions and analyses are overly simplistic and lack comprehensiveness. \n\n4. The writing and presentation of this article are quite mediocre. For instance, the generation strategies, detailed introductions of the methods (including the model architecture and generative models), and classifications of these methods are not reflected in the tables, making the table arrangement appear rather arbitrary and unstructured.\n\nIn summary, I believe the evaluation in this article is incomplete, as it lacks many essential and reproducible methods. The conclusions are not sufficiently in-depth, and for existing methods, the article merely conducts a single test, followed by evaluations from various angles. In terms of both workload and quality, this paper falls short of the standards required for a high-level conference like ICLR. Therefore, I recommend rejection.\n\n[1] Meng Liu, Youzhi Luo, Kanji Uchino, Koji Maruhashi, and Shuiwang Ji. Generating 3d molecules for target protein binding. ArXiv, abs/2204.09410, 2022.\n\n[2] Pedro O. Pinheiro, Arian Jamasb, Omar Mahmood, Vishnu Sresht, and Saeed Saremi. Structure-based drug design by denoising voxel grids, 2024a.\n\n[3] Yanru Qu, Keyue Qiu, Yuxuan Song, Jingjing Gong, Jiawei Han, Mingyue Zheng, Hao Zhou, and Wei-Ying Ma. Molcraft: Structure-based drug design in continuous parameter space. ICML 2024, 2024.\n\n[4] Haitao Lin, Yufei Huang, Haotian Zhang, Lirong Wu, Siyuan Li, Zhiyuan Chen, and Stan Z. Li. Functional-group-based diffusion for pocket-specific molecule generation and elaboration.
ArXiv, abs/2306.13769, 2023\n\n[5] Haitao Lin, Guojiang Zhao, Odin Zhang, Yufei Huang, Lirong Wu, Zicheng Liu, Siyuan Li, Cheng Tan, Zhifeng Gao, Stan Z. Li, CBGBench: Fill in the Blank of Protein-Molecule Complex Binding Graph" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- Why are 3DSBDD and LiGAN even better than the CrossDocked dataset in terms of steric clashes? I feel that it does not make that much sense for generative models maximizing the data likelihood to surpass the dataset they've been trained on.\n- What does the \"a molecular weight ¡ 1000 Da\" mean in Line 655?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- PoseCheck highlights a key aspect in 3D molecule generation, i.e. the quality of generated poses as a prerequisite.\n- This paper is generally well-written and easy-to-follow.\n- The new metrics proposed faithfully evaluate the generated pose quality, shedding light on the fact that SBDD models might still have to cope with accurate pose modeling, which is informative to the SBDD community." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed PoseCheck to directly evaluate the quality of molecular poses generated by 3D models. The authors introduced a number of metrics to SBDD, including interaction fingerprints, steric clashes, strain energy, and redocking RMSD, and benchmarked the evaluation on the CrossDocked dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Major:\n- Recent strong baselines are missing [1][2]. A more comprehensive evaluation would be needed for a higher score.\n- I was wondering if the authors could provide a more indicative metric, similar to the PoseBusters passing rate [3]. Since metrics like steric clashes and strain energies are distributed within a certain range, it is not directly evident how those baseline models perform when they are not very significant outliers. \n\nMinor:\n- Citation format needs more careful handling. Misuses of \\citet (\\cite) are common in this paper, which should be \\citep instead.\n- Line 653: by ”pocket similarity' via Pocketome -> by ``pocket similarity'' via Pocketome\n\n[1] Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models. https://openreview.net/forum?id=qH9nrMNTIW\n\n[2] MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space. https://proceedings.mlr.press/v235/qu24a.html\n\n[3] PoseBusters: AI-based docking methods fail to generate physically valid poses or generalise to novel sequences.
https://pubs.rsc.org/en/content/articlehtml/2024/sc/d3sc04185a" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024posecheck,\ntitle={PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xoUUCS9IGl},\nnote={under review}\n}" }, "abstract": { "value": "Deep generative models for structure-based drug design (SBDD), where molecule generation is conditioned on a 3D protein pocket, have received considerable interest in recent years. These methods offer the promise of higher-quality molecule generation by explicitly modelling the 3D interaction between a potential drug and a protein receptor. However, previous work has primarily focused on the quality of the generated molecules themselves, with limited evaluation of the 3D poses that these methods produce, with most work simply discarding the generated pose and only reporting a “corrected” pose after redocking with traditional methods. Little is known about whether generated molecules satisfy known physical constraints for binding and the extent to which redocking alters the generated interactions. We introduce POSECHECK, an extensive benchmarking suite for state-of-the-art SBDD methods and find that generated molecules have significantly more physical violations and fewer key interactions compared to baselines, calling into question the implicit assumption that providing rich 3D structure information improves molecule complementarity. We make recommendations for future research tackling identified failure modes and hope our benchmark will serve as a springboard for future SBDD generative modelling work to have a real-world impact." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "generative models", "drug design", "benchmarks" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/204e27df70a36fee9f7922b25f945c0a9eb38df8.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PoseCheck: Generative Models for 3D Structure-based Drug Design Produce Unrealistic Poses" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xoW1Cb4MkP
ANYTEXT2: Visual Text Generation and Editing with Customizable Attributes
main
Withdraw
Text-to-Image;Visual Text Generation;Visual Text Editing;Customizable Attributes
generative models
Yuxiang Tuo;Yifeng Geng;Liefeng Bo
~Yuxiang_Tuo2;~Yifeng_Geng2;~Liefeng_Bo1
3;5;5;5
4;4;3;3
4;3;2;3
3;2;2;2
3;3;3;3
4.5
3.5
3
2.25
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The authors should provide quantitative experiments to demonstrate whether there is a performance decline in the model's general generative capabilities. If there is a performance decrease, then the necessity of the research focus of this paper should be considered more carefully.\n2. How can you ensure that the CLIP text encoder has a good understanding of your rephrased longer prompts? What do you do in cases where the prompt exceeds the maximum length?\n3. In Section 3.1, the authors need to provide quantitative changes in generation accuracy and FID scores during the variation of the strength coefficient to observe whether alpha has an impact on the accuracy of the generated content.\n4. Please see weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. AnyText2's WriteNet+AttnX architecture effectively decouples text rendering tasks from image content generation while integrating them effectively through learnable attention layers (AttnX), improving inference speed and enhancing image realism.\n2. By extracting font and color information from real images and using dedicated encoders for feature encoding, AnyText2 allows users to customize the font and color of each line of text, which is an innovative point in open-domain scenarios." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces AnyText2, a novel method for generating and editing multilingual text with customizable attributes within natural scene images. AnyText2 achieves precise control over text features, including glyphs, positions, fonts, and colors, through an efficient WriteNet+AttnX architecture and a Text Embedding Module. 
The method enhances the realism of generated images while improving the accuracy of text generation and allowing users to customize the font and color of each line of text. AnyText2 outperforms its predecessor, AnyText, on multiple evaluation metrics and plans to open-source code and models to foster the development of text generation technology." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Many current generative models can already generate text quite well without such complex operations. Is the generative research presented in this paper unnecessary?\n2. The color encoding will interfere with the model's ability. Why was it still added? In actual use, we do not need to control the color values very precisely.\n3. Similarly, how does font encoding specify the same type of font when there are many fonts that are very similar, and some characters can even be interchanged between them. In such cases, what font do they each belong to? This is a question.\n4. The current open-source optical character recognition (OCR) tools, such as DUGUANG, still have relatively low accuracy rates. For example, to my knowledge, the Chinese character recognition accuracy on the test set is only slightly above 80%, and it should be even lower on non-test sets. Therefore, using this OCR tool for testing may introduce excessive noise that could disrupt the test results. However, the results from the experiments in the paper appear to be quite regular. The authors should analyze this issue further, such as the relationship between the uncertainty of precision and the accuracy rate of the recognizer." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Does the method employ two separate models to support Chinese and English, or is a single model capable of handling both languages?\n2. Why is AnyText not included in the comparison in Figure 5? \n3. Is the model capable of generating large amounts of text within an image and what performance?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed WRITENET+ATTNX architecture enhances text generation capabilities.\n2. Improved control over various text attributes is achieved through the introduction of a Text Embedding Module.\n3. The method demonstrates better evaluation results compared to previous works, indicating improvements in text accuracy and image effects.\n4. Figure 4 presents visually appealing results, showcasing the integration of visual text generation with style brush." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces AnyText2, advancing text-to-image generation by providing fine-grained control over text attributes (font, color, position) within images. 
The proposed WriteNet+AttnX architecture enhances realism and speeds generation, while the mixed Text Embedding Module improves text accuracy. Results validate AnyText2’s strong performance in realistic T2I text control." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed method does not differ significantly from previous works, particularly AnyText and Glyph-SDXL.\n2. Figure 5 does not demonstrate substantial qualitative improvements over other models in comparison.\n3. The paper lacks quantitative metrics for evaluating distinct text attributes, such as color, font, and position.\n4. The text-heavy descriptions of different encoders are not entirely clear; additional figures would be beneficial to improve understanding.\n5. A speed improvement over AnyText is mentioned in the abstract, but this is not adequately discussed or compared with other methods in the main text." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How well does this model work without the glyph and position information in Fig. 2? This can bring the model to the same setting as most T2I models.\n- How’s the text rendering accuracy compared with the most recent T2I models, such as Flux?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- a new WriteNet+AttnX model, which is a ControlNet-like module that better decouples image generation from text generation. A self-attention and a cross-attention layer are inserted, denoted as AttnX layers, to model text residual signals from the background; the output of each AttnX layer is multiplied by a strength coefficient and combined with the output from the previous layer through a shortcut connection.\n- Font, color, and location are all separately encoded and conditioned." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper extends the AnyText work for more fine-grained control of text attributes in visual text generation conditioned on a text prompt and a layout glyph mask. The paper proposes a WriteNet+AttnX architecture that encodes text features and injects these intermediate features into the U-Net decoder via learnable attention layers. A Text Embedding Module is used to employ multiple encoders to separately encode the glyph, position, font, and color of the text. Thorough evaluation was done on both Chinese and English cases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- the ControlNet-style architecture is not new\n- the color control example in Figure 4 still leaks into background objects\n- the model is the same as AnyText in most aspects; in particular, it’s conditioned on a text layout mask, which is not very general."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The model outputs complete long prompts. How are these prompts divided into text prompts and image prompts? Do the text prompts and image prompts require fixed templates, similar to the examples that start with 'Text are' as markers?\n2. Could you explain the font encoder further? Which OCR model is used, and is it the same OCR model used for the glyph encoder?\n3. In Sec3.1, ‘Notably, setting α = 0 and multiply the WriteNet output by 0 enables AnyText2 to generate images without text’,but as shown in the images, α = 0 means generating text only without rich background information. Why did you say that α = 0 generates images without text? How should this be understood?\n4. Besides English and Chinese, is there an improvement in generation quality for other languages as well? Are there any illustrative examples of generated text images in multiple languages, such as Korean and Japanese, similar to those shown in AnyText?\n5. The Glyph-ByT5 also achieves the text attributes customization as latest work, but why it is included in the qualitative results but not in the quantitative ones? \n6. In WriteNet, is it reasonable to remove the noisy latent zt, and if so, does this affect the generation quality of the background content? Could you further explain the rationality behind this choice?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The text embedding module separately encodes the glyph, position, font and color attributes. Customizable font and color for each text line significantly enhance the visual appearance of the generated text.\n2. The WRITENET+ATTNX architecture encodes text features and injects these intermediate features into the U-Net decoder via learnable attention layers, which decouples text generation from image generation and improves generation quality and inference speed.\n3. By generating more complete and comprehensive descriptions of image details for training and evaluation, the model’s prompt-following capability is enhanced compared to using short captions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper achieves fine-grained control over different text attributes, including glyph, position, color, and font style, through multiple specialized encoders. Additionally, the decoupling of text and image generation via attention blocks enhances the realism of generated images. AnyText2 outperforms existing models, achieving superior accuracy and quality in text generation. Moreover, the use of extended captions has been validated to improve prompt-following capability and image realism. The proposed method not only achieves higher accuracy, enhanced realism, but also provides faster inference speed." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. To some extent, this work mainly builds on the structure of AnyText, with additional modules and architectural adjustments, which slightly limits its perceived novelty.\n2. In the quantitative comparison between AnyText2 and other methods, using models trained on a long captions dataset to compare with previous methods that utilized short captions for training is not entirely fair. This approach obscures the contributions of the other modules in AnyText2." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "AnyText2 introduces a novel approach for precise control over text attributes in natural scene images, achieving faster processing and improved text accuracy compared to its predecessor, while enabling customizable fonts and colors." }, "_bibtex": { "value": "@misc{\ntuo2024anytext,\ntitle={{ANYTEXT}2: Visual Text Generation and Editing with Customizable Attributes},\nauthor={Yuxiang Tuo and Yifeng Geng and Liefeng Bo},\nyear={2024},\nurl={https://openreview.net/forum?id=xoW1Cb4MkP}\n}" }, "abstract": { "value": "With the ongoing development in the text-to-image(T2I) domain, accurately generating text within images seamlessly integrating with the visual content has garnered increasing interest from the research community. In addition to controlling glyphs and positions of text, there is a rising demand for more fine-grained control over text attributes, such as font style and color, while maintaining the realism of the generated images. However, this issue has not yet been sufficiently explored. In this paper, we present AnyText2, the first known method to achieve precise control over the attributes of every line of multilingual text when generating images of natural scenes. Our method comprises two main components. First, we introduce an efficient WriteNet+AttnX architecture that encodes text features and injects these intermediate features into the U-Net decoder via learnable attention layers. This design is 19.8% faster than its predecessor, AnyText, and improves the realism of the generated images. Second, we thoroughly explore methods for extracting text fonts and colors from real images, and then develop a Text Embedding Module that employs multiple encoders to separately encode the glyph, position, font, and color of the text. This enables customizable font and color for each text line, yielding a 3.3% and 9.3% increase in text accuracy for Chinese and English, respectively, compared to AnyText. Furthermore, we validate the use of long captions, which enhances prompt-following and image realism without sacrificing text writing accuracy. Through comprehensive experiments, we demonstrate the state-of-the-art performance of our method. The code and model will be open-sourced in the future to promote the development of text generation technology." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Yuxiang_Tuo2", "~Yifeng_Geng2", "~Liefeng_Bo1" ] }, "authors": { "value": [ "Yuxiang Tuo", "Yifeng Geng", "Liefeng Bo" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text-to-Image", "Visual Text Generation", "Visual Text Editing", "Customizable Attributes" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "tuo|anytext2_visual_text_generation_and_editing_with_customizable_attributes" }, "pdf": { "value": "/pdf/62e6058c6bce2b28ab8af509d23522643ae6a392.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "ANYTEXT2: Visual Text Generation and Editing with Customizable Attributes" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xoXn62FzD0
Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo
main
Active
Sequential Monte Carlo;Language Models;Semantic parsing;Bayesian inference;Probabilistic programming;SMC
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
5;6;6;8
3;4;3;5
3;3;3;3
2;3;3;3
2;4;3;3
6.25
3.75
3
2.75
3
0.899229
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What are the other (potentially better) proposal distributions? E.g. LLM fine-tuned/prompted with task-specific supervision." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work solves one important problem with some of the widely adopted constrained/structured generation such as Guidance, SGLang and Outlines: that is, these framework achieves control by masking out next-tokens that would violate the constraint, leading to biased sampling (compared to the ground-truth conditional distribution). By leveraging sequential Monte Carlo, the proposed technique is able to approximate unbiased sampling in a relatively practical/scalable way. Empirical evaluations demonstrate strong performance on challenging real-world problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Controlling LLM generation to follow logical, syntactical or semantical (soft) constraints is a challenging task. The authors propose to unify controllable generation as sampling from the un-normalized distribution p_{LM}(x) \\phi(x) where \\phi is an energy function specifying the constraints. The authors propose to leverage sequential Monte Carlo to sample from the desired un-normalized conditional distribution. The authors conducted extensive evaluation of their approach on various downstream tasks with different combinations of constraints and have demonstrated significant improvement compared to the LLM baseline. One important ablation study has suggested the positive correlation between approximation accuracy and generation quality, motivating for further research on improving the sampling algorithm or the proposal distribution." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Some detailed analysis/case study on the sample complexity of SMC would provide more insights, especially how much better SMC is compared to naive importance sampling." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses. 
\n\nThere are also minor writing issues affecting clarity: the authors should specify what is meant by \"some\" in lines 41-44 and clarify the specific applications referenced in lines 157-158." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors propose adapting SMC methods to novel semantic parsing tasks, resulting in notable performance improvements. \n\nThe authors conduct an interesting analysis showing that resampling improves the approximation of the global product-of-experts distribution, and that the approximation quality is consistent with the trends observed in the downstream accuracy evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors apply sequential Monte Carlo to tackle semantic parsing problems involving global constraints. They introduce an enhanced SMC approach, incorporating efficient stochastic approximations of full token-masking distributions and semantic potentials to boost performance across five diverse datasets. Additionally, they estimate the KL divergence between each method’s output distribution and the global product-of-experts distribution, demonstrating that the latter is well-calibrated." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In terms of experiments: \n- The authors do not emphasize their unique algorithmic contributions within the experiments. The authors could also report the performance of the LM with grammar constraint, weight correction, and resampling as a regular SMC baseline to further show the effectiveness of the semantic potential. Additionally, the authors lack a detailed comparison between their method and the highly relevant SMC method in https://arxiv.org/pdf/2306.03081, and should report it as a baseline, e.g., including without-replacement resampling. \n- For the ablation studies, how the number of particles affects the final performance should be analyzed.\n\nThere is no mention of computational cost; it would be very useful if the authors could evaluate the efficiency of the proposed algorithm." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "Please see the questions in the Weaknesses above." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality\n- To my knowledge, the extension of sequential Monte Carlo to the task settings in the paper, and the specific generation recipe (re-weighting, resampling), are new. However, I am not closely familiar with [Lew et al 2023] or its subsequent papers (which are mentioned several times by the authors). Therefore, my evaluation of novelty may be slightly off.
\n- Placing ideas such as token-masking, filtering out partial sequences, and selecting partial sequences to explore next in a probabilistic framework is a nice contribution (with the same caveats in the point above).\n\nQuality\n- The experimental evaluation presents a controlled study that ablates each component in the model.\n- Derivations and the divergence analysis seem to be of high quality.\n\nClarity\n- Once the reader becomes familiar with the terminology, the paper is written clearly and precisely. \n\nSignificance\n- The method could potentially be useful in settings where token-level and partial-sequence level constraint functions are available (e.g., those in the experiments). This has some generality (though could also be viewed as a limiting factor).\n- Placing more domains and settings into the probabilistic framing from [Lew et al 2023] helps to further the probabilistic perspective on sequence generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper concerns inference-time approaches for controlled generation with language models. Building on [Lew et al 2023], the authors develop a method based on sequential Monte-Carlo. The method proceeds segment-by-segment, first extending the segment (e.g. generating a next-token) subject to a token-level score (e.g., arising from a grammar constraint), then reweighting the candidates based on a partial sequence score (e.g., whether the code-so-far has a runtime error), then determining the next set of candidates by resampling candidates based on their weights.\n\nThe authors evaluate the method using prompted Llama 3 (base or instruct, varied by task) on 4 tasks. In each task, they construct task-specific token-level and partial-sequence level potentials, and provide an ablation of each of their three proposed components. The authors study the divergence between the target distribution and the distribution induced by running the algorithm, finding that each component leads to a lower KL divergence. Additionally, they provide a nice visualization of the sampling distributions and target distributions for their molecular generation task, and show that samples from their method improve along various dimensions. \n\nIn general the paper is written quite precisely, and several intuitive ideas (e.g., token masking or filtering out a sequence if its code doesn't run) are placed into a probabilistic framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My primary concerns were on the experimental validation. The paper performs a self-contained, controlled experiment using one Llama 3 model on a set of tasks. As a result, it was unclear how the findings generalize to other models, or how they compare in terms of performance to other methods in the literature.\n\n1) For example, taking the example of DS-1000, the absolute numbers are quite low: the DS-1000 paper reports up to 39.2 performance (with Codex-002) versus 28.5% here (with Llama 3). These are *not* comparable since they use different models, but it would be nice to see how this method performs for models closer to the state of the art. Similarly, Lever [1] reports numbers on Spider from previous work that range from 67.0% to 81.9% [1]. The reason this is important is that the exact experiment setup can lead to different conclusions on the performance of methods, so it was concerning that the absolute numbers seemed low. 
However, the authors could potentially clarify this.\n\n2) It was also unclear why 10 particles was selected, since in these sampling methods the number of samples can impact performance, and we often want to understand how performance varies with the sampling budget. How does the method vary as the number of particles varies? Is there a sample-and-rerank approach that could outperform this method if it drew a large number of samples?\n\n[1] LEVER: Learning to Verify Language-to-Code Generation with Execution, Ni et al ICML 2023" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "-Is there a difference between the right part of Figure 1 and Table 2? I would suggest the authors keep only one to avoid redundancy.\n\n-Why was the instruct version of Llama 3.1 used for the SQL task but not the others?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "-Good motivation from analysis of weight formulations for importance sampling.\n\n-Benchmarks validate claims that the proposed algorithmic components improve downstream performance. Additionally, authors chose a sensible set of benchmarks.\n\n-Weight correction and resampling seem to be novel components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a Sequential Monte Carlo (SMC) approach for constrained generation in language models to address semantic and syntactic constraints that traditional methods fail to handle effectively. The novel elements include integrating domain-specific constraints via potential functions, weight correction to mitigate the bias from locally greedy decoding, and adaptive resampling that focuses computational effort on promising sequences. The method was tested on four challenging tasks: Python code generation, text-to-SQL translation, goal inference, and molecular synthesis. The experiments compared the SMC-based approach to several baselines and showed that incorporating weight correction and semantic potentials significantly improved performance, while adaptive resampling further enhanced results by reallocating resources to better particles. The total SMC method outperformed the ablated SMC variants on the selected task." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "-The method was not benchmarked against alternative methods. While the ablation study is useful, how does the method compare against other SMC-based techniques such as the ones cited in the related works section that are particularly relevant to this work? E.g. comparisons against the method in Lew et al. and Zhao et al. would be beneficial. There are other non-SMC-based methods that could also be benchmarked against.\n\n-Only Llama 3.1 8-B was evaluated. 
The manuscript would benefit from benchmarks on additional LLMs to see if results are consistent across similar sized LLMs. I would be curious to see if the benefits are as substantial on larger models, but I understand the authors may have limited computational resources for such analyses.\n\n-There is a lack of theoretical grounding as to the benefits of the components. E.g., a theorem rigorously showing the reduction in KL-Divergence shown in Figure 2 would strengthen the manuscript.\n\n-Notation can be difficult to follow at times. Exposition can be a bit drawn out in certain places, e.g. section 2. I appreciate the authors trying to point out the inefficiencies in each component of MC in order to justify their approach, but I think the exposition would benefit from a condensed explanation of, e.g., the computational burdens of IS." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024syntactic,\ntitle={Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xoXn62FzD0},\nnote={under review}\n}" }, "abstract": { "value": "A wide range of LLM applications require generating text that conforms to syntactic or semantic constraints. Imposing such constraints nontrivially alters the distribution over sequences, usually making exact sampling intractable. In this work, building on the Language Model Probabilistic Programming framework of Lew et al. (2023), we develop an approach to approximate inference for controlled LLM generation based on sequential Monte Carlo (SMC). Our SMC framework allows us to flexibly incorporate domain- and problem-specific constraints at inference time, and efficiently reallocate computation in light of new information during the course of generation. We demonstrate that our approach improves downstream performance on four challenging domains---Python code generation for data science, text-to-SQL, goal inference, and molecule synthesis. We compare to a number of alternative and ablated approaches, showing that our accuracy improvements are driven by better approximation to the full Bayesian posterior." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Sequential Monte Carlo", "Language Models", "Semantic parsing", "Bayesian inference", "Probabilistic programming", "SMC" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1ba9c8b9256b5ad6cca4f4b55ae9dafeb45b7eb5.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xof0bvftR1
Knockout: A simple way to handle missing inputs
main
Active
Applied Machine Learning;Marginalization;Missing inputs;Multi-modality
applications to computer vision, audio, language, and other modalities
3;3;6;6
3;4;3;3
3;3;3;3
2;2;3;3
3;3;3;4
4.5
3.25
3
2.5
3.25
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- The choice of the placeholder value seems crucial for Knockout's performance. While the authors provide recommendations, have you explored techniques to automatically learn or optimize the placeholder values during training?\n- In scenarios with limited training data or low-capacity models, how well does Knockout perform compared to simpler baselines like mean/mode imputation? Are there any modifications or variations of Knockout that could improve its effectiveness in such cases?\n- Have you explored the potential of Knockout to handle distribution shifts in the presence of missingness? For example, if the missingness patterns or distributions change between training and inference, how would Knockout perform compared to other methods?\n- While the paper covers a diverse set of tasks and data modalities, it would be interesting to see Knockout's performance on more complex tasks like language modeling or multimodal learning, where missing modalities could be prevalent." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The theoretical analysis is well-reasoned, and the empirical evaluation is comprehensive, covering a diverse set of tasks and data modalities. The authors have taken care to compare against appropriate baselines and provide ablation studies (e.g., structured vs. unstructured Knockout).\n- The paper is well-written and clearly explains the core idea, theoretical justification, and experimental setup. The authors have provided sufficient details to facilitate reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces \"Knockout,\" a simple yet effective data augmentation strategy for handling missing inputs during inference in machine learning models. Knockout randomly replaces input features with appropriate placeholder values during training. At inference time, using the placeholder value corresponds to marginalization over the missing variables.\n\nThe key contributions are:\n\n1) Theoretical justification showing that Knockout implicitly maximizes the likelihood of a weighted sum of the conditional estimators and all desired marginals in a single model.\n\n2) Analysis and recommendations for choosing appropriate placeholder values for different data types (categorical, continuous, structured).\n\n3) Extensive experiments on synthetic and real-world datasets (images, tabular data) demonstrating Knockout's effectiveness in handling missing inputs across various scenarios, outperforming common baselines like mean/mode imputation and ensemble methods." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The idea of randomly masking/corrupting inputs during training is not entirely new, many papers in related work section essentially use the same approach, eg PartialVAE, VAEAC, ACFlow, \n- While the authors provide theoretical justification for Knockout, the analysis relies on the assumption of using a very high capacity, non-linear model trained on large data. It is unclear how well Knockout would perform in scenarios with limited data or low-capacity models.\n- The comparison against strong baselines trained specifically for certain missingness patterns is missing. In practical scenarios where the missingness patterns are known or limited, such specialized models could potentially outperform Knockout." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "I believe this paper is well-written and organized, so I don’t have any direct questions. However, I am curious if the authors have considered applying the Knockout method to every hidden layer of the neural network, rather than just the input layer. This method seems somewhat similar to dropout, which is typically applied across all layers." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The manuscript is generally well-written, demonstrating quality and clarity.\n2. The author presents a comprehensive review of related work.\n3. The new Knockout method is evaluated against multiple strong baselines.\n4. The author thoroughly discusses various types of missing data mechanisms and evaluates the performance of Knockout and common baselines on them." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Knockout, a strategy for handling missing inputs in complex models. Knockout randomly replaces input features with placeholders during training, enabling a single model to learn conditional and marginal distributions. This method is theoretically sound and intensive simulation and real data application were used to evaluate the performance of Knockout." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Figure 2 shows that selecting an appropriate placeholder value has a strong impact on Knockout. While the author emphasizes the importance of this choice, a general guideline for choosing placeholder values is lacking, leaving it to be determined on a case-by-case basis.\n2. The simulation results appear somewhat limited. The input dimension of X is only 9, and the number of missing features ranges from 0 to 3. It would be beneficial to include simulations that better align with real-world dat. Specifically, those with high dimensionality and higher missing rates." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does Knockout conceptually and empirically compare to related data augmentation methods like Dropout and DAMP [1], particularly since all these methods involve randomly masking inputs during training?\n- How does the method perform with limited training data or when using simpler model architectures?\n- What is the computational overhead during training compared to standard training and other baselines?\n- How sensitive is the performance to the choice of placeholder values in practice?\n- Could the random knockout process affect model calibration or uncertainty estimates?\n- When would traditional imputation methods be preferable to this approach?\n- Have the authors considered extending this to sequential data or other structured input types?\n\n1. Trinh et al. Improving robustness to corruptions with multiplicative weight perturbations. NeurIPS 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- Elegant and practical solution that balances simplicity with theoretical soundness\n- Strong theoretical foundation with rigorous mathematical analysis \n- Impressive versatility across different data types and applications\n- Practical single-model solution compared to existing multi-model approaches\n- Comprehensive empirical evaluation with meaningful baselines\n- Clear and actionable implementation guidelines for practitioners" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces \"Knockout,\" a simple yet theoretically-grounded method for handling missing input features in deep learning models. The key idea is to randomly replace input features with appropriate placeholder values during training, which enables a single model to learn both the conditional distribution using full inputs and the marginal distributions for cases with missing inputs. The authors provide theoretical justification showing that Knockout can be interpreted as an implicit marginalization strategy and demonstrate its effectiveness across diverse scenarios including synthetic data, clinical predictions, noisy label learning, tumor segmentation, and multi-modal classification tasks. Compared to existing approaches like marginalization (computationally expensive), imputation (potentially inaccurate for high-dimensional data), or training multiple models (costly and requires prior knowledge of missing patterns), Knockout offers a more efficient and flexible solution. The authors also carefully analyze how to choose appropriate placeholder values for different types of inputs and show how Knockout can handle both complete and incomplete training data under various missingness mechanisms (MCAR, MAR, MNAR)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Limited theoretical analysis for finite-capacity models and small datasets, as theory assumes high-capacity models and large data\n- Missing comparison against specialized models trained for specific missingness patterns\n- No detailed ablation study on optimal placeholder value selection, despite its importance\n- Lack of exploration into computational overhead during training compared to simpler approaches\n- Limited discussion of failure cases or scenarios where the method might underperform\n- No investigation into potential impact on model robustness or calibration" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I explained my concerns in the weaknesses section.\nI may change my rating after rebuttal and discussion with other reviewers." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I appreciate the authors efforts to run experiments on various datasets.\nThe paper is also interesting and practical, proposing a simple and straightforward technique." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a simple technique, Knockout, which randomly replaces input features with appropriate placeholder values during training. The main contribution/novelty of this paper is to introduce specialized placeholder values for different types of features: categorical, bounded vs unbounded continuous, structured vs unstructured." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although this paper is interesting and practical. But, it is very incremental in terms of research novelty considering the expectations from an ICLR paper. For these types of papers, it is required to have thorough experimental studies and solid comparisons to show the applied contributions. But, I think this paper has lack of comparisons with important baselines or prior works.\n\n* The main missing baseline for comparison is the dropout method. Actually, the comparison between knockout and knockout* shows that most of the advantage comes from random replacement. In fact, since the model has seen more missing data during training, it has become more robust to missingness and performs better on test data. An intuitive baseline is to apply dropout with the same missing rate and compare with knockout. Likewise, it may be possible that by applying the same rate of random missingness and use any other baseline (e.g. mean/media imputation or other methods), their performance also improves. 
In general, I think the comparisons performed in the paper were not fair, since the other methods did not see the same amount of missing values during training.\n\n* The paper fails to cite some important related works:\n * Why not to use zero imputation? Correcting sparsity bias in training neural networks\n * Debiasing Averaged Stochastic Gradient Descent\n * Learning from data with structured missingness\n\n* Further, the paper cites some important papers such as \"Ipsen et al., How to deal with missing data in supervised deep learning\" but does not perform any comparison. I understand that this paper proposes a complicated model. But, for an ICLR paper, I think stronger baselines are expected." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024knockout,\ntitle={Knockout: A simple way to handle missing inputs},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xof0bvftR1},\nnote={under review}\n}" }, "abstract": { "value": "Deep learning models can extract predictive and actionable information from complex inputs. The richer the inputs, the better these models usually perform. However, models that leverage rich inputs (e.g., multi-modality) can be difficult to deploy widely, because some inputs may be missing at inference. Current popular solutions to this problem include marginalization, imputation, and training multiple models. Marginalization can obtain calibrated predictions but it is computationally costly and therefore only feasible for low dimensional inputs. Imputation may result in inaccurate predictions because it employs point estimates for missing variables and does not work well for high dimensional inputs (e.g., images). Training multiple models whereby each model takes different subsets of inputs can work well but requires knowing missing input patterns in advance. Furthermore, training and retaining multiple models can be costly. We propose an efficient way to learn both the conditional distribution using full inputs and the marginal distributions. Our method, Knockout, randomly replaces input features with appropriate placeholder values during training. We provide a theoretical justification of Knockout and show that it can be viewed as an implicit marginalization strategy. We evaluate Knockout in a wide range of simulations and real-world datasets and show that it can offer strong empirical performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Applied Machine Learning", "Marginalization", "Missing inputs", "Multi-modality" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e63e58aade1193569f41d4188ee4fe1b2016ee8b.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/6e96376241a9c8dab5f5541770a065b406cd902d.pdf" }, "title": { "value": "Knockout: A simple way to handle missing inputs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xom3YUQfbK
A Language Model based Model Manager
main
Active
Large Language Models;Model Manager;Verbalization;Differentiation
interpretability and explainable AI
3;3;3;5
5;3;4;4
3;2;1;2
2;2;2;2
3;3;2;2
3.5
4
2
2
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why is Diabetes missing from Figure 2?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "The develops an LLM based model manager that could be a viable way to get at model differences in a human understandable way. So the core idea is not without merit." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an LLM based model differencing mechanism. Given two models with (differing) classification outputs, the aim of the LLM is to verbalise the differences in the model outputs. The idea is to analyze the differences in outcomes and provide human-readable text. To evaluate said verbalization, the same LLM is fed the verbalization and asked to predict the output of the second model. If this synthetic label matches the output of the second model, this is considered success. Experiments with standard classifiers are presented and differences artificially induced." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "A substantive assessment of the weaknesses of the paper. Focus on constructive and actionable insights on how the work could improve towards its stated goals. Be specific, avoid generic remarks. For example, if you believe the contribution lacks novelty, provide references and an explanation as evidence; if you believe experiments are insufficient, explain why and exactly what is missing, etc.\n\nThe execution of the key idea is fraught on several fronts. Primarily the choice of the difference levels is entirely unrealistic. Generally, one does not consider models that are this far apart in decision outcomes. I'd expect that the accuracies are also minimally 10 percentage points different (performance measures of the baseline models should be reported as well). A better experiment would be if models differ in the order of a few percentage points - instead of artificially changing models to be so far apart.\n\nWhile much is said about verbalisation, the paper does not provide a single example of verbal outcomes that are human-reviewable. The paper should do more to describe the verbalisations and perhaps even consider a user study on the efficacy of these to understand model differences.\n\nThe evaluation is entirely focused on label outcomes from the verbalisation and $M2$. This is surrogate measure, as there is no guarantee that the verbalisation is reflective of the decision logic. Second by translating to a classification task for the LLM, you induce several artifacts like position bias etc (as you are using the LLM as a classifier). Attributing this to the quality of verbalisation is tenuous." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As well as addressing the weaknesses described above, it would be useful for the authors to consider the following suggestions and questions: \n\n- \"While these methods provide critical insights into individual models and datasets, they do not explicitly dive into verbalizing the differences in model predictions across the feature space. Addressing these limitations and providing interpretable verbalizations is essential for enabling more informed decisions when selecting or developing new and effective models.\" This statement needs more evidence to support it. How do you know this is essetnial?\n\n- For some references full bibliographic info is not provided. Eg Mu & Andreas 2021 and nostalgebraist 2020.\n\n- It would be worth expanding on whether or F on line 150 is exclusively simple feature names or more detailed feature explanations. \n\n- Not clear what the \"verb split\" referred to on line 159 is. \n\n- For reproduceability it would be useful to provide more details on how \"the datasets were scaled\" and the \"preprocessing steps\" that were applied. Maybe best in an appendix. \n\n- I am not sure that \"stratefied\" is the correct word to use on Line 276 \"we stratified the experiments based on the ...\" . Maybe \"categorised\" would be better? \n\n- Space for more in-depth analysis could be gained by removing the statement of results from the tables shown in Lines 353 - 357 and 374 - 377. \n\n- The reader should be provided with more support to navigate the results. For example \"For LR, the inclusion of coefficients results in either performance remaining within the error margin or showing a modest increase (3-5%) across all datasets\" is stated but without pointing to a specific table or graph to help the reader understand where this conclusion comes from. \n\n- The following statement in the conclusions \"these indicate that the Model Managers can be extended to verbalizing the differences between Deep Neural Networks, especially incorporating approaches that describe the models’ internals (e.g., mechanistic interpretability). \" is not well supported by the results presented (which focus on small models and relatively simple datasets). \n\n- The title is somewhat misleading. Model Manager has a specific meaning in the MLOps community (e.g. SAS Model Manager, Siemens AI Model Manager) which is quite different to what is being presented in this paper. This paper is quite specifically about a technique to generate text explanations of the differences between models and a title more reflective of this would be better. \n\n- The presentation and discussion of the results is not clear or consistent - e.g. Why is the Diabetes dataset missing from Figure 2? Overall Accuracy is also missing from this figure? Similarly kNNs are left out of this figure and would be useful to include. 
There seems to be plenty of space to include all or at least some of these. \n\n- Not clear if the results tables (Tables 2, 3, and 4) are part of the main body of the paper (referenced as such but then introduced later as additional results). This should be clarified. They are really needed." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of the paper are:\n\n- The paper addresses an interesting problem that is becoming more important in the community. \n- The methodology proposed is original and promising. \n- The proposed approach is clearly presented and described.\n- The evaluation methodology is interesting and facilitates large-scale evaluation of text generation capabilities. \n- Based on the evaluation performed, the method performs well.\n- Some interesting discussion of the performance of the approach for different kinds of machine learning models is provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes an approach to verbalizing the differences in the decision boundaries learned by pairs of machine learning models. The verbalizations are generated by presenting samples of instances and their predictions from the two models, along with some relevant context, to an LLM prompted to describe the differences between the models. The approach is described and then evaluated using three well-known tabular datasets and logistic regression, decision tree, and k-NN models. The performance of the system is evaluated using a novel approach that measures the ability of another LLM to generate predictions made using the second model based on the predictions made by the first model and the verbalization of differences between the two models. Based on this evaluation, the developed method is shown to perform well, with differences between its abilities for different machine learning models explored." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of the paper are:\n\n- No examples of the verbalizations generated by the approach are provided. This is frustrating for a reader, as the value of the approach relies on these being useful to a reader, and a small set of examples would easily demonstrate this (or not). \n\n- The motivation of the work is to provide model explanations, but the technique developed generates explanations of differences between models. The authors never explain how this approach achieves the overall aim, as a reference model of some kind would always be needed. \n\n- The title is somewhat misleading. Model Manager has a specific meaning in the MLOps community (e.g. SAS Model Manager, Siemens AI Model Manager) which is quite different to what is being presented in this paper. This paper is quite specifically about a technique to generate text explanations of the differences between models and a title more reflective of this would be better. \n\n- The work focuses on explaining non-neural network models (e.g., logistic regression and decision trees) but there is a mismatch with the related work covered, which all focuses on neural network models. \n\n- The presentation and discussion of the results are not clear or consistent - e.g. Why are the Diabetes dataset, Overall Accuracy and k-NNs missing from Figure 2?
Also not clear if the results table (Table 2, 3, and 4) are part of the main body of the paper (referenced as such but then introduced later as additional results)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Could you please include some example verbalization outputs? It is hard to see how this would generalize to complex model families such as XGBoost without the insight into how these outputs look like." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The task is novel and interesting.\n2. The evaluation approach and metrics are sound.\n3. The paper is well-written and includes enough details of the evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a new task to verbalize the differences in two machine learning models trained on the same dataset. The approach described in the paper uses in-context learning of LLMs to achieve this. The output of LLMs is evaluated by using it to predict one model's output given the output of the other model and the verbalized difference. Experimental evaluation was done on 3 popular classification datasets and 3 popular classification algorithms (Logistic Regression, Decision Trees, and KNN). Two ablation studies aim to understand the effect of including different types of information in the prompt." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the task is described clearly, it is not clear whether it is sufficient to be called a Model Manager.\n2. The LLM-based in-context learning approach is not novel as one can imagine any text based task be formulated that way with some reasonable performance.\n3. Given the small set of experiments with simple datasets and model families, it is hard to see how this would generalize to more complex real world tasks and models.\n4. The code and instructions provided do not seem complete/right. For example, the README says `python lm_manager.py --llm [LM_name] --subject [subject_name]` is the command to run one of the experiments. However neither llm_manager.py nor main.py look aligned with this.\n\nMinor writing issues:\n1. spacing near line 166\n2. \"While our experiments do not focus on classification tasks, we include the feature names to improve interpretability\" - confusing as the experiments do seem to focus on classification tasks." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Q1. I wasn’t sure how robust the results are; for example, what if you swap the two models you’re comparing? Do you get the same output? \n\nQ2. An additional ablation that would be useful is to evaluate explanations generated using model internals alone (e.g. weights of a logistic regression), without including any predictions on individual examples.\n\nQ3. Is there some guidance on which model should be used for evaluating the quality of the explanation? You mention that you use the same LLM because of “the bias introduced when LLMs process the outputs of the other language models.” — could you provide a citation for this and reason about the implications in your set-up?\n\nQ4. The introduction notes that the models have to be trained on the same dataset; the prompt repeats the same constraint. Do you mean the same task? If not, is there a reason the models need to be trained on the same dataset?\n\nQ5. The manuscript states “While our experiments do not focus on classification tasks” in the related work, but it appears the experiments do focus on classification tasks?\n\nQ6. Certain terms are worth defining; for example, it was not clear what is meant by a “logit lens” in the related work.\n\nQ7. “rather than utilizing the language model head directly, we employ an external LLM to serve as the \"model manager,\" providing a novel means of interpreting and explaining model behaviors.” → could you provide intuition as to why using an external LLM is better than using a language model head directly? Is it because not all models have language model heads, and creating one would be prohibitively expensive?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors tackle a common yet understudied problem and develop a unique solution. The paper was generally clear and easy to follow. With expanded experimentation and detail around methodology, I believe it could make a strong contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on helping practitioners distinguish between models trained to accomplish the same task. Specifically, the authors design a method to produce natural language explanations of differences between models using model predictions on a set of examples. The method is validated in the context of pairs of logistic regressions, decision trees, and KNNs, and they measure the quality of the natural language explanation by measuring how an LLM can reconstruct one model’s predictions given only the difference explanation and the other model’s predictions. Experiments show that the quality of the explanations depends on model type, model description, and the extent to which the models differ." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I see a few high-level limitations (L1-L3) and minor limitations (L4-L6) with the experimentation.\n\nL1. The presented experiments are limited to models that are not typically associated with the “model lake” phenomenon. I’m also not sure that results describing model differences using logistic regressions and decision trees immediately generalize to deep learning models, since the space of verbalizations is likely much more complex with the latter. \n\nL2. The prompt implies there exists some verbalization that explains differences in model predictions. But sometimes, model disagreements could be due to random chance, rather than some systematic, explainable difference. The same could happen if the verbalization dataset is too small. I’d be interested to see the verbalizations for two models which exhibit the same behavior, but produce slightly different predictions because of some other source of randomness (e,g. being trained on two distinct datasets). Is this a case the proposed approach handles? What do verbalizations look like in this setting?\n\nL3. The absence of baselines made it difficult to contextualize results. You could replace LLM_{verb} with an interpretable-by-design model; for example, what if you trained a logistic regression to predict Model 2’s predictions on an example given the example’s features and Model 1’s predictions? How does that do in comparison to reconstructing Model 2’s predictions given the verbalization using LLM_{verb}?\n\nL4. The reliance on feature names makes it difficult to generalize the methodology to settings in which models make use of high-dimensional inputs with no natural feature names (e.g. images).\n\nL5. Changing both LLM_{verb} and LLM_{eval} at the same time makes it difficult to assess whether results are different because a certain LLM is a better verbalizer or a better evaluator. I’m not as familiar with the biases of passing one LLM’s outputs as input to another, but it may still be useful to hold LLM_{eval} constant to isolate the performance of each LLM as a verbalizer.\n\nL6. It’s worth clarifying early on that the examples used to produce a verbalization are distinct from the examples used to train each model. I was a bit confused by “It does so by serializing a representative sample of input instances (from the dataset) and the corresponding model outputs in a JSON format.“ in the introduction, but later clarified my confusion when reading the problem setting. I’d move this information so that it appears earlier.\n\nThere are a few citations you might consider including; the following seem relevant:\n\n1. https://arxiv.org/abs/2201.12323 – a method to describe differences in text distributions. Seems like such a method could be used to describe differences between LLMs applied to a given task.\n2. https://arxiv.org/abs/2110.10545 - early work on choosing the “best” model from a pre-trained model hub\n3. https://arxiv.org/pdf/2404.04326 - the authors here use an LLM to generate hypotheses; one could frame the task LLM_{verb} attempts to solve as hypothesis generation, where the model is developing a natural language hypothesis to describe Model 2’s predictions given the features and Model 1’s predictions.\n4. 
https://arxiv.org/pdf/2410.13609 - also focuses on model selection, but under limited labeled data" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The \"Model Manager\" framework uses a large language model to clarify differences between machine learning models, enhancing transparency and aiding selection." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024a,\ntitle={A Language Model based Model Manager},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xom3YUQfbK},\nnote={under review}\n}" }, "abstract": { "value": "In the current landscape of machine learning, we face a “model lake” phenomenon: a proliferation of deployed models often lacking adequate documentation. This presents significant challenges for model users attempting to navigate, differentiate, and select appropriate models for their needs. To address the issue of differentiation, we introduce Model Manager, a framework designed to facilitate easy comparison among existing models. Our approach leverages a large language model (LLM) to generate verbalizations of two models' differences by sampling from two models. We use a novel protocol that makes it possible to quantify the informativeness of the verbalizations. We also assemble a suite with a diverse set of commonly used models: Logistic Regression, Decision Trees, and K-Nearest Neighbors. We additionally performed ablation studies on crucial design decisions of the Model Managers. Our analysis yields pronounced results. For a pair of logistic regression models with a 20-25\\% performance difference on the blood dataset, the Model Manager effectively verbalizes their variations with up to 80\\% accuracy. The Model Manager framework opens up new research avenues for improving the transparency and comparability of machine learning models in a post-hoc manner." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Model Manager", "Verbalization", "Differentiation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a93bacde732f253f7267e59bf288ad2c5c98b1c0.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/1648635c36a26ec27b0894c64ff885e3102922ff.zip" }, "title": { "value": "A Language Model based Model Manager" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xpmDc76RN2
Understanding Optimization of Operator Networks with Variational Loss for Solving PDEs
main
Active
Restriced Strong Convexity;Operator Learning;Variational Loss;Scientific machine learning
applications to physical sciences (physics, chemistry, biology, etc.)
1;3;3
3;3;4
2;2;2
2;1;2
2;2;1
2.333333
3.333333
2
1.666667
1.666667
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please, see the Weaknesses question." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-motivated. It uses a newly introduced optimization framework to provide guarantees on a largely unexplored problem.\n- The paper's organization is suitable for the presentation of the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper uses the recently introduced framework of optimization based on RSC to apply it to a new problem: the training of a neural network that approximates the solution of a specific class of PDE problems. Besides the optimization guarantees, which is the main contribution of the paper, the paper also highlights the role of the preconditioning on the linear system of equations that represents the learning problem, and also proposes an adaptive weight algorithm. Experiments are done to show performance comparisons between different settings of preconditioning and the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Regarding the theoretical results, I have serious concerns about two specific areas that have not been proved and which imply the proofs are incomplete and therefore not publishable yet.\n- In Lemma 3.2, it has not been proved that the set $Q_q\\\\cap B^{Spec}_{\\\\rho,\\\\rho_1}\\cap B^{Euc}_{\\\\rho_2}$ is **non-empty**. I haven’t found the proof for this in the Appendix. If this can’t be proved, then there is no existence of $\\\\theta’$ and so the equation in Lemma 3.2 is void of meaning, as well as the optimization guarantees in Theorem 3.4 that uses this result—i.e., the main contribution of the paper. The proof is needed urgently for the main theoretical results to make sense.\n- There is a big issue in Theorem 3.4. When looking at its proof, it is important to prove that $\\\\delta_t$ is less than $1$. However, in line 954, when trying to prove this, the authors simply claim that because of the “definition of $Q_{q_t}(\\\\theta_t)$”, we have that “$\\\\theta_{t+1}\\\\in Q_{q_t}(\\\\theta_t)$”. This claim is not evident and requires a proof—in my opinion, it is a baseless claim. The RSC set $Q_{q_t}(\\\\theta_t)$ proposed by the authors is different than the RSC set established in Banerjee et al., 2022 (the work that the authors based their results on), and so the results for the RSC set in Banerjee et al., 2022 cannot be applied “immediately” to solve this problem. A proof is needed.\n\nThere is a lot of notation problems that may have strong repercussions in the understanding of the paper and possible of the validity of its results. It also shows the paper does not seem to have been proofread. 
As it is, the paper is not in a suitable state for publication, mainly because of the issues I have found below:\n- Line 100 mentions that $M$ is the dimension of the input feature (or data) $\\\\omega$; however, in equation (6) and later in the same paper $M$ denotes the sample size! This is confusing. Moreover, in Assumption 3, $M$ is used again, this time as a constant that simply “exists” for every $\\\\omega$.\n- Another big problem is the use of the symbol $\\\\omega$. In line 100 it is defined to be an “input feature” which seems to be just data, since it is used in the equation of a neural network. The problem is that then around line 152, it is said to be a member of the “parameter space”, which is a confusing term given that the paper previously related $\\\\omega$ to data. Could the authors state more precisely what $\\\\omega$ represents? I also want to suggest explicitly stating that $\\\\omega$ is a vector.\n- Many typos: for example, delete “through” in line 031; in line 108 it should say “of all layers”; the “(\\\\mathbf{x})” missing in equation (1); the dimension of $\\\\bar{\\\\theta}$ in line 234 should be $p$ instead of $m$; the subscript $2$ must be added to the norm notation in Definition 3.1; remove the $M$ in the denominator from the fraction in lines 1019-1024. The authors should afterwards proofread the whole paper and correct all writing errors.\n\nFinally, another considerable issue is that the simulations are missing a stronger connection with the guarantees established in Theorem 3.4 and the RSC framework. Theorem 3.4 shows that the convergence rate depends on $r_t$, which depends on a difference between $q_t$ and a quantity that has $m$ in its denominator. Thus, there seems to be a dependency of the convergence rate on the neural network’s width $m$. Simulations should be done for each of the five trial settings described in Section 5 with the width $m$ varied. The authors should then comment on any observed change in the convergence rate. This would make the simulations explore further the implications of the theoretical derivations.\n\nOther issues:\n- The symbols $\\\\mathbf{A}$ and $q_t$ are introduced in both the abstract and the Introduction without any explanation of what they really are. I would recommend either removing them or explaining them earlier.\n- Add the Banerjee et al., 2022 citation to the end of the first sentence in line 071. \n- Update the Banerjee et al., 2022 citation since I found a published version in ICLR 2023. \n- In the contributions, both “training efficiency” and “training performance” are mentioned: what are the differences between those two terms? They must be explained.\n- In Section 2.1, specify that $\\\\phi$ is applied entry-wise when it has a vector as an argument.\n- What does “uniformly elliptic” mean in line 121?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Abstract appears to refer to A before it is defined. Clasically, A is referred to as the stiffness matrix - so should be replaced.\n* Line 99 - you say that input features live in Omega - what is Omega? Is it a set in a vector space? Is it Euclidean? Later on you say it has to be compact. Even later you claim that its L_2 norm must be bounded - does that mean it is a compact subset of euclidean space? \n* Line 99 - do you mean to say that $m_0$ is constant $M$? What does 'to be determined later' mean?\n* Equation (2) you seem have just dropped the boundary condition in the variational formulation - why is that?\n* Assumption 1 appears to have a typo - should be $\\beta_\\phi$ instead of $\\beta_\\sigma$. \n* Assumption 2 - what happened to the boldening of W and V?\n* Definition 3.1 - $\\bar{\\theta}$ is meant to live in $\\mathbb{R}^{p}$, not $\\mathbb{R}^{m}$?\n* Theorem 3.4: Assumptions A1 and A2 appear to not be discussed at all - what is their significance, why must they hold in practice? Why do you denote the minimization as arginf and not argmin - these are compact sets, and loss is lower bounded? \n* Theorem 3.4 $q_t$ is not defined!\n* Line 305: \"closely related to the convergence rate\" - what does that even mean? Either explicitly state what you mean or dont mention.\n* Line 334: broken reference\n* Line 375 what is \\Omega((NM)^2)? \\Omega is the feature space?\n* Line 380 - you claim that it has been shown that controlling this is challenging in practice - if that is so, please do provide references. \n* Section 3 and 4: It appears that things are repeated a lot.\n* Algorithm 1 appears to not be written properly, and all actual calcualtions are ommited. What is more, the gradient descent itself is never stated. \n* The preconditionings B and C do not appear to be mentioned in the main text at all. \n* Why does the relative L2 error for Trial A appear to be 1e-4 while the training loss appears to be 1e-2? \n* Why does Trial A begin with a much larger loss value? \n* Are the plotted losses always $L^M$ for all runs or $\\hat{L}^M$ for some?\n* It appears that Trial B-E all operate within the same margin of error, especially for convection diffusion - it seems that adaptive weighting only provides marginal improvement?\n* Why is the training loss curves for Helmholtz so different to the original ULGnet paper?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper introduces a new theoretical framework for analyzing the optimization of operator networks with variational losses using the Restricted Strong Convexity theory. This provides a clear image of impact of the condition number on convergence of the algorithm which appears to be in agreement with numerical experiments. The paper further proposes an adaptive preconditioning strategy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyzes the optimization of operator networks for solving elliptic PDEs with variational loss functions using the Restricted Strong Convexity framework. It proves convergence, establishes the impact of the condition number on the convergence rate. 
An adaptive algorithm is proposed to further enhance convergence by adjusting weights in the variational loss function." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is extremely difficult to read, written sloppily, and appears to be significantly underdeveloped. The theory section appears to provide almost no new insights, while the numerics are lacking in both quality and quantity. I expand below: \n\nThe effect of the condition number of a linear system on the optimization process when minimizing a linear regression objective is well known and does not provide any new insights. The proposed adaptive weight algorithm can be interpreted simply as time-dependent pre-conditioning. However, once such a time-dependent pre-conditioning is introduced, theorem 3.4 no longer holds as the algorithm is no longer that of gradient descent, and a new result is required for the updated algorithm - however such an adaptation is not present in the paper. \n\nThe numerical section appears to cover only one experiment (there appears to be another one in the appendix, which paints a different picture, showing that adaptive weighting does almost no better than other preconditioning strategies), reporting only a single relative L2 error (is this over a test set or a single solution?) - making it extremely unclear whether the results are generalizable. The trial errors in the table do not appear to agree with the graphs provided; moreover, the training curves do not appear to agree with the original ULGNet of Choi et al. 2023. What is more, Trial A appears to begin at a much larger loss value, raising the question of whether the loss values reported always correspond to the same loss or different losses - i.e. is it $L^M$ for all runs or $\hat{L}^M$ for some? \n\nI further highlight other issues in detail in the questions section." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Is there a way to get some expressive bounds on $r_t$? In the current state of Thm 3.4, I can't really say that this characterizes the behaviour of the optimization process since all the constants have been hidden in this quantity.\n- The approach is only compared to one other approach (ULGNet) and not against standard baseline models (like FNOs or DeepONets)\n- Eq. (1): the equation is not self-adjoint due to the term $\nabla u$, contrary to what is stated above\n- There should be some regularity condition on the coefficient function (e.g. L^p) to guarantee existence of a solution $u$ in a suitable space.\n- Eq. (1): define $d$.\n- The paragraph in lines 145-155 is very confusing. 
What's Omega?\n- line 179, \"classical numerical theory\"?\n- The beginning of Section 3 lacks motivation and discussion of previous works.\n- Line 334: section number reference is missing\n- lines 432-464: all of this is speculative; there is no proof of a uniform lower bound for q_t, and this is just a simple consequence of the assumption\n- Figure 1: can we not improve the optimization behaviour by tuning the learning rate scheduler?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The topic is important and, to the best of my knowledge, there have not been many works analyzing preconditioning for operator learning techniques. The authors provide some theory and experiments that demonstrate that preconditioning yields better convergence rates and training stability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper analyzes the optimization of neural operators for learning solution operators of elliptic PDEs using a variational loss function. The authors use restricted strong convexity (RSC) theory to provide theoretical guarantees for convergence and training stability, and investigate the role of the condition number of the diffusion coefficient $A$ in the optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- There is some confusion in the introduction between physics-informed neural networks (for solving known PDEs by minimizing the residual), and neural operators (for learning solution operators without PDE knowledge)\n- The paper is not very well structured, and the relevant parts are not very well explained whereas space is wasted on formally introducing neural networks\n- The paper is not self-contained and very difficult to understand without reading the paper of Banerjee et al. 2022\n- There is some concern about the novelty of the results concerning the condition number of A. It is well known in numerical PDE solvers that a smaller condition number improves convergence and that preconditioning helps to obtain a smaller condition number\n- The main results on convergence rate in Thm 3.4 are not very useful as the convergence rate $r_t$ depends on the quantity $q_t$, which itself depends on the gradient of the loss function w.r.t. the parameters. Hence, one cannot really characterize this as a convergence result given that we have no control over $r_t$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024understanding,\ntitle={Understanding Optimization of Operator Networks with Variational Loss for Solving {PDE}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xpmDc76RN2},\nnote={under review}\n}" }, "abstract": { "value": "In this paper, we analyze the optimization of operator networks for solving elliptic PDEs with variational loss functions. While approximation and generalization errors in operator networks have been extensively studied, optimization error remains largely unexplored. \nWe apply Restricted Strong Convexity (RSC) theory to rigorously examine the optimization dynamics of operator networks trained with variational loss, providing theoretical guarantees for convergence and training stability. 
\nWe further investigate the role of the condition number of $A$ in optimization and demonstrate that preconditioning strategies significantly improve convergence rates, establishing a solid theoretical basis for the empirical benefits of preconditioning. We also address the lower bound of a key quantity, $q_t$, which ensures convergence. \nTo prevent $q_t$ from vanishing, we propose an algorithm that adaptively incorporates additional weights into the variational loss function, leveraging values already computed during training, thereby avoiding any extra computational costs.\nFinally, we validate {our theoretical assumptions through numerical experiments, demonstrating their practical applicability} and confirming the effectiveness of preconditioning, with significant improvements in training performance and convergence rates." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Restriced Strong Convexity", "Operator Learning", "Variational Loss", "Scientific machine learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/41ecaccb7d6203c507529556246014ea80b667aa.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Understanding Optimization of Operator Networks with Variational Loss for Solving PDEs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
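Several of the reviews above note that the link between the condition number of a linear system and the speed of gradient-based optimization is classical, and that preconditioning is the standard remedy. The numpy sketch below illustrates that point on a synthetic SPD matrix with a simple Jacobi (diagonal) preconditioner; the matrix construction, the spectrum, and the contraction-factor formula for an exactly quadratic objective are illustrative assumptions, not the paper's stiffness matrix or adaptive-weighting algorithm.

```python
# Hedged illustration: an ill-conditioned SPD matrix, its Jacobi-preconditioned
# version, and the per-step error contraction of gradient descent on the
# corresponding quadratic objective 0.5*x^T M x - b^T x.
import numpy as np

rng = np.random.default_rng(0)
n = 50
C = 0.1 * rng.standard_normal((n, n))
B = C @ C.T + np.eye(n)                        # well-conditioned SPD "core"
scales = np.logspace(0, 3, n)                  # badly scaled coordinates
S = np.diag(np.sqrt(scales))
A = S @ B @ S                                  # ill-conditioned SPD system matrix

D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(A)))
A_pre = D_inv_sqrt @ A @ D_inv_sqrt            # Jacobi-preconditioned matrix

for name, M in [("original", A), ("Jacobi-preconditioned", A_pre)]:
    kappa = np.linalg.cond(M)
    # With the optimal constant step size, gradient descent on a quadratic
    # contracts the error by (kappa - 1) / (kappa + 1) per iteration.
    print(f"{name}: cond = {kappa:.2e}, per-step contraction = {(kappa - 1) / (kappa + 1):.5f}")
```

The contraction factor approaching 1 for the unpreconditioned matrix is the quantitative version of the reviewers' remark that a large condition number slows training, which is why they view the preconditioning result as expected rather than novel.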
xqEeGja6zq
Components Beat Patches: Eigenvector Removal for Robust Masked Image Modelling
main
Active
Self-supervised Representation Learning; Unsupervised Representation Learning; Visual Representation Learning
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;8;8
4;2;4;4
2;1;3;4
1;1;3;3
3;3;4;4
5.5
3.5
2.5
2
3.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "In the experiments exploring effects of masking ratios, as I understand it in the $PMAE_{rd}$ case, a random percentage of variance between $[10,90]$ is masked. I'm a bit confused as to why in Figure 5, which presents the impact of masking percentage, the classification accuracy results only shown for percentages between $[10, 50]$. What happens when a higher percentage of variance is masked?\n\nHow do you anticipate PMAE would perform with nonlinear transformations, e.g. kernelized PCA with an RBF kernel?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Originality\n\nThe proposed method is a relatively straightforward combination of previously established approaches, but integrated in a novel, clever, and elegant way. \n\n\n- Quality\n\nThe authors provide a thorough review of previous research leading to and motivating this work, as well as a nice discussion of the broader context of related research. \n\nResults support claims made throughout the paper.\n\n\n\n- Clarity\n\nThe paper is very well written and easy to follow; the foundations are solid and motivation is clear.\n\n\n- Significance\n\nThe proposed method avoids extensive hyperparameter tuning, known to be challenging in established MIM/MAE regimes, as ratio / size of image patches masked strongly influences performance on different downstream tasks. Experiments presented using MAEs parameterized with range of masking ratios highlight this -- training models with the standard convention of masking $75$% of image patches often results in poorer classification accuracy, suggesting the necessity of tuning beyond this accepted norm. \nIn contrast, PMAE doesn't require much hyperparameter tuning and seems to perform quite well straight off the shelf.\n\n\nThe idea is simple yet solid, even drawing intuitions from early research in image processing / Eigenfaces, which I can imagine inspiring future/new directions in representation learning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of Masked Image Modeling, which classically entails masking out patches of pixels within input images and training a model to reconstruct these missing values based on the visible pixels in a self-supervised fashion. The authors propose a novel alternative to the masking of pixel patches, instead suggesting that the data first be transformed into a latent subspace i.e. projected onto its principal components, and masking operations be done on the component level instead of pixel level. 
The idea is that because the principal components can represent global correlations, masking individual components still retains information at some level of all pixel locations, as opposed to the masking of entire patches of pixels, which could remove e.g. an entire object. In essence, this allows the model to still be exposed to information from all pixel locations during training, leading to robust representations that are more likely to contain meaningful information needed for downstream tasks, e.g. classification. The approach, termed PMAE, is evaluated against the vanilla Masked Autoencoder in an extensive set of experiments for image classification on multiple natural and medical image sets, showing clear and often significant improvement in nearly all settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I wonder about the scalability of PMAE when applied to larger images, with the potentially prohibitive cost of the eigendecomposition, e.g. $O(pixels^3)$. Could the method still perform well with low-rank approximations?\n\nSome of the presented results are a bit unclear to me, see 'questions'." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "This paper is well organized and written. The perspective of masking principal components is interesting." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This p" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "It does not make sense to me that we can recover removed principal components from other components. This is because, unlike patches, these principal components are independent of each other. It is unreasonable to say we can recover a random variable X from another independent random variable Y. This is my main reason for recommending rejection, but I am open to changing my score if I am convinced. \n\nBesides, reporting experimental results on ImageNet would be more convincing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- The method uses a lossless scenario for the PCA. The rationale for this is understandable, but I am wondering if this leads to practical problems (e.g. for very small eigenvalues, their ordering becomes random, etc.)?\n- Why not evaluate other similar transforms (e.g. Fourier or wavelet transforms)? \n- How would the pre-training perform on large, high-resolution images?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The idea for the masked image modelling strategy is simple and intuitive. It is presented in an easy-to-follow fashion.\n- The paper is generally well-written and clear (except for the points below).\n- The evaluation strategy is solid.\n- The use of SOTA baselines is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new approach to pre-training neural networks in the context of masked image modelling. Instead of masking out random patches in the image, the authors propose a masking strategy in which the principal components of the image are masked out and the networks are then trained to recover these principal components. The paper empirically demonstrates the advantage of this approach on natural and medical images in the context of image classification." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The idea of using PCA as an invertible transformation into a latent space is sensible, however there many alternative choices for such a transformation (e.g. Fourier or wavelet transforms). \n- The evaluation scenario focuses on classification as a downstream task. This is typically a task that relies on global information. However, other downstream tasks, such as object detection or semantic segmentation, require local information, and here, the proposed may perform less well. However, this is not tested.\n- The datasets used for evaluation contain very small images (e.g. CIFAR10, MedMNIST)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you explain why it makes sense for MAE to achieve near-SOTA performance on ImageNet1k in the original paper, but performs so poorly on the simpler datasets CIFAR10 and TinyImageNet?\n\nAs stated in \"weaknesses,\" I think we need to see PMAE's performance on ImageNet1k to reach a conclusion about it's superiority over MAE." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is well-written, and provides a clear and agreeable motivation for masking principal components instead of image patches." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors present a variant of the masked autoencoder where instead of masking a subset of image patches, they mask a subset of principal components. They assume that the masked and unmasked principal components are likely to be correlated in a way that is pertinent to the class, which improves downstream classification performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main purpose of self-supervised learning is to learn a representation that can be fine-tuned to achieve **strong** downstream performance. This is not demonstrated in this paper. Putting it bluntly, it is not compelling to achieve ~60% accuracy on CIFAR10 and ~20% accuracy on TinyImageNet. Instead of using significant resources to train a vision transformer with this approach, it's more practical to train a small convolutional net to achieve higher performance.\n\nIn the original masked autoencoder paper, the authors demonstrate very strong downstream performance on ImageNet1k (not the tiny version), which makes their approach compelling. In light of this, the fact that MAE allegedly performs so poorly on CIFAR10 and TinyImageNet is a bit suspicious. If it is indeed the case that MAE performs poorly on smaller images, we still need to see PMAE do similarly well on ImageNet1k, but these experiments are not in the paper." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a novel masking strategy for Masked Image Modelling approaches; The proposed method operates on principal components rather than spatial patches leading to significant improvement on downstream image classification performance." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024components,\ntitle={Components Beat Patches: Eigenvector Removal for Robust Masked Image Modelling},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xqEeGja6zq},\nnote={under review}\n}" }, "abstract": { "value": "Masked Image Modeling has gained prominence as a powerful self-supervised learning approach for visual representation learning by reconstructing masked-out patches of images. However, the use of random spatial masking can lead to failure cases in which the learned features are not predictive of downstream labels. In this work, we introduce a novel masking strategy that targets principal components instead of image patches. The learning task then amounts to reconstructing the information of masked-out principal components. The principal components of a dataset contain more global information than patches, such that the information shared between the masked input and the reconstruction target should involve more high-level variables of interest. This property allows principal components to offer a more meaningful masking space, which manifests in improved quality of the learned representations. We provide empirical evidence across natural and medical datasets and demonstrate substantial improvements in image classification tasks. 
Our method thus offers a simple and robust data-driven alternative to traditional Masked Image Modelling approaches." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Self-supervised Representation Learning; Unsupervised Representation Learning; Visual Representation Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/282ad0b069faeebf349eec01d0f59a51d572ee39.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d226e9e7479ae3121b5d12e5835f87b818b6c403.zip" }, "title": { "value": "Components Beat Patches: Eigenvector Removal for Robust Masked Image Modelling" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
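The reviews above describe the core mechanism as masking principal components rather than pixel patches. The sketch below spells out one way such component masking can be realised with a lossless PCA on flattened images; the random placeholder data, the roughly 50% variance budget, and the choice of reconstruction target are illustrative assumptions rather than the paper's exact recipe (which, per the reviews, samples the masked variance fraction during training).

```python
# Hedged sketch of component-level masking: project flattened images onto all
# principal components, zero out a random subset covering ~50% of the variance,
# and map back to pixel space so the visible input still touches every pixel.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.standard_normal((512, 16 * 16))       # placeholder flattened "images"

pca = PCA()                                        # lossless: keep all components
codes = pca.fit_transform(images)                  # shape (n_samples, n_components)

# Pick a random subset of components whose explained variance sums to roughly 50%.
order = rng.permutation(codes.shape[1])
cumulative = np.cumsum(pca.explained_variance_ratio_[order])
masked = order[: np.searchsorted(cumulative, 0.5) + 1]

masked_codes = codes.copy()
masked_codes[:, masked] = 0.0                      # mask components, not pixel patches
visible_images = pca.inverse_transform(masked_codes)  # every pixel keeps partial information
targets = images - visible_images                  # one possible reconstruction target
print(f"{masked.size} of {codes.shape[1]} components masked")
```

Because every pixel of `visible_images` still carries a contribution from the unmasked components, this is the sense in which the reviews say the model remains "exposed to information from all pixel locations", in contrast to patch masking, which can delete an object entirely.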
xrWOR5wSOz
Replacing Implicit Regression with Classification in Policy Gradient Reinforcement Learning
main
Active
reinforcement learning; policy gradient RL; actor-critic
reinforcement learning
3;5;5;8
4;4;3;3
2;3;3;3
1;2;2;3
2;3;3;2
5.25
3.5
2.75
2
2.5
-0.70014
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Why is the performance not competitive?\n\nWhy do you consider the method as classification, instead of just thinking of it as changing the policy representation to a discrete one, and applying policy gradients with this new representation?\n\nSmall comment:\nEquation 4 requires a stop gradient on $A$. Otherwise it is not an unbiased estimator of equation 1." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The recent results about critic learning with classification are intriguing, so it is interesting to think about whether they are also applicable to policy-based learning.\n- Related work appears adequately discussed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The recent work by Farebrother et al (2024) showed that instead of training value functions by regression, discretizing the space and using classification instead can yield gains in terms of scalability and performance. Following along these lines, the current work aims to perform something similar for policy-based reinforcement learning. In particular, if we consider a Gaussian policy, the policy gradient w.r.t. $\\mu$ is obtained by differentiating a surrogate loss that roughly looks like $A(x-\\mu)^2$, which seems like a weighted regression loss (if we consider having target samples at all $x$). Along these lines, the authors change the objective to a weighted classification loss by discretizing the space, and swapping out the $(x-\\mu)^2$ bit with a cross-entropy loss.\nThey performed experiments on applying this idea together with the A2C algorithm on continuous control tasks such as acrobot, mountain car, halfcheetah and ant.\nThe work also included some theoretical derivations showing that the gradient magnitude is smaller compared to a Gaussian policy.\nThere were also some other technical additions, e.g., the discretization is performed coordinate wise separately for each action dimension, the logits in each bit affect the probability in nearby bins as well etc. The performance on HalfCheetah went up to around ~2000 (on the Gymnasium version)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The performance of the proposed method on HalfCheetah is only around ~2000, whereas typical good performance on the Gym version is over 10000. Performance on other tasks like Acrobot and Ant, also does not seem close to competitive with good performance on these tasks. Moreover, whilst in the current paper, the A2C benchmark achieves around ~1000 on HalfCheetah, there exist other implementations (e.g., https://tianshou.org/en/v0.4.8/tutorials/benchmark.html) where the plain A2C also achieves ~2000, similar to the proposed method in the current paper. 
Furthermore, e.g., in Ant the performance of the proposed algorithm in the current paper is ~1500, whereas the tianshou benchmark A2C gets over 3000. From this point of view, the result is not convincing both in terms of whether it really improves over A2C, and also in terms of whether it would improve performance on algorithms that achieve more competitive performance, e.g., PPO.\n- I was not convinced by the theoretical results. These results showed that the gradient magnitudes become smaller; however, gradient magnitudes themselves may not tell us about the optimization difficulty. Even just rescaling the objective function will change the gradient magnitudes, but will not change the optimization problem. Perhaps some other metric like the condition number, etc. would have been a more convincing theoretical result to me.\n- I didn’t fully see why the method is described as a weighted classification instead of just saying that you are using a discrete policy representation and computing the policy gradient with your representation. The cross entropy is basically the log of the picked actions, but this is the same as in the policy gradient, so couldn’t the method just be seen as switching the Gaussian probability distribution to a discrete one, and applying the standard policy gradient methods?\n- The experimental work is not thorough, and is mainly looking at return curves. Providing other kind of experimental evidence in addition to reward curves would be more convincing." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is intellectually interesting, as it proposes some novel angle to think about continuous control based on Gaussian distribution;\n2. The presentation is clear;\n3. The empirical results look promising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The study introduces an innovative approach to the policy gradient (PG) objective utilized in Gaussian-based policy models within the field of reinforcement learning (RL). It redefines the conventional PG objective by framing it as a weighted squared loss function. In this formulation, the squared loss measures the discrepancy between the chosen action and the action prescribed by the policy, while the weighting factor is derived from the advantage measure, augmented by an additive constant. This conceptual reconfiguration enables the authors to devise a new surrogate function that transitions the methodology from a regression-based to a classification-based model. The paper substantiates the practical utility and enhanced performance of these proposed methods through a series of empirical validations." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I do not have many comments for this paper; so I will keep it short. My primary concerns are below. \n\n1. The reason for why softmax could be beneficial is not clear. Softmax, similar to sigmoid function, could have saturated gradient issue. This has been pointed out by existing work. The author cited one by Shivam et al. in the conclusion, but note that this issue is not restricted to nonstationary learning setting. \n\n2. The paper present some theoretical argument regarding bounded gradient norm, I doubt if it really supports the claim of improved sample efficiency. Note that a strong gradient signal could intuitively improve learning efficiency. \n\nNote that the cited paper by Ehsan et al about histogram loss, they claim improved generalization, rather than sample efficiency. \n\n3. Given the popularity of the deterministic policy, it might be interesting to see how the proposed method compares against deterministic control. Though this is not necessary to support the main claims of this paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Did you run 20 trials with different seeds, or were the seeds the same for each trial?\n\nI expected the method to perform poorly in higher-dimensional settings, yet the experiments show the opposite. Is there a theoretical justification for this result?\n\nI will increase my score if you address my concerns in the weaknesses part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and effectively discusses its connections with other methods in the literature, explaining the motivations behind its approach. It provides a thorough theoretical analysis of gradient norms, showing why the classification-based surrogate loss might offer advantages. Additionally, experiments on environments (such as continuous Mountain Car, HalfCheetah, and Ant) demonstrate that the classification-based approach generally improves data efficiency. The paper also includes thorough sensitivity and exploration ablation studies, along with well-documented hyperparameter tuning." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates whether replacing implicit regression in policy gradient can improve the training efficiency of policy learning. It introduces a surrogate loss used to reformulate the implicit regression of continuous actions as a classification of discrete actions. 
They empirically investigate the use of cross-entropy in the introduced loss as an alternative to Gaussian policies." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Discretizing action spaces may lead to performance limitations in domains that require precise control or have high dimensionality, which the authors acknowledge.\nThe contributions appear incremental, as the method builds heavily on prior work, particularly Imani and White (2018). The term \"novel\" used in the text may be an overstatement, as the core methodology closely follows established ideas.\nThe analysis focuses primarily on deriving bounds for gradient norms, suggesting that a smaller bound should lead to greater stability. However, this relationship is indirect, and the analysis does not rigorously connect the smaller gradient norms to concrete improvements in convergence rate. Additionally, key assumptions underlying the theoretical results are not clearly and explicitly stated, making it difficult for readers to assess the validity of the analysis. Furthermore, the propositions are not self-contained or fully descriptive.\n\nMinor weakness: The paper lacks clarity in section 5.2.1. It is unclear whether the 20 trials use different seeds, which is important for interpreting the robustness of the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The author assumes that the policy network’s output is $l$-Lipschitz, which seems to be a strong assumption. Could you give some insight into why the assumption holds in the experiments?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "*Theory*: The paper's proposal to replace regression with classification in policy gradient algorithms is compelling, backed by theoretical bounds showing reduced gradient norms for the cross-entropy-based loss. This theoretical contribution offers insights into policy optimization algorithms in continuous control.\n\n*Experiments*: The paper presents comprehensive experimental results across several continuous control environments, effectively showcasing the advantages of the proposed classification-based surrogate loss. The consistency of the performance gains in data efficiency, stability, and convergence across diverse tasks solidifies the approach's applicability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents an innovative approach to enhancing the efficiency of policy gradient reinforcement learning by reformulating the implicit regression underlying the Gaussian policies commonly used in continuous control as a classification problem. 
The authors introduce a novel surrogate loss, leveraging cross-entropy loss and softmax policies over discretized actions, and provide both theoretical analysis and empirical evidence supporting this new approach. Overall, the paper addresses a relevant challenge in reinforcement learning, with convincing results and a clear methodology." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The notation $ l $, used initially to represent the Lipschitz constant of the policy network’s output, is later redefined as an index number (line 194). This dual usage is confusing and could benefit from more consistent notation.\n\n2. Certain variables, like $c$, could be recalled where they are used for better readability. For instance, $c$ (defined in line 196) reappears in line 233, but its earlier definition is difficult to locate due to its inline placement in the formula." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "The policy gradient surrogate loss can be interpreted as a weighted regression problem; we show that reformulating as a weighted classification problem leads to improved policy gradient learning." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024replacing,\ntitle={Replacing Implicit Regression with Classification in Policy Gradient Reinforcement Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xrWOR5wSOz},\nnote={under review}\n}" }, "abstract": { "value": "Stochastic policy gradient methods are a fundamental class of reinforcement learning algorithms. When using these algorithms for continuous control it is common to parameterize the policy using a Gaussian distribution. In this paper, we show that the policy gradient with Gaussian policies can be viewed as the gradient of a weighted least-squares objective function. That is, policy gradient algorithms are implicitly implementing a form of regression. A number of recent works have shown that reformulating regression problems as classification problems can improve learning. Inspired by these works, we investigate whether replacing this implicit regression with classification can improve the data efficiency and stability of policy learning. Toward this end, we introduce a novel policy gradient surrogate objective for softmax policies over a discretized action space. This surrogate objective uses a form of cross-entropy loss as a replacement for the implicit least-squares loss found in the surrogate loss for Gaussian policies. We extend prior theoretical analysis of this loss to our policy gradient surrogate objective and then provide experiments showing that this novel loss improves the data efficiency of stochastic policy gradient learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "reinforcement learning; policy gradient RL; actor-critic" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a8af9f33ae89b3f11dec03be614f63321b94aa1c.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Replacing Implicit Regression with Classification in Policy Gradient Reinforcement Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xrXci5YGm7
Emergent properties with repeated examples
main
Active
transformers;learning on repeated examples;emergence
foundation or frontier models, including LLMs
3;5;5;6
4;3;3;3
2;2;3;3
1;2;3;3
3;3;3;3
4.75
3.25
2.5
2.25
3
-0.927173
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses part." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The experiments are conducted on three algorithmically generated datasets of math tasks, which is an ideal setup of controlled experiments. The experiment setups are clearly described, and the results are well presented and explained. \n\n2. The empirical findings of the beneficial of repeated examples have decent practical meanings. The success of two set training is interesting and novel to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper empirically shows that the repetition of training data can be beneficial in certain setups. The authors conduct experiments on three algorithmically generated datasets, and show that model trained with small data budgets outperforms model trained on larger data budgets. The authors also identify a two-phase training algorithm that gives faster training and better performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My major concern is that the experiments are only conducted on algorithmically generated synthetic data, instead of real world datasets. This makes me not fully convinced that the experimental findings are universal and can be transferred to more realistic setups. \nSpecifically, I am wondering whether the main findings, especially the success of two set training, also applies to real language datasets. Do you have empirical results to support that? Besides, the three dataset are all math problems. Do you think that this specific type of training data might have a positive influence on the performance of repeated data? \n\n2. Does the emergent property in the title mean that the observed phenomena only happens for training at scale? Do you have found any criteria that can indicate under what circumstance can repeated data or two-set training be beneficial?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "List of questions (including some repetitions of the above):\n- Do the authors believe that their findings hold in tasks that do not require learning an algorithm? 
For example in vision-related tasks where memorization could be more useful (and sufficient in some cases)?\n- Do they know of an example where repetition or the two-step procedure is not beneficial? One could notably think of tasks where curriculum learning is decisively effective, then shouldn’t random repetitions at least fare worse than curated ones? In other words, can the two-step procedure work well when curriculum learning works well?\n- Can the two-step procedure be performed by showing only the repeated samples in a first learning phase, and then moving on to the more diverse samples in a second phase? i.e. focus first on discovering some ‘rules’, before generalizing these rules? I believe that finding good results in such a procedure would go in the direction of a loss-landscape-based interpretation of the phenomenon, in the sense of the Dandi et al. paper mentioned above. This would also further clarify the difference between what the authors observe and grokking (in addition to the different timescales and sample sizes they mention in Sec. 2).\n- Can the authors clarify how accuracy is never affected by overfitting in the GCD task? How general do they expect this behavior to be?\n- Can the authors clarify what they mean by a transformer being able to identify deja vu and jamais vu examples? Why should the architecture ‘care’?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Overall, this paper explores an important aspect of machine learning—understanding why algorithms are usually expected to suffer from repetitions whereas human and animal learning appear to be highly dependent on them for learning—and could therefore be a significant contribution to this puzzle as it provides counter-examples that deserve to be better understood. By relying on the transformer architecture, which has become ubiquitous in many practical implementations, the authors also maximize the chance that their findings, and the training protocol they propose, may be taken advantage of in practical settings. The study of mathematical tasks provides a well-controlled environment where memorization can be identified, while properly learning an algorithm is necessary to achieve good generalization. The paper is well-written, and the author’s conclusions are clearly stated." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The present paper deals with the essential question of the role of repetition in the training data of machine learning models. More specifically, the work delves into the effect of introducing repeated examples when training transformers on three mathematical problems: finding the greatest common divisor between two numbers, performing modular multiplication, and computing the eigenvalues of (small) matrices. Through extensive numerical experiments, the authors show that repetition during training is very valuable for transformers to perform well on these tasks, and that might be necessary for learning to emerge in the case of modular multiplication. In light of these results, they propose a two-set training procedure, in which training examples may be drawn from either a large set of examples that will be seen only a handful of times or from a much smaller set of examples repeated many times during training. 
Consistent with their initial experiments, they find that this procedure improves the performance of trained networks for given data budgets, and in some cases enables learning. The authors finally consider variations of this procedure, notably the natural idea of curating the repeated set to further improve performance. Surprisingly, they find that curating does not provide further gains, or may be detrimental." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I believe the main limitation of this work lies in the limited number of tasks it considers, undermining the generality of its conclusions. In particular, the authors repeatedly make statements such as ‘models trained on smaller sets of repeated examples outperform models trained on larger sets of single-use examples’ (abstract); ‘smaller data budgets and more frequent repetition allow for faster learning, but also for much better performance’ (p. 5); ‘learning emerges through repetition’ (p. 5) etc. Based on the presented results however, it appears that these claims should be slightly tempered: there is a tradeoff between the repetition and the diversity that is necessary to avoid overfitting, as clearly illustrated in Fig. 2. While the authors show that repetitions at a given data budget may be beneficial, this relation is strongly non-monotonous, and in the greatest common divisor task one notably still requires a large number of unique samples to perform correctly (and as apparent in Fig. 4 the balance between repetition and diversity is hard to strike in this problem!). Clarifying this point and emphasizing that it is a tradeoff would make the paper more convincing, and would add to the relevance of the two-set procedure which precisely tries to provide the ideal tradeoff. \n\nIt also remains unclear whether the two-set procedure is relevant in settings where the transformer must not necessarily learn a clear-cut algorithm. For instance, could such a procedure be effective in image classification and more generally vision-related tasks? It seems plausible that the need for repetition is related to the hardness of the task at hand, which is not discussed in the paper. \nIn that respect, the difference in hardness of the three tasks exposed is not straightforward to understand: the authors notably state that the computation of eigenvalues is the hardest task they consider, yet the smallest model (in number of parameters) is used to tackle it. Therefore, it is not evident how the eigenvalue problem contributes to the clarity of the paper and its conclusion, also considering the relatively poor results that the authors find (4/30 as the maximum trained model success rate if I understand correctly) and the lack of clear plots dedicated to this third problem, despite it being expected to be the most difficult to tackle.\n\nFrom a more technical standpoint, the accuracy metric chosen by the authors is not easy to interpret, notably in Fig. 2. While Appendix A provides some explanation as to how the chosen accuracy may stay fixed while the test loss explodes, the fact that it is on the eigenvalue computation and not the GCD problem for clear interpretation of Fig. 2 makes it still unsatisfactory. Besides, Fig. 6 does show some examples where the accuracy decreases as the model overfits.\n\nFinally, the conclusion of the authors that the transformers should somehow be able to distinguish between already seen and unseen examples is puzzling. 
Could the role of repetition not be simply explained by the fact that there needs to be some form of symmetry breaking in the direction of the loss to follow and that repeating examples allows for a clear direction in the loss landscape to emerge? In that respect, I point the authors towards the papers: \nDandi, Y., Troiani, E., Arnaboldi, L., Pesce, L., Zdeborová, L., & Krzakala, F. (2024). The benefits of reusing batches for gradient descent in two-layer networks: Breaking the curse of information and leap exponents. arXiv preprint arXiv:2402.03220. \nArnaboldi, L., Dandi, Y., Krzakala, F., Pesce, L., & Stephan, L. (2024). Repetita iuvant: Data repetition allows sgd to learn high-dimensional multi-index functions. arXiv preprint arXiv:2405.15459\n\nA more in-depth discussion of the different mechanisms that could explain the presented phenomenology, could significantly improve the paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why did you choose these experiments? I understand it is easier to control the data and the distribution of repeated examples. However, it is hard to see why conclusions drawn on algebraic problems should extend to other more natural data and NLP problems. \n\n2. Can you try your ideas and run the experiments on non-synthetic datasets? It would be interesting to see if the two-set training works well for other data like CIFAR-10 or other natural datasets that you may find more appropriate.\n\n3. Can you provide some discussion on continual learning etc. as I mentioned in the above section?\n\n4. In Figure 5, the results are quite unstable as a function of your parameters, you need to be very careful in choosing the repeated set size etc. In practice, iterating over the possible set sizes and repeated set probability and training given each choice is very expensive. Do you have any intuition for how to choose these hyper-parameters in general? There does seem to be a general common \"shape\" where training works better." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is written clearly and is easy to follow and understand. The problem posed is important and if answered can be helpful in guiding training methods, fine-tuning and data gathering for complex problems. The results for the two-set training method bring up interesting questions about the role of memorization that would be interesting for future research. All experiments are very thorough, showing good evidence for their claims on the datasets chosen." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work investigates the effect of training with repeated examples (like training over the training data over multiple epochs) compared to training on very large datasets only once. 
The authors run experiments on three large-scale math problems, to compute GCD, modular multiplication, and computing eigenvalues using varying sizes of transformers and varying data and training budgets. The authors also propose a two-set training method in which they repeat one small subset of data throughout training and otherwise train on new data continually." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "From my point of view, this is not a particularly new idea. Training over many epochs over the training data is standard in most applications. This is perhaps not common in new applications such as in NLP as the authors cite for large language models. Given this, it would have been more interesting to see applications within NLP instead. The problems considered (algebraic problems) are very different, and the connection to other applications like NLP is not well-motivated. \n\nIn terms of related work, it may be interesting to investigate the connection to continual learning and catastrophic forgetting. This is more geared towards learning new tasks, and not about revisiting old examples from the same distribution. However, some of the work done here may provide valuable insight and would be nice to have some discussion comparing the methods in this area with yours.\n\nMoreover, your idea of two-set learning sounds similar to spaced repetition in human learning -- being shown old instances right before you are about to forget them. There is not much work in this direction for ML training applications, although I did find this paper [1] which seems to have similar ideas to yours, proposing an order and frequency to the examples being trained on. \n\n\n[1] Repeat before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks, Hadi Amiri, Timothy A. Miller, Guergana Savova." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the \"Weakness\" section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presented the idea clearly and conducted controlled experiments to study the question. It also proposed new training paradigms based on the observations that could potentially help improving model training." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Modern language models are typically trained with only one or at most a few epochs. This paper studies the potential benefit of training for more than a few epochs. 
With experiments on three synthetic problems (GCD, modular multiplication and matrix eigenvalues), the paper shows that for a certain compute budget (or \"training budget\" as termed in the paper), repeating the same data many times can actually be better than using completely fresh data (i.e. 1 epoch or \"online\" training). Based on this intuition, this paper further proposes a \"two-set training\" paradigm where only a subset of training examples is repeated, to get the benefit of repeated training while also mitigating the potential overfitting issue." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper makes very big claims (e.g. of \"emergent properties\" or \"emergent learning\") based on very synthetic settings. While the models used are transformer models, they are equipped with specialized tokenizers for numbers and trained to perform a single synthetic task (e.g. given two numbers, output the greatest common divisor of the two numbers). In other words, those \"language models\" do not have any language capabilities. I think the paper could be improved by either (A) framing this as a general machine learning problem and including comparison studies covering many different neural network architectures, or (B) focusing on LMs but adding experiments in more realistic settings. For example, in terms of relevance to LMs, I think a study of fine-tuning with a proper LM would be more \"transferable\" to our understanding of how LM learning works than this synthetic pre-training setting.\n\n2. There is the classical decomposition of test accuracy into training accuracy + generalization gap. It seems especially relevant when the paper talks about training with or without repetitions on difficult math / arithmetic tasks. Many of the observations might be more intuitively explained by undertraining --- i.e. the training accuracy itself is already quite bad as the model struggles to fit the problem (with small data repetitions). Adding studies from this angle could potentially make the paper clearer, but at the same time, the observations would be less surprising if they are mostly explained by such a classical decomposition.\n\n3. The paper proposes a two-set training algorithm, but there is no comparison to any previous curriculum learning baseline algorithms.\n\n4. It is not clear if the experiments are comprehensive enough to support some of the big claims. For example, the paper talks about \"emergence\", *a task inaccessible to models trained with large or unlimited DB is learned with small DB*. The conclusion may change depending on the model architecture or even model sizes, and it may even be unclear whether each setting is allowed to choose its own optimal hyperparameters. This paper is making those conclusions based on experiments with a single transformer model with a fixed number of parameters and a fixed set of hyperparameters. For example, with repeated examples, later training steps would have potentially smaller updates because the gradients are smaller on seen examples. If this is benefiting the learning, could a similar effect be achieved by a better learning-rate decay scheme?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "In three controlled experiments with generated data we show that models trained on smaller sets of repeated examples outperform models trained on larger sets of single-use examples and introduce two-set training to show the benefits of repetition." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024emergent,\ntitle={Emergent properties with repeated examples},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xrXci5YGm7},\nnote={under review}\n}" }, "abstract": { "value": "We study the performance of transformers as a function of the number of repetitions of training examples with algorithmically generated datasets. On three problems of mathematics: the greatest common divisor, modular multiplication, and matrix eigenvalues, we show that for a fixed number of training steps, models trained on smaller sets of repeated examples outperform models trained on larger sets of single-use examples. We also demonstrate that {\\em two-set training} - repeated use of a small random subset of examples, along normal sampling on the rest of the training set - provides for faster learning and better performance. This highlights that the benefits of repetition can outweigh those of data diversity. These datasets and problems provide a controlled setting to shed light on the still poorly understood interplay between generalization and memorization in deep learning." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "transformers", "learning on repeated examples", "emergence" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e4d48504ce7f7a40fbae21b56ba201b34d4b1c7c.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Emergent properties with repeated examples" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xrazpGhJ10
SemCLIP: Aligning vision-language encoder models to semantic spaces for stability in retrieval
main
Active
Semantic-preserving queries;Vision-language encoder models;Stability of retrieval;joint embeddings
applications to computer vision, audio, language, and other modalities
5;5;6;6
4;4;4;4
2;2;3;3
2;3;3;3
2;2;3;3
5.5
4
2.5
2.75
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The alignment transform of image embeddings from VLM to SEM space is confusing. Image embedding and text embedding are directly equated, which may overlook the semantic differences between images and text.\n\n2. How to determine the distance of semantic similarity, gamma_j ?\n\n3. The Transformation Mapping stage in Fig. 3 lacks arrows connecting to other stages, making it difficult for the reader to intuitively understand how the output vector is utilized in the subsequent stages.\n\n4. Why STE perform worse in antonyms dataset?\n\n5. Long-tail or rare synonyms that may not be well-represented in WordNet could affect semantic text embedding." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper identify and address the issue of instability in VLMs when dealing with synonymous queries.\n\n2. The authors develop a dataset of linguist-curated similarity lists , followed by an alignment transformation to map existing VLM embeddings to the semantics-preserving textual embedding. \n\n3. Abound experiments provides extensive empirical evidence to support the effectiveness of SemCLIP model, including comparisons with multiple benchmark datasets and other CLIP variants." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors developed A database of 114,000 linguists-curated similarity lists of words from a constrained traversal of Wordnet thesaurus to cover all English language nouns and use a representation to capture their sense context explicitly. And then a semantics-preserving textual embedding was trained to discover expanded synonymous relations between terms. A method was developed to align a VLM embedding." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The SemCLIP model is developed only for nouns in the English language. This limitation narrows the applicability of the model to other parts of speech.\n\n2. The database of similarity lists, while valuable, may introduce bias based on the linguists' perspectives and may not capture the diversity of language use across different domains.\n\n3. WSD tool has an accuracy rate of around 80%, which could introduce errors in the alignment process." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. From methodologies, it seems that only the text encoder is updated for better alignment between synonymous pairs, while in Figure 2, image embeddings of J1 and J2 are pulled, how? Is the original VLM also fine-tuned?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper is well presented, with clear figures and organizations.\n2. This paper develops a dataset for synonymous text understanding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the problem of image retrieval with synonymous text. It develops a dataset of linguists-curated similarity lists of\nwords and trains a semantics-preserving textual embedding (STE) to which the VLM embedding is aligned. Experiments on 13 benchmark datasets demonstrated the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The description of how to construct the similarity list dataset is not clear enough.\n - It would be better to give out a figure for a more intuitive illustration.\n - Is the synonymousness defined for both nouns and phrases? Is the similarity defined with binary values (0/1) or continuous values between 0-1? Any difference?\n - It is concerned that are there any cases in two worlds with the same meaning on single nouns but different meanings in sentences?\n - How to ensure that the items in the dataset are diverse enough, reasonable, and frequently used?\n2. Lack of important experiments.\n - Does the learning of synonymous text degenerate the retrieval of general words? Tables 2 and 3 only show the results of querying using synonyms of nouns but no general words." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper proposes a framework to transform the VLM embeddings of semantically close terms and their associated images to place close together to ensure retrieval stability, which is practical for vector-database-based data managing." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on establishing stable association between images and texts, to make synonymous queries bring up the same images or have a high degree of overlap, and propose a SemCLIP framework, which consists of two main step, semantic text embedding and alignment transform. Besides, the paper develops a database of linguists-curated similarity lists of words. Performance comparison on multiple benchmark datasets show the effectiveness of the proposed framework." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty is limited. The proposed SemCLIP framework is mainly composed of semantic text embedding and alignment transform. However, they are all implemented through existing simple algorithmic ideas, without introducing in-depth perspectives on semantic alignment.\n\n2. In Modeling SemCLIP transformations (page 4), in the computation of alignment transform of images (i.e., equation 4), the determination of image representations replies on the distance between the image and its nearest text in VLM space. This implies that SemCLIP acknowledges the validity of relative distance between images and texts in VLM space, which conflicts the main claim about the loss of sensitivity to linguistic similarity in VLM space, in paper.\n\n3. The writing should be improved. First, the notation in this paper needs further refinement and standardization. Secondly, some of the statements in paper are confusing and need more in-depth explanation. For example, lines 192-193 “Note that the distance is a function of the word itself, since some words have more synonyms than others.”, lines 290-291 “We developed a contrastive embedding model that captures the essence of the similarity in the curated similarity lists in a numerical formulation”, and so on. Thirdly, The implementation of semantic text embedding and alignment transform in section 4, 5 and the previous section 3 seem to be conceptually separate, and need further re-organization." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I wonder if general CLIP models with larger backbones and training on large-scale datasets (in the billions), such as CLIP ViT-bigG, might not suffer from the synonym problem, as they are more likely to include diverse examples with synonyms in training and may be sufficiently generalized to handle synonym issues. Do you have any experimental results with these stronger VLM models?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. 
This paper addresses an important challenge in current vision-language embedding models, where synonyms with the same semantics are not well-mapped within vision or language embeddings.\n\n2. Releasing the synonym-focused dataset as an open-source resource would be a valuable contribution to the community.\n\n3. The evaluation on retrieval using images and labels (measured by mAP and NDCG) is also meaningful, providing a more nuanced comparison than traditional image-to-text matching, which may involve overlapping \"same\" semantic text." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the semantic matching of vision and language embedding vectors. Specifically, textual synonyms should match with similar images, but current vision-language models like CLIP often fail to retrieve similar images for synonymous terms. To address this issue, the paper proposes SemCLIP, which includes a new database of linguist-curated similarity lists of words. Additionally, it trains semantic-preserving textual embeddings (STE) to capture synonym relationships and aligns vision-language embeddings with STE to produce semantic-preserving embeddings for improved retrieval." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Figure 2 is somewhat difficult to understand; it would be helpful to include explanations of each component (A, B, C, D, and J1, J2) in the caption.\n\n2. It would be beneficial to report standard image-to-text or text-to-image retrieval metrics, such as Recall scores on MS COCO and Flickr, for comparison with existing CLIP methods.\n\n3. The notations in the methods section are complex; it is unclear at times whether they refer to data samples or embedding vectors. Clarifying these distinctions would improve readability." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "SemCLIP: Aligning vision-language encoder models" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024semclip,\ntitle={Sem{CLIP}: Aligning vision-language encoder models to semantic spaces for stability in retrieval},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xrazpGhJ10},\nnote={under review}\n}" }, "abstract": { "value": "Vision-language models (VLM) bring image and textual representations close together in a joint embedding space to tackle many tasks ranging from image captioning to text-to-image retrieval. For such models to be reliably used in cloud vector stores, it is important to have a stable association between images and text such that synonymous queries bring up the same images or have a high degree of overlap. Current textual representations based on transformer models used to build the VLMs cannot adequately capture linguistic similarities to ensure such stability. In this paper we develop a database of linguists-curated similarity list of words derived from Wordnet, and train a semantics preserving textual embedding. We then train an alignment transformation to map existing VLM (CLIP) embeddings to bring synonymous embeddings closer while also preserving image-text similarities. The alignment transform is learned from textual embeddings alone thus avoiding large-scale retraining of VLMs from image-text pairs. 
This simple method outperforms other methods of creating image-joint text embeddings, including even those obtained by fine-tuning the encoders using the same synonym lists. Results of analysis and comparison on multiple benchmark datasets indicate both stable and improved quality of retrieval. The dataset of similarity lists and the semantics-preserving textual embedding itself can be employed in a variety of ways for other downstream tasks and will be made available for other researchers." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Semantic-preserving queries", "Vision-language encoder models", "Stability of retrieval", "joint embeddings" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7c1a45517f950456b014f30364cd0689e0d2e576.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "SemCLIP: Aligning vision-language encoder models to semantic spaces for stability in retrieval" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xreOs2yjqf
EvalAlign: Supervised Fine-Tuning Multimodal LLMs with Human-Aligned Data for Evaluating Text-to-Image Models
main
Active
Text-to-Image Generative Models;Evaluation Metrics;Multimodal Large Language Models (MLLMs);Text-Image Consistency;Image Generation Fidelity;Supervised Fine-Tuning (SFT);Human Evaluative Judgments
datasets and benchmarks
3;5;5;6
5;4;5;4
2;3;2;3
2;3;2;3
3;2;2;3
4.75
4.5
2.5
2.5
2.5
-0.688247
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I am very concerned about Table 8 of this paper. Are the calculated KRCC and PLCC based on the instance level? If it is at the model level, it is recommended that the author modify it according to the content of weakness. If it is indeed at the instance level, I hope the author will focus on the analysis at this step, which is more important than the scores of each model listed in Table 3. Also, the authors can check the Weaknesses, and address them point-by-point in the response, which would be helpful." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This work has fine-grained annotation. Unlike previous datasets, EvalAlign annotates at three levels: animal faces, visibility of hands, and visibility of limbs. This detailed data enables the author to train an effective evaluation model.\n\n2. The author promises open source code. And the experimental details are listed in the supplementary materials, which has strong reproducibility.\n\n3. The writing of this article is quite fluent, and with appropriate illustrations, it is very easy for readers to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes EvalAlign, a metric characterized by accuracy, stability, and fine-grainedness. Evaluation on 24 text-to-image generation models shows that EvalAlign is more in line with human preferences than existing metrics and has certain application value in quality assessment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The experimental part of this article has a big problem. Table 3 seems to have done a lot of experiments, but it is actually evaluated at the model level, not the instance level. This is not a challenging task, because everyone knows that PixArt draws well and SD 1.4 draws relatively poorly. Ranking the strengths of 24 models is far less meaningful than scoring a single image, that is, an end-to-end AIGC quality evaluation tool. In other words, which of the two images from the same model has higher quality is more important.\n\n2. This paper only reviews coarse-grained datasets in related work, but does not consider fine-grained datasets. In addition, some AIGC-related dataset such as [1,2,3] was not considered. These datasets have fewer images but more annotations, and each image contains dozens of fine-grained annotations. Since fine-grained annotations are one of the major innovations of this paper, it is not comprehensive to only review coarse-grained datasets (i.e. 
only two or three annotations, or even fewer than one per image).\n\n[1] PKU-AIGIQA-4K: A Perceptual Quality Assessment Database for Both Text-to-Image and Image-to-Image AI-Generated Images\n\n[2] PKU-AIGI-500K: A Neural Compression Benchmark And Model for AI-Generated Images\n\n[3] PKU-I2IQA: An image-to-image quality assessment database for ai generated images" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I would like to ask how long the author's evaluation takes. In my opinion, evaluation should be a task that assists generation. If the generative model is already large, using an estimator with 34B parameters will cost a lot but only slightly improve the consistency with human subjective perception. I am not sure if it is worth it.\nI am happy that the author analyzed the impact of different model sizes on performance, but the impact on time consumption also needs further explanation." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Using an LLM to evaluate the quality of an LLM is a very creative idea. The author evaluates the T2I process through an I2T model, which is a new paradigm.\n2. The experiments are relatively detailed, considering 24 generative models. Multiple dimensions are evaluated.\n3. The illustrations are intuitive and beautiful, and Figure 1 reflects the central idea of the article well." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to evaluate the consistency of images and texts in the T2I generation process. Compared with other evaluation indicators such as ImageReward and HPS, it has higher consistency with human subjective preferences on 24 T2I models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The evaluation is done at the **model level**, not the **instance level**. This is a major flaw. In the actual evaluation process, the community is concerned not only with ranking the strength of T2I models but also with whether each AIGC image is good enough. As far as I know, AIGC quality assessment [1,2] uses instance-level evaluation, because model-level evaluation is not very challenging (see VBench [3] Figure 4, where the correlation with subjective labels can easily reach 0.9). I hope the author can improve this point.\n2. The author considered 24 models. Although the number is large, they are highly homogenized. For example, DeepFloyd IF uses three different conditions, which are considered three models, but they are the same. Including SD 1.4, 1.5, 2.0, and 2.1, the difference in visual effects is quite limited. However, the dataset does not include closed-source models such as DALLE 3 and Midjourney. 
From my subjective perspective, they are still slightly stronger than PixArt, and ignoring them will result in an incomplete dataset.\n\n[1] Depicting Beyond Scores: Advancing Image Quality Assessment through Multi-modal Language Models\n[2] Descriptive Image Quality Assessment in the Wild\n[3] VBench: Comprehensive Benchmark Suite for Video Generative Models" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The dataset collection and annotation process is technically sound. The explicit prompting strategy (e.g. `Are there any issues with human face in the image, such as facial distortion, asymmetrical faces, abnormal facial features, unusual expressions in the eyes, etc?`) could be useful and scalable to better baseline MLLMs.\n2. The evaluation part presents a benchmark on models, showing the high correlation between the proposed scorer and human evaluation, which is good. It would become a useful metric." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In short, this paper presents EvalAlign, which collects a human-annotated preference dataset to fine-tune MLLMs to be evaluators for T2I generation. The paper focuses on two dimensions: (1) T2I alignment and (2) faithfulness, which is a well-accepted setting since AGIQA-3K (Li et al., 2023). Overall, the paper is technically sound, but I am a little bit concerned about some parts of the methodology. Additionally, discussions of several pioneering works on T2I evaluation and MLLMs as scorers are missing." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have some concerns about the paper.\n\nFirst, I am a bit concerned about how the score is derived. From my current understanding, the final scores are derived from an average of the score outputs of several questions. Is the `Human` column also obtained this way? If so, this might not be a good enough ground truth.\n\nSecond, while in Sec. 5.1 the authors state that the test-set images do not overlap with the train-set ones, they do come from the same 16 generation models. As the final evaluation only shows model-wise ranking consistency, this result might not be enough to exclude overfitting (e.g., memorizing model-specific styles). I would encourage further testing on several held-out T2I generators.\n\nThird, a minor question. Using SFT for an LMM to score has been discussed by Q-Align (ICML 2024), which finds that using logits is better than using `model.generate()` for scoring. It also has the ability to evaluate image faithfulness; please try to compare with it or discuss it. 
Furthermore, for faithfulness evaluation (which is actually image quality, am I right?), the compared baselines are similarity-based metrics (which are, from their design, alignment-related metrics). I would suggest the authors to compare with some baselines related to T2I quality evaluation (inc. Q-Align) in this part." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "please see weakness points above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper introduces a dataset that includes explicit question-answer feedback, which could facilitate more detailed evaluation of generated images.\n- Using MLLM with SFT for image evaluation is an interesting approach that could potentially enhance interpretability in assessing generated image quality." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel method and dataset aimed at evaluating the quality of generated images, with a specific focus on image faithfulness and text-image alignment. The dataset was collected using detailed human feedback in a question-answer format, aiming to provide fine-grained insights into image quality. This dataset is then used to train a MLLM with SFT to evaluate generated images effectively. The proposed method is tested on the new dataset and compared against existing approaches, with results indicating its superior performance in terms of image faithfulness and text-image alignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## 1. Lack of Justification for Main Contributions\nThe paper's three key contributions are insufficiently supported by experimental validation and theoretical grounding:\n 1. Although the dataset is described as having detailed human feedback covering \"11 skills and 2 aspects,\" the experiments primarily focus on the 2 broad aspects. There is little exploration of the 11 specific skills, which would have been valuable given that the 2 aspects have been widely studied in prior works, such as [a].\n 2. The method is claimed to enable \"accurate, comprehensive, fine-grained, and interpretable\" evaluations. However, the results mostly reflect the 2-aspect performance, with no evidence of superior fine-grained or interpretability-focused evaluation compared to previous methods.\n 3. While the paper emphasizes cost-efficiency in terms of annotation and computation, this claim is questionable. The annotation process requires extensive human annotations, which is labor-intensive. Additionally, the method achieves optimal performance with a 34B MLLM model, which is computationally expensive.\n\n## 2. 
Unclear Advantages Over Existing Datasets\n- According to Table 1, the primary benefit of the proposed dataset seems to be its focus on the two-aspect evaluation. However, several prior datasets such as ImageReward, PickScore, and HPS(v2) implicitly address these aspects as well. While explicit question-based feedback is used in this work, it is not clear how this approach leads to better evaluation outcomes, especially since **a vast number of questions would likely be required to cover all image aspects comprehensively**.\n- The paper does not adequately compare its approach with previous work like [a], which also includes detailed, multi-aspect human feedback via scoring rather than question-answering. The advantages of question-based feedback over scoring are not clearly demonstrated in terms of faithfulness or alignment evaluation.\n- While the paper emphasizes cost-effectiveness, the dataset requires 130k annotations to achieve optimal results (Table 1). This annotation volume does not appear more economical than previous datasets.\n\n## 3. Weak Experimental Results\n- It is unclear whether models from other methods were trained on the proposed dataset to ensure a fair comparison, especially in Tables 2 and 3.\n- The results in these tables do not consistently support the claims made. For instance, the 500 configuration does not show a clear optimal performance in Table 6, and there is no clear positive correlation between model size and performance improvements in Table 7.\n\n## 4. Writing and Structure Issues\n- Some sentences lack clarity and coherence. For instance, “the utilized synthesized images are treated as real images as they don’t explicitly recognize the problem of synthesized images with low image faithfulness” is confusing, especially since HPS(v2) aims to evaluate generated images.\n- Writing structure should be improved. The key novel contributions of the method is unclear.\n- There are some repeated sentences with similar meanings. It is better to re-write them to make the paper more concise. \n\n## Conclusion\n\nWhile the paper presents a promising approach with potential contributions in the form of a detailed dataset and a new evaluation method, it currently lacks sufficient support for its claims. The advantages over existing work remain unclear, and the experimental validation needs improvement. Therefore, the paper is not yet ready for acceptance in its current form.\n\n[a] Rich Human Feedback for Text-to-Image Generation, CVPR 2024, best paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024evalalign,\ntitle={EvalAlign: Supervised Fine-Tuning Multimodal {LLM}s with Human-Aligned Data for Evaluating Text-to-Image Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xreOs2yjqf},\nnote={under review}\n}" }, "abstract": { "value": "The recent advancements in text-to-image generative models have been remarkable. Yet, the field suffers from a lack of evaluation metrics that accurately reflect the performance of these models, particularly lacking fine-grained metrics that can guide the optimization of the models. In this paper, we propose EvalAlign, a metric characterized by its accuracy, stability, and fine granularity. Our approach leverages the capabilities of Multimodal Large Language Models (MLLMs) pre-trained on extensive data. 
We develop evaluation protocols that focus on two key dimensions: image faithfulness and text-image alignment. Each protocol comprises a set of detailed, fine-grained instructions linked to specific scoring options, enabling precise manual scoring of the generated images. We supervised fine-tune (SFT) the MLLM to align with human evaluative judgments, resulting in a robust evaluation model. Our evaluation across 24 text-to-image generation models demonstrate that EvalAlign not only provides superior metric stability but also aligns more closely with human preferences than existing metrics, confirming its effectiveness and utility in model assessment. We will make the code, data, and pre-trained models publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text-to-Image Generative Models", "Evaluation Metrics", "Multimodal Large Language Models (MLLMs)", "Text-Image Consistency", "Image Generation Fidelity", "Supervised Fine-Tuning (SFT)", "Human Evaluative Judgments" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9b5e2a2630f7a2e7b0aedb9641a8be4caf925a93.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/cd05c0fb263b91e80705f51473e4448f9facb437.pdf" }, "title": { "value": "EvalAlign: Supervised Fine-Tuning Multimodal LLMs with Human-Aligned Data for Evaluating Text-to-Image Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xrgXaOV6dK
Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge?
main
Active
LLM-as-a-Judge;AI annotators;evaluation;tool-use
datasets and benchmarks
3;5;5;8
4;4;5;4
2;2;3;3
2;3;2;4
3;2;3;4
5.25
4.25
2.5
2.75
3
-0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. For the generalizability issue, one suggestion would be to experiment with more recent and challenging open-domain datasets like RMbench (https://arxiv.org/pdf/2410.16184) and external domain-specific datasets like RMMath (https://arxiv.org/pdf/2410.01729) to verify if the RewardBench results are an exception or a fundamental limitation of the technique, helping verify the robustness of the system.\n2. Can the authors compare a function-calling-API-based tool-calling system with the existing implementation?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. While the use of tools in AI-based applications is fairly commonplace now, their use for annotation systems is an interesting and novel idea, and the paper demonstrates fairly well that it works for a few domains at least. \n2. The paper is well-written and presents fair experimental backing for its claims. \n3. The paper introduces 3 novel datasets for evaluating domain-specific annotation capabilities of language models" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes the use of external tools to create higher-quality AI annotation systems and introduces a tool-using AI annotator that uses web-search and code execution to improve annotations. After establishing that existing annotation benchmarks are saturated, they introduce 3 new annotation datasets for fact checking, coding, and mathematics. They demonstrate the efficacy of their tool-based AI annotator by showing better performance on the 3 new datasets over SoTA AI annotators, while performing roughly on par on existing annotation benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the use of tooling for AI annotators is interesting, in the current iteration of the work, it is not very clear if it will scale with more custom tools. In the agent evaluator discussed in the paper, even though it defaults to existing annotations for the no-tool use cases, the system shows a degradation in performance for RewardBench, the only OOD dataset evaluated. This makes me concerned about the generalizability of the system. \n2. Two of the proposed benchmarks don't have baseline human annotation scores, making it hard to quantify the degree of hardness of the datasets.\n3. It is not very clear what the advantages of the agentic architecture are compared to something like the tool-calling API by OpenAI." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why choose a subset from GSM8K rather than selecting more general benchmarks (e.g., AIME, MATH)?\n2. Can you also present experiment results on open-source models?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. clear paper writing\n2. Classifying the input domain and selecting tools accordingly makes sense.\n3. Substantial improvements on certain subsets, particularly APPS." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a tool-augmented (i.e., web search engine and code compiler) method to provide pairwise AI feedback, with a focus on three specific domains: long-form factual, math, and coding tasks. Specifically, for each incoming pairwise response, it would first determine its domain and then select the corresponding tool for quality judgment. Experiments are conducted on three pairwise datasets sourced from LongFact, APPS competition subset, and GSM8K hard subset. Results indicate that by tool-augmentation, AI feedback improves in most, but not all, cases on these three subsets. While on a general pairwise benchmark Rewardbench, AI feedback slightly decreases." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. My main concern is novelty. Several highly related (i.e., tool-augmented AI feedback), published papers have not been cited and clearly discussed ([1][2]). \"Novel framework\" sounds overclaim.\n2. Studying pairwise feedback in domains with clear objective correctness (e.g., fact, code, math) is unjustified.\n3. Mixed results. Performance slightly decreases on general domains (rewardbench) and math when the base model is stronger (e.g., GPT-4o).\n\n[1] https://arxiv.org/pdf/2310.01045\n[2] https://openreview.net/pdf?id=Sx038qxjek" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "I have one question, and one suggestion:\n* Did you check the correctness of the GSM8K hard answers? GSM8K has a small but noticeable subset (<5%) that have incorrect labels, so without any validation, the instances that GPT4o gets \"wrong\" may be mislabeled. 
I'd recommend checking this, and if some are mislabeled, this may be the source of the mixed results you see on math reasoning. If so, I'd recommend thinking about harder math datasets (like MATH), though this may be more complicated for code execution.\n* I'd be interested to see how this affects best-of-n ranking when using LLMs as a judge for ranking n model outputs -- I'd assume this would noticeably help performance on the domains tested. This may be expensive depending on the setup though, so this is also a reasonable follow up work instead." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This paper proposes a reasonable and interesting framework for improving pairwise judgements using automated annotators. Recent work has shown the strength of strong, automated pairwise annotators, and this work is a valuable extension of that, showing that ground truth information in the responses (that traditional LLM-only systems might not always pick up on) is valuable for making these decisions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the concept of external validation of ground truth information during pairwise judgements -- that is, when judging pairs of responses to a given prompt, before using an automated annotator to judge which response is better, tools are first used to validate pieces of the outputs (code execution & correctness, mathematical reasoning, and factuality) and this information is then provided as additional information to the annotator. On datasets with ground truth information (i.e. pairs where one is confirmed to be better than the other), their method noticeably improves performance on factuality and coding tasks, and has less clear performance gains on math tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While this paper shows strong results on annotation accuracy, it is unclear how well this improves downstream performance. I don't think this is a hard requirement for this work, but I'd be interested to see how model performance changes using this method to either generate preference data, or do best-of-n ranking for model outputs. I do not think this is required for this paper to be accepted, however." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Questions\n- [Q1] Were there any tests on non-benchmark preference training datasets (e.g., Anthropic-HH, Helpsteer2, ChatArena), and the effect of the agent framework on the downstream reward model / policy model performance? 
\n\nComments/Suggestions (these are nits that don’t weigh a lot in my scoring but I’d appreciate it if they were addressed, as they can improve the manuscript):\n- [C1] There are some non-formal words used throughout the text that I would appreciate being corrected:\nPage 6, bullet point #2, last sentence: “till we have failing solutions” -> “until…”\n- [C2] The term agentic was introduced suddenly in p.2 without any introduction / contextualization as to what it means." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- [S1] The contribution is **timely** due to the prevalence of using synthetic preferences from LM judges.\n- [S2] The proposed framework is **interesting** as it provides an approach to ground an LLM judge’s annotations in verifiable and objective facts/ground-truth, using existing and off-the-shelf tools today. \n- [S3] I also appreciate the effort to **extend subsets of RewardBench to create more challenging test sets** due to the saturation of the said benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates whether augmenting LLM-as-a-judge with external validation tools can improve their annotation quality for pairwise feedback. They propose a framework that routes a model response to external tools such as fact-checking, code execution, and math execution. The outputs of these tools are then collated to inform the final decision of the LLM judge.\n\nTo evaluate their proposed framework, the authors constructed benchmarks from existing datasets such as LongFact, APPS, and GSM8k. They measure the **percentage agreement of the LLM judge with the ground-truth annotations of these datasets.** They find significant improvements over baseline annotators on long-form fact checking and coding, but mixed results on math.\n\nThe main contributions of this work are as follows:\nA framework for augmenting LLM judges with external tools to improve judgments on verifiable / objective domains.\nExtension of Rewardbench subsets to create more challenging test sets for fact-checking, coding, and math." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- [W1] One con for this work is the **insufficiency of experiments to show the accuracy / reliability of specific components of the framework.** For example, how reliable is the “Initial domain assessment” component for routing responses to specific tools? \n - [W1.1] In addition, showing the robustness of the framework as new tools are added to the agent can help strengthen the use case of this framework.\n\n- [W2] **Lack of motivation** as to why the specific tools (SAFE, OpenAI code, OpenAI math) were chosen for each component. Were there any other components tested? How sensitive are the reported results to these tools?\n\n- [W3] There are some **claims that have shallow to no evidence** (a few notable examples):\n - Section 4.3.2 (Observation 4): The claim is that complexity (e.g., in the form of tools) does not always yield better results. The only evidence so far is ArenaHard outperforming the agent framework, but we also see that other simpler methods like pick-best and AlpacaEval underperformed against the agent framework. 
Perhaps there are other confounders, and there’s a need to disentangle what complexity means.\n - Section 4.3.3 (Observation 6): There is a claim that baseline annotators have bias towards incorrect GPT-4 responses, and it was explained as self-enhancement bias. It was further claimed that the agent framework’s code execution path overcame this bias. The only evidence so far is the empirical results, but how much of this was due to the code-execution tool and how much was from AlpacaEval (baseline annotator)?\n - Finally, I think it’s important to show how each component contributed to the performance of the overall framework. For the strongest results (Math and Fact-checking), how much of the performance is attributed to the tool and how much was from AlpacaEval?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "For some domains it can be tricky to obtain high quality AI feedback: we investigate using external validation tools to improve feedback quality." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024can,\ntitle={Can External Validation Tools Improve Annotation Quality for {LLM}-as-a-Judge?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xrgXaOV6dK},\nnote={under review}\n}" }, "abstract": { "value": "Pairwise preferences over model responses are widely collected to evaluate and provide feedback to large language models (LLMs). Given two alternative model responses to the same input, a human or AI annotator selects the “better” response. This approach can provide feedback for domains where other hard-coded metrics are difficult to obtain (e.g., quality of a chat interactions), thereby helping measure model progress or model fine-tuning (e.g., via reinforcement learning from human feedback, RLHF). However, for some domains it can be tricky to obtain such pairwise comparisons in high quality - from AI and humans. For example, for responses with many factual statements or complex code, annotators may overly focus on simpler features such as writing quality rather the underlying facts or technical details. In this work, we explore augmenting standard AI annotator systems with additional tools to improve performance on three challenging response domains: long-form factual, math and code tasks. We propose a tool-using agentic system to provide higher quality feedback on these domains. Our system uses web-search and code execution to ground itself based on external validation, independent of the LLM’s internal knowledge and biases. We provide extensive experimental results evaluating our method across the three targeted response domains as well as general annotation tasks, using RewardBench data (incl. AlpacaEval and LLMBar), as well as three new datasets for areas where pre-existing datasets are saturated. Our results indicate that external tools can indeed improve AI annotator performance in many, but not all, cases. More generally, our experiments highlight the high variability of AI annotator performance with respect to simple parameters (e.g., prompt) and the need for improved (non-saturated) annotator benchmarks. We share our data and code publicly." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM-as-a-Judge", "AI annotators", "evaluation", "tool-use" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/8e7644296f035ace1978592fde5f0e4b9f208396.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Can External Validation Tools Improve Annotation Quality for LLM-as-a-Judge?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xriJVaTh4C
Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations
main
Active
Certified Robustness;Adversarial Robustness;Certified Training;Convex Relaxation;Neural Network Verification
alignment, fairness, safety, privacy, and societal considerations
1;3;6
4;3;4
2;2;3
1;1;3
1;3;3
3.333333
3.666667
2.333333
1.666667
2.333333
0.114708
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- There is no comparison with CROWN-IBP in Table 3. Why?\n- \"We remark that scaling to CNN7 used by the SOTA methods is still infeasible due to the high computatinoal cost of evaluating DeepPoly\" How about applying RGS to CROWN-IBP, as it is cheaper than DeepPoly?\n- What do the italic numbers mean in Table 3?\n- DP-RGS appears first in L480, but is never defined (it seems that DP-RGS = DeepPoly-RGS)." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "I think the following strength by itself is enough for acceptance.\n- The paradox of certified training is an important subject to study.\n- The proposed method (GLS) is well-motivated and backed up by theoretical results.\n- The empirical results on small perturbation settings in Table 3 are quite impressive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use a method (Gaussian Loss Smoothing) to address the paradox of certified training, which is caused by the discontinuity/non-smoothness/sensitivity issues of certifiable training with tighter convex relaxations. Moreover, the authors also use a gradient-based method called Randomized Gradient Smoothing (RGS) to scale GLS to larger models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper is well-motivated, but has a minor weakness in the performance (on large perturbations) detailed as follows:\n- Table 3 (Table 5 in Appendix) shows the results for small (large) perturbation settings. The results for large perturbations are not significant. IBP performs well for large perturbations and other tighter methods do well for small perturbations. This is because of the continuity, smoothness and sensitivity of the loss landscape. Thus, to check the effectiveness of GLS, I think it is crucial to check the results of tighter methods on large perturbation settings. However, setups (i) MNIST $\\epsilon=0.3$ and (ii) CIFAR-10 $\\epsilon=8/255$ show that \nGLS (or RGS) (i) does not show a significant performance gain (DP-RGS (IBP) vs MTL-IBP; 88.69 vs 88.68) or (ii) shows a worse performance (29.25 vs 29.62). This should be discussed in the main text (not in Appendix). 
I don't think the performance itself is a reason for a rejection, but it needs more discussion.\n- In Table 1, \"the more precise DeepPoly bounds now yield the best certified accuracy across all settings, even outperforming accuracy at low perturbation radii'', but not for the large perturbation (IBP vs DeepPoly-PGPE; 77.23 vs 74.28, 25.72 vs 22.19).\n- In GRAD Training, \"IBP dominates the other methods, confirming the paradox of certified training\", but in the original paper of CROWN-IBP, CROWN-IBP outperforms IBP for a larger network (see their Table 2 https://arxiv.org/pdf/1906.06316). \n- (For a larger model,) IBP outperforms the other methods (e.g, CROWN-IBP, CAP) for large perturbation, not for small perturbation (e.g., see Table 1 in Lee et al. (2021)). This implies that the paradox plays more important role for a larger perturbation.\n\n\n\nPlease check the Questions part together. I think unclear presentation is also a weakness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "GLS appears to be related to Sharpness-Aware Minimization (SAM) [1], as both approaches aim to smooth the loss landscape for a more regular surface. It would be beneficial to include a discussion or a related work section to clarify this connection in the paper.\n\n[1] Foret et al., “Sharpness-Aware Minimization for Efficiently Improving Generalization,” ICLR, 2021." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "GLS introduces novelty in using Gaussian smoothing across the loss landscapes, strongly problematic in certified training." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper talks about the variation in certified training; though tighter relaxations sometimes reduce performance compared to looser bounds, it shows the authors introducing Gaussian Loss Smoothing as one of the methods for smoothing the loss landscape to reduce discontinuity and sensitivity—the major hurdles in certified training with tight relaxations.\n\nThey provide two ways of realizing the different realizations of GLS: first is PGPE, a gradient-free approach based on policy gradients with parameter-based exploration; second is RGS, which is described as a gradient-based approach using randomized gradient smoothing. Experimental results on several datasets illustrate that indeed the algorithm GLS outperforms current methods relying on tight relaxations; sometimes its performance is tested along with the DEEPPOLY relaxation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "## Writing Style\n\n- **Introduction**: In some respects, the introduction could be improved about adversarial certified robustness. 
It is abrupt and feels more work-related.\n \n- **Transitions and Structure**: Transitions from one section to another are missing; hence, making the paper hard to follow. Statements such as, \"While CROWN-IBP is not strictly more or less precise than either IBP or DEEPPOLY,\" necessarily need references or explanations.\n\n- **Undefined Terms**: Terms such as \"soundness\" and \"sensitivity\" are undefined.\n\n- **Results of experiments**: These are presented in a very untidy fashion.\n\n\n## Related Work\n\n- **Definitions**: Some more clear definitions can be provided, for instance, adversarial attack and adversarial robustness because the latter is different from certified robustness.\n \n- **Redundant Subsections**: Sections 2.1 would seem to represent redundant subsections: \"Training for Robustness\" and \"Adversarial Training.\"\n\n\n## Theoretical Findings\n\n- **Lack of Rigor**: Theoretical statements, like Theorem 3.1, are informal and not mathematically precise. References to results, including Stein's Lemma [2], could be good in the proof.\n \n- **Flaw in Proofs in Lemmas**: \nIn the proof of Lemma B.1 terminology and symbols that are important are not defined (for example, \\(\\delta \\theta\\), \\(P_{\\epsilon_1}\\), \\(P_{\\epsilon_2}\\), \\(P_{\\mathcal{N}(0, \\sigma^2)}\\)). The authors seem to take the limit $\\delta \\theta$ to $0$ but it is never stated anywhere in the proof and the integral of the derivative of the loss $L^\\prime$ should be a multi dimensional integral as the input space is multi dimensional.\nIn Lemma B.2's proof the simplifications are excessive, the structure of the proof is highly defective in many places and it needs a thorough revision. \n\n\n\n## Experimental Results\n\n- **Lack of Detail**: Neither the dataset nor the architecture used in Figure 1 are specified.\n \n- **Performance Gains**: Although the experimental results are somewhat improved, they are not very significant compared to previous methods; therefore, this added complexity is questionable in value. Presentation of the standard deviations over multiple runs would have given more robust conclusions since apparent gains are marginal.\n\n\n# Major Concerns\n\n1. **Theoretical Rigor**: The proofs are not rigorous; better structure and formalization are required. The use of Stein's Lemma might improve the underlying framework, giving credibility to the theoretical results.\n \n2. **Marginal Gains and Complexity**: GLS brings in computational complexity especially with PGPE, while performance benefits are marginal.\n\n3. **Readability and Clarity**: The writing style and the structure take away from clarity, making the paper difficult to follow.\n\n\n[2] Stein, “Estimation of the Mean of a Multivariate Normal Distribution,” The Annals of Statistics, 1981." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. 
DeepPoly-GRAD achieves 90.04% certified accuracy on MNIST (0.1) in Table 1; in Table 2 this number is only 68.47%. Is this a typo? I don't believe that changing the model size can make such a big difference; plus, only DeepPoly-GRAD has such a huge drop in performance.\n2. Is it correct that you are only changing the loss used to train the model, but not the way of certifying the model?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The writing is fine. There is little difficulty understanding the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes to use Gaussian Loss Smoothing (GLS) with certified training algorithms, such as IBP, CROWN, etc. The motivation is that a previous paper pointed out that methods other than IBP, though having tighter relaxations, suffer from discontinuity, non-smoothness, and perturbation sensitivity. The authors argue that GLS can make the loss surface smoother, using a theoretical result and some plots as evidence. The method is tested on MNIST, CIFAR-10, and TinyImageNet." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Contribution: This paper proposes to apply GLS to existing certified training methods such as IBP and DeepPoly, and one can use PGPE or RGS to compute the GLS. Note that, however, all of IBP, DeepPoly, GLS, PGPE and RGS are existing methods. Theorem 3.1 is a direct application of an existing result. So if I understand correctly, the only novelty of this submission is combining these things together, and thus the contribution is incremental.\n2. Performance: How does the proposed method perform? From Table 1, it seems that the proposed method (PGPE) is not significantly better than the standard method (GRAD), if not worse. It is true that PGPE makes DeepPoly better than IBP, but the way it does that is to make IBP worse, not making DeepPoly better. On MNIST (0.3) and CIFAR-10 (8/255), the best PGPE method is worse than the best GRAD method. On the other two settings, it is only slightly better. For all settings, IBP-PGPE is worse than IBP-GRAD (standard). Thus, this result suggests that one probably should not use PGPE. There is no evidence that the proposed method works in practice.\n3. Experimental results: Tables 1 and 3 seem to contradict each other. In Table 1, standard IBP on CIFAR-10 (2/255) has natural accuracy 48.05% and certified accuracy 37.69%; in Table 3, the two reported numbers are 54.92% and 45.36%. Probably this is because CNN5 is better than CNN3, but then what is the point of Table 1? Why not always use a bigger model (CNN7 seems even better)? And why not compare with RGS in Table 1? I don't think there is anything preventing you from comparing with RGS in Table 1. The authors report DP-RGS to be the best method in Table 3, but since there are so many problems with the tables I do not trust this result.\n\nOverall, this submission proposes to combine a bunch of existing methods, but the experiments show that this is even worse than the original methods. Thus, I recommend rejecting this submission." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We show that Gaussian Loss Smoothing allows us to overcome the Paradox of Certified Training and yields better networks when training with tighter bounds." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024gaussian,\ntitle={Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xriJVaTh4C},\nnote={under review}\n}" }, "abstract": { "value": "Training neural networks with high certified accuracy against adversarial examples remains an open challenge despite significant efforts. While certification methods can effectively leverage tight convex relaxations for bound computation, in training, these methods, perhaps surprisingly, can perform worse than looser relaxations. Prior work hypothesized that this phenomenon is caused by the discontinuity, non-smoothness and perturbation sensitivity of the loss surface induced by tighter relaxations. In this work, we theoretically show that Gaussian Loss Smoothing (GLS) can alleviate these issues. We confirm this empirically by instantiating GLS with two variants: a zeroth-order optimization algorithm called PGPE which allows training with non-differentiable relaxations, and a first-order optimization algorithm, called RGS, which requires gradients of the relaxation, but is much more efficient than PGPE. Extensive experiments show that when combined with tight relaxations, these methods surpass state-of-the-art methods when training on the same network architecture for many settings. Our results clearly demonstrate the promise of Gaussian Loss Smoothing for training certifiably robust neural networks and pave a path towards leveraging tighter relaxations for certified training." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Certified Robustness", "Adversarial Robustness", "Certified Training", "Convex Relaxation", "Neural Network Verification" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a440113297ea4cafe3561461426089fcb4169809.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/02f243b833c9329e330c71ae2af993e87a968d38.zip" }, "title": { "value": "Gaussian Loss Smoothing Enables Certified Training with Tight Convex Relaxations" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xrtM8r0zdU
Sparse Gradient Compression for Fine-Tuning Large Language Models
main
Active
Machine Learning;Large Language Models;Parameter efficient fine-tuning
foundation or frontier models, including LLMs
3;5;5;5;5
5;3;3;4;5
2;3;2;2;3
2;2;2;2;2
3;3;3;2;3
4.6
4
2.4
2
2.8
-0.559017
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.In Section 4.4, the gradient is chunked during computation. If c is not large, the size of the projection matrix remains significant, leading to high memory consumption. Conversely, if c is large, it introduces c times the matrix multiplications, which may be time-consuming. What is the result of the runtime comparison between MESGC and existing PEFT methods?\n2.What is the performance of MESGC on small datasets, as discussed in Section 5.3?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.The paper introduces two algorithms, MESGC and CESGC, for effectively reducing the memory and computational complexity, respectively.\n2.It presents a well-reasoned approach for determining the hyperparameters of the SGC algorithm in Section 5.2." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a Sparse Gradient Compression (SGC) algorithm to flexibly and granularly control the number of trainable parameters during fine-tuning. By projecting the optimizer states into a subspace, SGC updates and stores these states in a low-dimensional space. Numerical experiments demonstrate that the performance of SGC is comparable to existing parameter-efficient fine-tuning (PEFT) algorithms to some extent." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.The novelty of the proposed approach is limited. The concept of projecting optimizer states into a subspace with a dimension independent of the original model size has been previously discussed in the top-k compressor as shown in [1] and [2].\n2.The paper lacks a theoretical analysis of the relationship between the choice of k and the model’s convergence, a detail that has been explored in [1] and [2].\n3.The idea behind SGC lacks novelty, as both algorithms are quite similar to GaLore. Additionally, MESGC appears to be time-consuming, particularly regarding the practical adjustments outlined in Algorithm 4, and the numerical results in Table 2 do not consistently demonstrate CESGC’s superiority over GaLore.\n\n[1] Stich, S. U., Cordonnier, J. B., & Jaggi, M. (2018). Sparsified SGD with memory. Advances in neural information processing systems, 31. \n[2] Li, X., Karimi, B., & Li, P. (2022). On distributed adaptive optimization with gradient compression. arXiv preprint arXiv:2205.05632." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "What is the practical benefit of increased flexibility and granular control, given that memory usage is primarily dominated by parameters rather than optimizer states in methods like PEFT or GaLore?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. SGC is more flexible compared to previous methods like LoRA and GaLore, allowing more granular control over the dimensionality of the compressed optimizer state.\n2. On commonsense benchmarks, SGC achieves a comparable average accuracy to both GaLore and LoRA while using fewer optimizer state parameters." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Based on gradient sparsity, this paper proposes a flexible gradient compression method to reduce memory usage during training LLMs. By sparsifying and projecting the gradient onto a low-dimensional subspace, the optimizer state is updated, and then, when updating parameters, it is mapped back to the high-dimensional space using the orthogonal matching pursuit (OMP) algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although SGC is more flexible, this advantage is somewhat marginal, as LoRA and GaLore are already quite flexible.\n2. The paper lacks throughput experiments and runtime analysis of OMP.\n3. It does not include empirical experiments comparing memory usage of SGC and baseline methods to validate the theoretical analysis.\n4. There is no information on the error magnitude after gradient compression.\n5. Mischaracterization in lines 350-352 and 368-369, where it mentions GaLore as a type of PEFT method; GaLore is not a PEFT method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Equation (6), $\\boldsymbol{A}$ is initialized randomly as stated in the appendix. How does this randomness affect the model's final performance? Are there significant differences observed across different random seeds?\n2. What are the actual comparisons of wall-clock time per iteration and GPU memory usage when training with SGC compared to LoRA and full Adam fine-tuning? 
Providing this data for LLaMa-7b in a table or plot format would help clarify SGC's efficiency across contexts.\n3. Does SGC offer any advantages over Adafactor and CAME in terms of performance or efficiency? If so, could you elaborate on these advantages?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The writing is clear and well-structured, with an appropriate balance of detail, making it easy to understand. Most technical choices are well-motivated and thoroughly explained.\n2. The motivation behind the proposed SGC method is intuitive, making the approach conceptually accessible and logical given the challenges in fine-tuning large language models.\n3. The experiments and comparative analyses effectively demonstrate that SGC offers memory savings while maintaining comparable performance, validating the practical advantages of the approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new optimizer, Sparse Gradient Compression (SGC), designed to enhance fine-tuning efficiency for large language models (LLMs). The main innovation of SGC lies in compressing the optimizer states, providing a more flexible and granular tradeoff between memory usage and performance compared to other Parameter Efficient Fine-Tuning (PEFT) methods. The compression leverages the inherent sparsity in gradients, with the recovery of compressed states performed through a greedy algorithm called Orthogonal Matching Pursuit (OMP). Experimental results on LLaMA models demonstrate that SGC achieves performance comparable to or even better than existing PEFT methods on some tasks, while reducing memory requirements for optimizer states." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis and performance on the y-axis to directly compare the flexibility of SGC with LoRA. This visualization could more intuitively demonstrate whether SGC’s fine-grained control offers practical performance benefits at different sparsity levels.\n2. **Questionable Memory Advantage**: The memory usage for first-order optimization methods largely comes from the model parameters, gradients, activations, and optimizer states. Even with Adam’s two states, optimizer memory costs are typically less than half. SGC, based on Adam, can’t reduce memory below that of simple SGD without momentum, and since it still calculates full gradients, its GPU memory consumption may surpass LoRA, which doesn’t require full gradient computations.\n3. **Subpar Performance**: As seen in Table 2, SGC shows no clear performance advantage over methods like LoRA and GaLore, raising questions about its efficacy as a fine-tuning method.\n4. **Lack of Related Work Comparison**: The paper omits discussion and comparison with relevant optimizers like Adafactor[1] and CAME[2], which also focus on compressing optimizer states. These omissions reduce the context for understanding SGC’s place among similar methods. 
Including a comparison of task performance, memory efficiency, and convergence speed would better contextualize SGC's advantages and place among similar methods.\n\n**References**:\n\n[1] Shazeer, N., & Stern, M. (2018, July). Adafactor: Adaptive learning rates with sublinear memory cost. In *International Conference on Machine Learning* (pp. 4596-4604). PMLR.\n\n[2] Luo, Y., Ren, X., Zheng, Z., Jiang, Z., Jiang, X., & You, Y. (2023). CAME: Confidence-guided Adaptive Memory Efficient Optimization. In *Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 4442–4453, Toronto, Canada. Association for Computational Linguistics." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Can you justify in what scenarios fine-grained control over the training parameters is necessary?\n2. If fine-grained control of training parameters is required, are there simpler methods to achieve similar results, such as using different ranks in different transformer layers?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents a novel approach that addresses memory efficiency in large-scale fine-tuning tasks. The proposed approach enables more flexible and granular control over the number of parameters to train during fine-tuning.\n2. Experimental evaluation shows that SGC competes well with and often outperforms existing methods (e.g., LoRA, GaLore) in terms of memory efficiency and accuracy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Sparse Gradient Compression (SGC), a method aimed at reducing memory requirements when fine-tuning LLMs. SGC leverages gradient sparsity to update optimizer states within a low-dimensional subspace, effectively reducing memory usage while preserving performance. Experimental results demonstrate that SGC outperforms traditional parameter-efficient fine-tuning methods, especially in memory-limited settings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The authors highlight limitations in the flexibility and granularity of LoRA due to the dependency on model dimensions. However, in practical applications, these constraints may not significantly impact performance. Many real-world tasks do not require extreme reductions in trainable parameters, and the existing flexibility of LoRA is often sufficient. For instance, as shown in Table 2, LoRA fine-tunes only 0.2% of the parameters, meaning the LoRA weights and optimizer states are not the bottleneck—the base model weights and activations are. Reducing this further to 0.08% would likely not yield significant benefits." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please refer to some questions I listed in the weakness section.\n- Here are a few typos and minor improvements I found in the paper:\n1. Abstract: “dimensionality independent of the original model’s parameters” - it might be clearer to specify “independent of the dimensionality of the original model’s parameters.”\n2. Introduction (line 23): \"exising PEFT methods\" should be \"existing PEFT methods.\"\n3. Section 2 (line 127): \"Adapter-based methods... However, these approaches can introduce latency during inference.\" - The phrase \"these approaches can introduce latency\" might read more smoothly as \"these methods may increase latency.\"\n4. Equation in Section 3: Ensure consistency with spaces around equations and symbols, especially around parentheses and operators.\n5. Section 4.1 (line 217): \"sparisfys(·)\" should be \"sparsifys(·).\"\n6. Section 4.3 (line 265): \"compressed from pt and qt\" should read more clearly as \"compressed forms, pt and qt.\"\n7. Section 5.3: \"boolQ\" should consistently be capitalized as \"BoolQ\" to match standard dataset naming conventions.\n8. Section 5.4: \"As indicated in 3\" should be \"As indicated in Equation 3.\"\n\n- **Time Profiling**: SGC introduces additional computational overhead and extra time costs compared to full fine-tuning. Could the authors provide further discussion on the complexity of this additional computation, along with some time profiling results to illustrate the impact?\n\n\nI would like to discuss the questions I raised regarding the weaknesses and concerns with the authors. If my concerns are adequately addressed, I would be willing to reconsider my rating." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The presentation is clear and easy to follow, with only a few minor typos.\n- The proposed sparse gradient method is straightforward and supported by reasonable theoretical foundations." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The main contribution of the paper is the introduction of the Sparse Gradient Compression (SGC) method for memory-efficient fine-tuning of large language models (LLMs). SGC leverages inherent sparsity in gradients to compress optimizer states by projecting them onto a lower-dimensional subspace, independent of the model's original parameter dimensions. This approach offers a trade-off between memory efficiency and performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Dataset Limitation**: The authors only use a single dataset (Commonsense) in the experimental sections. 
I strongly recommend adding at least one more dataset to demonstrate the generalizability of the algorithm across different data domains.\n\n- **Comparison to LoRA in Speed**: While SGC effectively reduces optimizer memory costs similar to LoRA, LoRA offers additional advantages by significantly speeding up the fine-tuning process. Through low-rank adapters and fewer trainable parameters, LoRA can accelerate fine-tuning by 12–16 times compared to full fine-tuning. Since SGC performs full forward and backward propagation, it does not offer the same speed benefit and is likely to be significantly slower than LoRA. I suggest the authors include a training time profile for SGC to clarify this difference.\n\n- **Comparison to LoRA in Activation Memory**: LoRA also has the advantage of substantially reducing activation memory costs. For example, in LLaMA-2-7B full fine-tuning, the activation memory cost for a batch size of 128 can reach up to 40GB. With low-rank adapters, this can be reduced to under 1GB. Since SGC performs full forward and backward propagation, it does not reduce activation memory cost and is expected to be comparable to full fine-tuning. I recommend the authors discuss SGC's activation memory usage in more detail.\n\n- **Base Model Limitations**: Although the authors utilize LLaMA-2-7B, LLaMA-2-13B, and LLaMA-3-8B, I recommend including more state-of-the-art models, such as LLaMA-3.1. Additionally, incorporating models outside the LLaMA family, like Phi-2/Phi-3 or Mistral, could further demonstrate SGC's generalizability across different model architectures." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024sparse,\ntitle={Sparse Gradient Compression for Fine-Tuning Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xrtM8r0zdU},\nnote={under review}\n}" }, "abstract": { "value": "Fine-tuning large language models (LLMs) for downstream tasks has become increasingly crucial due to their widespread use and the growing availability of open-source models. However, the high memory costs associated with fine-tuning remain a significant challenge, especially as models increase in size. To address this, parameter efficient fine-tuning (PEFT) methods have been proposed to minimize the number of parameters required for fine-tuning LLMs. However, these approaches often tie the number of optimizer states to dimensions of model parameters, limiting flexibility and control during fine-tuning. In this paper, we propose sparse gradient compression (SGC), a training regime designed to address these limitations. Our approach leverages inherent sparsity in gradients to compress optimizer states by projecting them onto a low-dimensonal subspace, with dimensionality independent of the original model's parameters. By enabling optimizer state updates in an arbitrary low-dimensional subspace, SGC offers a flexible tradeoff between memory efficiency and performance. We demonstrate through experiments that SGC can decrease memory usage in optimizer states more effectively than exising PEFT methods. Furthermore, by fine-tuning LLaMA models on various downstream tasks, we show that SGC can deliver superior performance while substantially lowering optimizer state memory requirements, particularly in both data-limited and memory-limited settings." 
}, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Machine Learning", "Large Language Models", "Parameter efficient fine-tuning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/a99c736e97dec116334d6acd3d7482fb007c25b0.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Sparse Gradient Compression for Fine-Tuning Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xsELpEPn4A
JudgeLM: Fine-tuned Large Language Models are Scalable Judges
main
Active
LLM Judging
alignment, fairness, safety, privacy, and societal considerations
6;6;8;8
4;3;4;4
3;3;3;4
3;3;2;3
3;3;3;4
7
3.75
3.25
2.75
3.25
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- did you notice any favoring of GPT-4 answers by the JudgeLM? \n- What tasks are seen in PandaLM that arent seen in the JudgeLM dataset?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The problem setting of building cheaper, scalable LLM judges is an important problem now that LLM-driven evaluation is becoming standard, and the provided benchmark will be incredibly valuable to the community\n- This paper is easy to follow and provides a comprehensive analysis comparing JudgeLM to existing LLM judges on both accuracy and efficiency\n- Improvements over previous LLM judges is impressive and provides a promising alternative to expensive closed source model judges" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper targets evaluating and building LLM's specifically for judging answer correctness on open-ended tasks. To do this, they construct a dataset which consists of llm answers across a variety of tasks along with GPT-4 generated judgements used as a ground truth (ground truth references responses are sometimes supplied). This dataset is used to train smaller LLMs to provide judgements with comparable accuracy to SOTA LLMs. They also discuss the biases that result from the LLM judge finetuning (position bias, knowledge bias, format bias) and propose methods to address them." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Knowledge bias is not a bias but rather a limitation of the LLM, and the proposed solution to this seems to be providing that knowledge via a reference (AKA making the out of distribution task in distribution). While I am not opposed to this solution, I would argue this is not a bias addressed but rather a universally known failure case of the model that the authors try to mitigate by training on more knowledge via reference examples - a universally known solution to address model knowledge gaps.\n- While the validation set was manually checked and corrected by the authors, it does still rely on GPT generated outputs. This provides somewhat of an unfair evaluation as JudgeLM is trained on GPT generated judgements as well. Even with the human validation, there is a reasonable chance that if this dataset where annotated by a different LLM and produced different judgements, humans checking responses would also consider them reasonable. An unbiased way of annotating is for humans to provide judgements *without* knowing what the GPT judgement is. If the agreement between humans and the GPT judgements are similar, than I would consider this evaluation relatively fair across all judge models. 
\n- The authors did not provide clear evidence that this model is able to maintain good performance across tasks not in the training set. I suspect that the comparison to the PandaLM test set is showing this to some extent, but I did not see any prose on *how* these two datasets differ. What tasks are seen in PandaLM that aren't seen in the JudgeLM dataset? If the authors can show that the task distribution is significantly different from the training set, I would be satisfied" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Given that JudgeLM reportedly surpasses human-to-human agreement, how does it handle cases where human judgments may rely on subjective or context-dependent insights? Could the authors discuss potential scenarios where JudgeLM might still fall short of human evaluators in nuanced judgments, and possibly include or consider adding complex tasks to explore this?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper demonstrates a novel approach to scaling LLM evaluation by fine-tuning models as judges, creating a comprehensive system that addresses biases inherent in model evaluations. This creative combination of fine-tuning techniques with practical augmentation methods (swap, reference support, reference drop) removes limitations from prior works that struggled with consistent, scalable evaluation in open-ended tasks. \n\nThe quality of the work is solid, backed by a large-scale, carefully curated dataset, including GPT-4 judgments and human validation, which strengthens the empirical basis of the results. In terms of clarity, the paper effectively communicates its methodology and contributions. \n\nGiven the increasing role of LLMs across various fields, there is a pressing need for scalable, unbiased evaluation frameworks. JudgeLM seems to be a valuable tool in AI evaluation, with potential impact on future benchmarks and research in LLM development." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a scalable method for evaluating LLMs in open-ended tasks. It leverages a large, diverse dataset with tasks, LLM-generated answers, and GPT-4 judgments to fine-tune models that assess other models effectively. To mitigate biases such as position, knowledge, and format biases, the authors introduce techniques like swap augmentation, reference support, and reference drop. JudgeLM achieves high agreement rates with GPT-4, surpassing human-level consistency, and can judge multiple formats efficiently."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The improvements seem The dataset, although extensive, primarily relies on GPT-4 for initial judgments, which may inadvertently transfer GPT-4’s specific limitations to JudgeLM. A more diverse range of teacher models such as Claude could minimize over-reliance on any single model’s limitations, making JudgeLM’s judgments more adaptable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can JudgeLM’s performance be reliably extended to evaluate answers in different languages or domain-specific contexts (e.g., math, coding, legal or medical)?\n- How does the model handle cases where the reference answer may introduce bias rather than mitigate it?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- JudgeLM’s ability to process up to 5K samples in 3 minutes on an 8-GPU system is impressive, and it supports multiple use cases including single-answer and multimodal evaluations.\n- The swap augmentation and reference-based adjustments offer a nice way to mitigate biases that impact LLM judgments, contributing to more reliable scoring across scenarios.\n- The JudgeLM dataset, with its human-verified judgments and optional reference answers, is a notable contribution." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces JudgeLM, a scalable framework for evaluating LLMs by fine-tuning models as judges, using a dataset of LLM-generated answers and GPT-4 judgments. This framework addresses evaluation biases such as position and format biases through techniques like swap augmentation and reference drop. The authors show that JudgeLM achieves high alignment with the GPT-4 judge and is more efficient than comparable methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The model's robustness under varied task complexities or unseen domains is not extensively tested (e.g. math/coding/reasoning task). Additional benchmarks or diverse human annotations would reinforce its generalizability.\n- The study acknowledges that scaling up the judge dataset is costly and currently relies on GPT-4 outputs. Exploring alternative sources or synthetic judgment data could be beneficial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. In Table 3, why is JudgeLM able to judge answers in parallel, while PandaLM cannot? The authors mention certain “engineering optimizations for parallel judging”; Please provide specific details about these engineering optimizations, such as the parallel processing techniques used or any architectural changes that enable parallel judging. This would help readers better understand the efficiency advantages of JudgeLM over PandaLM.\n2. In both the Abstract and the Introduction, the authors emphasize that JudgeLM achieves an agreement exceeding 90%, surpassing *human-to-human agreement*. Could you clarify this comparison? My understanding is that this primarily reflects strong internal consistency within JudgeLM’s evaluations, yet the emphasis seems to be on JudgeLM’s superior performance beyond consistency alone. If so, I suggest to provide a direct comparison between JudgeLM-to-human agreement and GPT-4-to-human agreement. This would help clarify whether JudgeLM's performance truly surpasses human-level agreement or if it's primarily a measure of internal consistency. Or if not so, feel free to correct me :)\n3. In the Limitations section, the authors only mention the need to further scale up the judge dataset, which I find somewhat trivial. I recommend considering discuss more improvements to the dataset, such as refining the data distribution to real user quires. This could enhance JudgeLM’s performance in areas where the current data may lack diversity or balance. Please refer to my comments in the weaknesses section for further insights.\n\nI will increase the score if all these concerns are addressed." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Strong Motivation**: Training a judge LLM for efficient, effective evaluation of LLMs in open-ended benchmarks is highly valuable, as it enhances privacy, consistency, and cost-effectiveness.\n2. **Thorough Exploration of Key Research Questions**: The paper addresses significant questions around agreement, consistency, efficiency, scalability, and generalization for LLM-as-judge models.\n3. **Solid Experiments**: A convincing ablation study supports the effectiveness of data augmentation strategies, and the “grading, judging, and reasoning” pattern retains significant agreement and consistency benefits while improving efficiency.\n4. **Open-Sourced Code and Dataset**: The code and dataset are readily accessible and user-friendly, enabling further research and reproducibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A high-quality judge LLM should ideally demonstrate the following features: strong agreement with ground truth (often human annotations, though GPT-4 judgments are acceptable in some cases), consistency to diverse input formats (resilience to various biases), inference efficiency, scalability with data and parameter size, and generalization across scenarios. \nJudgeLM, as presented by the authors, offers an effective approach to optimizing these aspects:\n\n1. 
**Achieving High Agreement**: The authors collect a diverse, high-quality dataset at scale through a structured pipeline (though the pipeline itself is not entirely novel).\n2. **Achieving High Consistency**: To mitigate three critical biases—position, knowledge, and format biases—the authors employ three straightforward yet effective data augmentation methods: swap augmentations, reference support, and reference drop. This augmentation not only enhances consistency but also boosts agreement.\n3. **Enhancing Efficiency**: The authors adopt a “grading, judging, and reasoning” pattern, as opposed to an explanation-first (CoT) approach. This method achieves a balance, trading slight reductions in agreement and consistency for increased efficiency and flexibility.\n4. **Scalability**: Experimental results demonstrate JudgeLM’s scalability, as tested across varying model and data sizes. The 33B JudgeLM model even exceeds the agreement of its generalist teacher, GPT-4, on a human-labeled validation set.\n5. **Generalization**: JudgeLM exhibits promising generalization across various judging tasks (e.g., math problems, code generation) and diverse benchmarks (e.g., human-annotated, multimodal, and retrieval-based benchmarks)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Assumption of GPT-4 Judgments as Ground Truth**: Although GPT-4 is a common choice for cost-effective labeling, it would be beneficial if the authors could further substantiate its reliability before using it as a standard.\n2. **Suboptimal Training Query Distribution**: While the authors highlight the diversity and quality of the training data with statistics, further optimization of the ideal data distribution for training judge LLMs would add depth. What constitutes a more ideal query distribution? For example, the distribution of user queries collected in Chatbot Arena reflects a large volume of real user queries, which is one of the reasons Chatbot Arena has become such a well-recognized benchmark. If we align the query distribution in the training data more closely with the distribution of actual user queries, the resulting Judge LLM would evaluate samples more fairly, drawing on the philosophy behind the MixEval/MixEval-X paper.\n3. **Potential Overclaiming**:\n - Bias Analysis: In Section 4, while the paper discusses “position bias” and “knowledge bias,” these concepts are well-established in prior literature. The novel contribution here lies in addressing “format bias,” so the phrase “shed light on three key biases” could overstate the novelty. It might be more precise to note that this work builds on previous discussions of position and knowledge biases while introducing and addressing format bias as a new focus.\n - Scalability Comparison: In Appendix 4, the authors mention JudgeLM’s scalability as a differentiator from PandaLM. While PandaLM also explores different model sizes (up to 70B), JudgeLM offers a more granular analysis of scalability. It would clarify the novelty here to acknowledge PandaLM’s scalability exploration but emphasize JudgeLM’s finer scalability insights.\n4. **Paper Structure**: Despite the solid research content, the paper’s organization could benefit from a clearer structure, such as arranging sections by key research questions."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024judgelm,\ntitle={Judge{LM}: Fine-tuned Large Language Models are Scalable Judges},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xsELpEPn4A},\nnote={under review}\n}" }, "abstract": { "value": "Evaluating Large Language Models (LLMs) in open-ended scenarios is challenging because existing benchmarks and metrics can not measure them comprehensively. To address this problem, we propose to fine-tune LLMs as scalable judges (JudgeLM) to evaluate LLMs efficiently and effectively in open-ended benchmarks. We first propose a comprehensive, large-scale, high-quality dataset containing task seeds, LLMs-generated answers, and GPT-4-generated judgments for fine-tuning high-performance judges, as well as a new benchmark for evaluating the judges. We train JudgeLM at different scales from 7B, 13B, to 33B parameters, and conduct a systematic analysis of its capabilities and behaviors. We then analyze the key biases in fine-tuning LLM as a judge and consider them as position bias, knowledge bias, and format bias. To address these issues, JudgeLM introduces a bag of techniques including swap augmentation, reference support, and reference drop, which clearly enhance the judge's performance. JudgeLM obtains the state-of-the-art judge performance on both the existing PandaLM benchmark and our proposed new benchmark. Our JudgeLM is efficient and the JudgeLM-7B only needs 3 minutes to judge 5K samples with 8 A100 GPUs. JudgeLM obtains high agreement with the teacher judge, achieving an agreement exceeding 90% that even surpasses human-to-human agreement. JudgeLM also demonstrates extended capabilities in being judges of the single answer, multimodal models, multiple answers, multi-turn chat, etc." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM Judging" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/ccab311654194d449b12d9f9942b1214b76f6fea.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "JudgeLM: Fine-tuned Large Language Models are Scalable Judges" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xsmlrhoQzC
Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
main
Active
Interpretable belief state;uncertainty estimation;information gathering;intelligent agents;question-asking under uncertainty
interpretability and explainable AI
3;5;6;6
3;4;4;3
2;3;3;3
2;2;3;3
2;2;3;3
5
3.5
2.75
2.5
2.5
0.408248
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The reviewer has a question regarding the failure case in Fig. 1:\n* Is the system bottle-necked by the alignment and prompt-following capabilities of the text-to-image model or by the capabilities of the agent?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* In contrast to previous multi-turn T2I systems working on multi-turn user instructions, the proposed system asks question to the users for clarifications, which is a new form of interaction with the users orthogonal to previous works.\n* This work proposes an evaluation pipeline that simulates users, which makes the evaluation of proactive agents human-free and much easier. This pipeline could also benefit the development of future agents that ask questions.\n* The proposed DesignBench supplements COCO-Captions in the artistic images, which is a dataset that benchmarks the capabilities of text-to-image systems tailored to the needs of designers and artists.\n* Writing: the paper has its messages clearly conveyed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work addresses the problem of sub-optimal image generation from text-to-image generators due to under-specified or open-ended user prompts. The work proposes building proactive T2I agents that could ask clarifications questions. Furthermore, the understanding is present as a belief state visible and editable by the user. The work also proposes a scalable automated evaluation benchmark. Experimental results suggest that 90% of the human subjects found agents and the belief states useful for the T2I workflow, and the proposed agents significantly improve the VQAScores of the generation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The work frames the problem as the agent updating beliefs according to a fixed world state in the user's mind. However, the user may not have a clear idea in mind (i.e., the user may not have a pre-defined world state). In contast, the user might want to get some inspirations from the system without constraints. This use case has not been considered in the system design. This limitation might affect users such as artists. This was discussed in L534-539, but no suggestions were proposed.\n* The work uses LLMs in Ag2 and Ag3 without fine-tuning. However, this work does not explore trained LLMs (VLMs) with either image data or trajectories that include asking questions. This indicates that the LLM is purely exploring in text space. The exploration might be sub-optimal since exploration on text space might be different than exploration with images in mind.\n* The output of the system still might not follow user's instructions. For example, one of the generated images in Fig. 
1 does not have the rabbit chasing the dog." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Focus on D: implementation details and show your novelty. The main text is too shallow." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The question does arise as to how to make the generation of T2I models more personalised and more responsive to the specific and potential needs of users.\n1. Design and prototypes for T2I agents that adaptively ask clarification questions and present belief states.\n2. An automatic evaluation pipeline with simulated users to assess the question-asking skills of T2I agents.\n3. DesignBench: a new T2I agent benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "A new concept: building a graph-based symbolic belief state for agents to understand their own uncertainty about possible entities that might appear in the image.\nIt's a nice thing that this article wants to do, but the solution is boring: T2I + MLLM directly." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The solution is too simple and very uninventive. It puts in a new concept and then comes back with a self-explanatory statement.\nThe belief state is a pre-existing concept, and there's nothing inherently innovative about it." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Is there a validation process for the ground truth caption? What guarantees that it’s a well-generated caption?\n2. I wonder if 15 turns are essential. The number of turns needed would likely vary for each case, so wasn’t this considered? Why 15 turns specifically?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper's strength lies in its interactive approach to improving text-to-image (T2I) generation. By using proactive agents that ask clarification questions and utilize a graph-based belief state for transparency, the method addresses prompt underspecification effectively.
The combination of user-centric design, comprehensive evaluation, and the introduction of the DesignBench benchmark demonstrates significant improvements over traditional T2I models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a proactive approach to multi-turn text-to-image (T2I) generation, addressing the common problem of prompt underspecification, where users struggle to articulate their vision clearly, leading to suboptimal results. The proposed T2I agents actively engage with users by asking clarification questions and updating their understanding through an interpretable belief state that users can edit. The paper details the design of these agents, the use of a graph-based belief state to manage uncertainty, and the development of a new benchmark, DesignBench, for evaluation. Experimental results, including both automated evaluations and human studies, demonstrate that these proactive agents significantly improve image generation quality, achieving higher alignment with user intent compared to traditional single-turn T2I models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. If the approach involves making prompt engineering more detailed than the initial prompt, it seems intuitive that the resulting image would be more accurately generated as intended. Could you elaborate on what aspects of this method are novel or distinct beyond providing more detailed prompts?\n2. How do you determine which characteristics are crucial for accurately generating each image? A clearer and more logical explanation of the criteria or process used for this selection would be helpful." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It would be appreciated if the authors could elaborate on the technical challenges addressed, as well as the potential insights and future impact of this work in the field of computer vision." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and easy to understand, presenting concepts and methodologies in a clear and accessible manner. Additionally, the results are strong, demonstrating the effectiveness and reliability of the proposed approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a proactive design for text-to-image (T2I) agents that enhances user interaction by prompting clarification questions when user intent is unclear. The approach enables T2I agents to create an interpretable 'belief state'—a representation of their current understanding—that users can review and adjust as necessary. 
By allowing edits to this belief state, the system promotes transparency and collaboration, enabling the agent to refine its responses to better match the user’s expectations. Overall, this design aims to make T2I agents more interactive, adaptable, and user-centric." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "However, many previous methods have proposed self-correction techniques to improve generated images, as demonstrated in works such as [1], [2], and [3]. This paper, however, does not include comparisons with these related methods. Given this omission, I question the technical contribution of the paper and am uncertain whether it would be a suitable fit for CVPR.\n\nPrevious related works have proposed iterative improvements to image quality. Therefore, it would be helpful if the authors could address the following points: \n1) What is the difference between the iterative procedure used in the proposed methods and that of related works? There is no direct comparison with all related works. \n\n2) It would be beneficial if the authors could compare their methods with these works and explain why the proposed method has a competitive edge. \n\n3) If users make errors when inputting their data, is the proposed model able to detect and correct these mistakes? \n\n4) Regarding reasoning ability, if a user requests advanced generation requirements, such as \"a robot executes a task,\" is the proposed method capable of handling spatially-aware generation?\n\n[1] Wu, Tsung-Han, et al. \"Self-correcting llm-controlled diffusion models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[2] Yang, Ling, et al. \"Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal llms.\" Forty-first International Conference on Machine Learning. 2024.\n\n[3] Jiang, Dongzhi, et al. \"CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching.\" arXiv preprint arXiv:2404.03653 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Proactive agents for multi-turn uncertainty-aware text-to-image generation with an interface to ask questions when uncertain and present agent beliefs so users can directly edit" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024proactive,\ntitle={Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xsmlrhoQzC},\nnote={under review}\n}" }, "abstract": { "value": "User prompts for generative AI models are often underspecified or open-ended, which may lead to sub-optimal responses. This prompt underspecification problem is particularly evident in text-to-image (T2I) generation, where users commonly struggle to articulate their precise intent. This disconnect between the user’s vision and the model’s interpretation often forces users to keep refining their prompts.To address this, we propose a design for building proactive T2I agents equipped with an interface to actively ask clarification questions when uncertain, and present their understanding of user intent as an interpretable belief state that a user can edit. We build simple prototypes for such agents and verify their effectiveness through both human studies and automated evaluation. 
We observed that at least 90% of human subjects found these agents and their interpretable belief states helpful for their T2I workflow. Moreover, we use a scalable automated evaluation approach using two agents, one with a ground truth image while the other tries to ask as few questions as possible to align with the ground truth. On both the COCO dataset (Lin et al., 2014) and DesignBench, a more photorealistic benchmark we created, we observed that these T2I agents were able to ask informative questions and elicit crucial information to achieve successful alignment with at least 2 times higher VQAScore (Lin et al., 2024) than the standard single-turn T2I generation." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Interpretable belief state", "uncertainty estimation", "information gathering", "intelligent agents", "question-asking under uncertainty" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bb851eb7b8e4289ffbe33c4f3b4cf1648f0055b7.pdf" }, "presentation": null, "primary_area": { "value": "interpretability and explainable AI" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xsx3Fpo3UD
Advantage-Guided Distillation for Preference Alignment in Small Language Models
main
Active
Preference Alignment; Large language model; Knowledge Distillation; Advantage Function
foundation or frontier models, including LLMs
6;6;8;8
3;3;3;4
3;3;3;3
2;3;3;3
2;3;3;3
7
3.25
3
2.75
2.75
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Formula 11, $L_s$ seems to be undefined. Does it refer to $L_{SFT}$?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The two methods proposed in the paper combine RLHF with distillation, providing insights into the preference alignment of small language models and solving problems that previous methods had not addressed. \n- The experimental section is well-designed, with numerous comparative and ablation experiments to verify the performance improvements of the methods. \n- The paper is well-written, with detailed descriptions of the two methods, including algorithm steps and formulas, which are easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores how to enhance the effectiveness of small language models to make their generated outputs more aligned with human preferences through preference alignment techniques. To address the issue that technologies like RLHF do not align well with human preferences on small language models, the paper proposes two methods: Dual-Constrained Knowledge Distillation (DCKD) and Advantage-Guided Distillation for Preference Alignment (ADPA). The experimental results show that both methods can significantly improve the alignment of small language models and narrow the performance gap with large models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- ADPA provides more nuanced guidance signals, but the additional computations introduced by fine-grained signals may increase the computational overhead, and the paper seems to lack specific quantitative metrics on this point.\n- The experimental section utilizes a rather singular evaluation dataset, without providing results on a broader range of models, domains, and data. Additionally, there is a lack of experiments assessing the impact of teachers of varying proficiency levels on the results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. I wonder if directly applying some advanced alignment methods to the smaller language models can present a competitive performance.\n\n2. 
Could you elaborate more on the computational cost of the proposed methods vs. directly applying alignment techniques?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper investigates the challenge of aligning small language models; the proposed methods are well-motivated and principled.\n\n2. The ADPA method considers the distribution-level reward signal, which is an advancement compared to the previous KD methods.\n\n3. The KD baselines are comprehensively compared, and the studied teacher-student settings are representative.\n\n4. This paper is well organized and the presentation is decent." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the alignment of small language models. It points out that directly applying alignment techniques to small language models may not work well due to the limited capacity of these models. To this end, the authors present 1) a straightforward approach, Dual-Constrained KD, that integrates both positive and negative signals, and 2) an enhanced approach, Advantage-Guided Distillation for Preference Alignment, that involves a distribution-level reward signal given by an advantage function. Extensive experiments with three teacher-student model settings demonstrate the effectiveness of the proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited baselines of alignment methods. In Table 1, while the KD baselines are comprehensively compared, the alignment baseline only includes DPO.\n\n2. Computation overhead of the KD methods. It is not clear whether the proposed methods (and other baseline KD methods) consume more training or inference resources compared to directly applying alignment techniques to the student models." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why experiment on H2O-Danube3-500M and H2O-Danube2-1.8B-Base, given the availability of more popular models today, such as LLaMA-3.2-1B and Qwen2.5-0.5B? If experiments on more popular models can be provided, the results will be more solid." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The idea of improving the performance of small models in the preference alignment stage is interesting.\n2. Advantage-guided distillation from the preference-aligned teacher model to the student model is novel for knowledge distillation.\n3. The experiments are detailed and the presentation is good."
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This works combines the concepts of RLHF and knoledge distillation to propose the advantage-guided distillation method for preference alignment. This realizes the impressive the performance improvement of the small models in the preference alignment stage." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This is a good work. The proposed method is simple yet effective. I do not have additional concerns for it." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. How to leverage the preference signals for KD is an important but under-explored problem.\n2. The methods are concise and easy to implement.\n3. The empirical results of ADPA is strong." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper explores knowledge distillation (KD) for LLMs during the preference alignment stage. It first introduces a simple baseline DCKD which applies Vanilla KLD-based KD on both the positive and negative examples. Then, the paper propose ADPA to enhance the contrastive signals of KD. Experiments show the effectiveness of ADPA and its components, suggesting the importance of leveraging the perference signals for KD." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Although ADPA seems effective empirically, it is still unclear how the improvement is related to the motivation of the method. From Section 3.3 and Algorithm 1, it seems ADPA does not need the preference labels: only the prompts and the grouth truth reponses works for ADPA. How is ADPA related to preference alignment?\n2. From Table 2, it seems that the reference teacher model is critical for the effectiveness of ADPA. It would be better to add more explanation on why the difference between $\\pi_{dpo}$ and $\\pi_{ref}$ should be considered, rather than $\\pi_{dpo}$. Furthermore, what if $\\pi_{dpo}$ is removed from Equation (11)? (to show the effect of $-\\log \\pi_{sft}$ alone)" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024advantageguided,\ntitle={Advantage-Guided Distillation for Preference Alignment in Small Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xsx3Fpo3UD},\nnote={under review}\n}" }, "abstract": { "value": "Alignment techniques such as RLHF enable LLMs to generate outputs that align with human preferences and play an essential role in their effectiveness. 
However, their impact often diminishes when applied to smaller language models, likely due to the limited capacity of these models. Instead of directly applying existing alignment techniques to smaller models, we propose to utilize a well-aligned teacher LLM to guide the alignment process for these models, thereby facilitating the transfer of the teacher's knowledge of human preferences to the student model. To achieve this, we first explore a straightforward approach, Dual-Constrained Knowledge Distillation (DCKD), that employs knowledge distillation with two KL-divergence constraints from the aligned teacher to the unaligned student. To further enhance the contrastive effect, we then propose Advantage-Guided Distillation for Preference Alignment (ADPA), which leverages an advantage function from the aligned teacher to deliver more nuanced, distribution-level reward signals for the student's alignment. Our experimental results demonstrate that these two approaches appreciably improve the alignment of smaller language models and narrow the performance gap with their larger counterparts." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Preference Alignment; Large language model; Knowledge Distillation; Advantage Function" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/417d8d1a6e6f6aed03f573129307ab6006656eaf.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/d3972d136812a4ed80d01dcb494af5cdd9285b5c.zip" }, "title": { "value": "Advantage-Guided Distillation for Preference Alignment in Small Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xt3mCoDks7
Unlocking the Power of Gradient Guidance for Structure-Based Molecule Optimization
main
Active
molecule optimization;structure-based drug design;Bayesian flow network
applications to physical sciences (physics, chemistry, biology, etc.)
3;3;5;6
4;4;2;2
1;2;2;3
1;2;2;2
2;2;2;2
4.25
3
2
1.75
2
-0.96225
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How reliable is the oracle for real-world drug discovery? Is the cost of calling the oracle a concern? \n\nHow does the backward correction window size impact performance versus computational cost, and what determines the optimal balance?\n\nCan this joint optimization approach extend beyond the three basic properties (Affinity, QED, SA) to handle more complex molecular properties relevant to drug discovery?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Technical Innovation: The paper presents an approach to jointly optimize both continuous (atomic coordinates) and discrete (atom types) variables in molecule optimization.\n\nStrong Performance: The method achieves impressive results, showing a 4× improvement over gradient-based baselines and a 2× better \"Me-Better\" ratio than 3D baselines, while maintaining SE(3)-equivariance. The success rate of 51.3% on CrossDocked2020 represents a significant advance.\n\nPractical Application: The method demonstrates strong versatility across real drug design tasks like R-group optimization and scaffold hopping, and effectively balances multiple objectives while generating valid molecular structures. The backward correction strategy also provides a practical way to balance exploration and exploitation during optimization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "MolJO introduces a framework for structure-based molecule optimization that handles both continuous (atomic coordinates) and discrete (atom types) molecular properties through gradient guidance and Bayesian inference. The method achieves state-of-the-art results on the CrossDocked2020 benchmark with a 51.3% Success Rate and a 4× improvement over previous gradient-based methods, while maintaining SE(3)-equivariance. Using a backward correction strategy and joint optimization approach, MolJO demonstrates superior performance across various drug design tasks, though currently limited to three main objectives (Affinity, QED, SA)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Presentation: The paper is based on BFNs. A self-contained introduction may help the reader understand the proposed method better. A comparison with discrete diffusion and generative flow networks would also be helpful. \n\nLimited Objective Scope: The method is only validated on three objectives (Affinity, QED, SA) despite the wide range of important molecular properties in drug discovery. The paper does not explore crucial biological objectives or demonstrate how the approach would scale to more objectives.\n\nComputational Analysis Gaps: The paper lacks detailed analysis of computational requirements and efficiency.
There is insufficient discussion about how the backward correction window size affects computational costs, and no clear comparison of computational resources needed versus other methods.\n\nHyperparameter Sensitivity: The method's performance appears sensitive to key hyperparameters like guidance scale and correction window size, but the paper does not provide clear guidelines for selecting these parameters or analyze their impact systematically. This raises questions about the method's robustness in practical applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* What is the parameter $\\phi$ and how do you learn it?\n* How is the time-dependent energy function learned? Since you only know the property when $t=0$.\n* What are $\\mu$ and $y$ in proposition 4.1? $\\theta$ was first defined as $[\\mu, z]$ but later defined as $[\\mu, z = f(y)]$ while $f$ is not defined.\n* In eq. 8, it is a little confusing gradient over $y^*$ vs proposition 4.1 gradient of $y$.\n* In eq. 8, how is the chain rule performed? What is the dependence of E over $\\mu$ vs $h$?\n* What is $\\sigma'$?\n* One more suggestion is to have a broader discussion of related work about conditional generation in diffusion/flow models.\n* The backward correction idea is interesting, does it also connect to the resampling trick or restart sampling? [1, 2]\n* In eq. 12, by the linearity of Gaussian, what's the difference between predicting $\\hat{x}$ from $\\theta_{i-1}$ vs $\\theta_{i-2}$?\n* I am happy to raise my score if the authors clarify some of my concerns.\n\n[1] Lugmayr, A., Danelljan, M., Romero, A., Yu, F., Timofte, R. and Van Gool, L., 2022. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11461-11471).\n\n[2] Xu, Y., Deng, M., Cheng, X., Tian, Y., Liu, Z. and Jaakkola, T., 2023. Restart sampling for improving generative processes. Advances in Neural Information Processing Systems, 36, pp.76806-76838." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* Equipping molecular generative models, in particular diffusion and flow models, with conditional generation and optimization capability is important yet under-studied.\n* The proposed backward correction method to balance exploration and exploitation is novel.\n* Experiments are conducted on a wide range of optimization setups which show the promise of the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper considers the gradient guidance of generative models in the structure-based drug design problem. 
Specifically, this paper proposes a new method that handles both gradients to update the discrete atom token space and the continuous coordinate space. An additional backward correction strategy is proposed to improve the efficiency of the optimization process. Effectiveness of the proposed method is validated over a set of optimization setups including structure-based drug design, multi-objective optimization and substructure-constrained optimization." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* This paper misses a large chunk of literature on molecular optimization, see this review paper [1].\n* Some claims in the paper are not appropriate, previous structure-based drug design models also optimize both the atom type and coordinate [2, 3] (gradient-based algorithm and evolutionary algorithm). I am not sure if it is appropriate to call it the *first proof-of-concept for gradient-based optimization of continuous-discrete variables* because (1) the scope is very narrow, (2) other papers have worked on it [2, 3], (3) it is unclear why we have to use a gradient-based method, even for discrete probability distributions, and (4) the \"gradient\" for the discrete case is not a gradient but a weighting (which can relate to derivative-free optimization/sampling method such as sequential Monte Carlo). \n* Some notations are unclear.\n\n[1] Du, Y., Jamasb, A.R., Guo, J., Fu, T., Harris, C., Wang, Y., Duan, C., Liò, P., Schwaller, P. and Blundell, T.L., 2024. Machine learning-aided generative molecular design. Nature Machine Intelligence, pp.1-16.\n\n[2] Lee, S., Jo, J. and Hwang, S.J., 2023, July. Exploring chemical space with score-based out-of-distribution generation. In International Conference on Machine Learning (pp. 18872-18892). PMLR.\n\n[3] Schneuing, A., Harris, C., Du, Y., Didi, K., Jamasb, A., Igashov, I., Du, W., Gomes, C., Blundell, T., Lio, P. and Welling, M., 2022. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "I’m having difficulties following the method’s explanation, what variables represent, and how updates and equations are derived. A few examples below:\n \n- [BFN in preliminaries.] “The receiver holds a prior belief $\\theta_0$, and updates…” Exactly what does $\\theta$ represent? Is it pointwise parameters, is there a distribution defined over them?\n\n- [BFN in preliminaries.] Eq (1) shows a distribution over $\\theta_i$, but eq. (2) shows $\\theta$ as a deterministic variable (given observations y_0, …, y_i). I understand Eq (1) is the posterior given a datapoint $x$, integrating out potential observations, but Eq (2) represents the deterministic updates we get for some given observations at different noise levels?\n\n- The parameters $\\theta$ are fed through a NN to model output distribution over clean data. 
Why is it sensible to apply guidance over theta (if they are connected to the clean samples through a NN, and the energy is typically defined over clean samples)? Coming back to the question above, what do these latents represent? \n\n- [Proposition 4.1.] Is $\\mu_\\phi$ an output of the NN $\\Phi(\\theta_{i-1})$? Also, in the definition of $\\theta$ (line 201) $\\theta=[\\mu, z]$ you use $z$, but Eq (6) uses $y$, stating “recall $z=f(y)$”. I suppose this comes from Eq (2)? Are $y$ the noisy observations? If so, why is Eq (6) sampling $y$? (Should it be sampling $z$ which is part of $\\theta$?)\n\n- [Proposition 4.1.] Where are the original Gaussians from line 207 coming from (in-line equations, just before Eq (5))? Why are those the correct “unguided kernels” $p_\\phi$?\n\n- After reading section 4.1, it is unclear to me how samples are actually generated by the model without using backward correction.\n\n- [Line 233.] This line uses the notation $e_v$ without definition. $e_v$ is defined later in line 240. I’d suggest to introduce variables before using them in equations. That same paragraph states “Surprising as it may seem, this is mathematically grounded…” in which way?\n\nI know some of these questions are not related to the core method but are more general. But I think it would be good for the paper to be self-contained. Asking for a fully fledged description of BFNs in the main paper may be unrealistic, but introducing the necessary components, even briefly, that lead to the equations being used later on, would be good. Unfortunately, I cannot recommend acceptance for the paper in its current form. I’m open to revisiting my score if the paper is updated addressing these general comments (or if they are clarified during the discussion, if I’m missing/misunderstanding something).\n\nA few additional questions.\n\n- [Prop 4.1.] “it suffices to sample guided … [guided Gaussians]”. These expressions are based on a 1st order Taylor expansion. Would these become increasingly exact as the updates become smaller? (Related to the discretization used?) I think the approximation used here could be briefly discussed in the main text, as it is claimed before, in line 192, that the analytic expressions for the guided kernels are derived.\n\n- [Detail in Line 187.] The definition of $\\pi(\\theta_i | \\theta_{i-1})$ should have a $\\propto$ instead of $=$? If $p_\\phi$ and $p_E$ are both normalized, their product is not necessarily normalized. For instance, consider the product of two Gaussian densities, the resulting thing is not normalized." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The authors tackle the problem of applying gradient guidance jointly for discrete and continuous variables in the context of molecule generation. This is a challenging task, as naively applying gradient guidance to discrete variables is not possible. I think this problem is quite relevant, as proper use of different guidance techniques has been observed to lead to improved performances across many domains.\n\nTo the best of my knowledge, the method proposed by the authors, which relies on Bayesian Flow Networks and applying guidance in some underlying continuous variables, is novel. And empirical results appear to be strong." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper deals with generative models for molecules. Specifically, it proposes a way to improve generation quality by using gradient guidance, based on a specified energy function, for both continuous and discrete variables (atoms’ positions and types, respectively). The paper builds on Bayesian Flow Networks, which operates on some continuous latent variables, which facilitates guidance across both data modalities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "While I understand the core idea and problem in the paper, I find the details hard to follow, including exactly what variables represent and how are the update rules obtained. Please see the “questions” section below for extended details." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* In Line 420 we read \"Note that for fair comparison, we restrict the size of generated molecules by reference molecules so that both generative models and optimization methods navigate the similar chemical space,\" - this in particular means that you use the ground-truth information about the size of the reference molecule? I genuinely appreciate the authors reporting this explicitly. However, this brings a very important question: is this applied to all methods? Were the numbers for all baselines generated by the authors (as opposed to taking them from the publications) with this modification?\n* line 052 I do not see this as a weakness of DecompOpt. Some tasks require specific tools.\n* Line 077 There's no citation around \"gradient guidance\" What do authors mean? Classifier-free guidance? Classifier-based guidance? Something else? Based on the later reference, I assume the authors mean classifier-free guidance. However, based on the later parts of the paper I think it is something else.\n* Line 87 What are the suboptimal results? E.g. MoFlow [1] adopts this approach and achieves very good empirical performance\n* Line 102 what issue of inconsistencies?\n* Line 138 just because something is unusual does not mean it cannot work well in practice. See [1] again.\n* Line 141 - \"often a problematic assumption\" Citations? I am aware of works that use this assumption and work well in practice, i.e. [2] again.\n* Line 366 - Is the purpose of Figure 3 to show that the introduction of guidance results in a distribution shift? I think that's to be expected. To me, that's rather a sanity check than a strong insight. It would perhaps be more interesting to see how this plot compares with optimization-based methods?\n* Figure 5: Why not Vina Dock? 
I think this one is the most important—also no error bars or statistical significance tests.\n* Table 4: What happened to Vina Dock?\n* I would like to know more about the \"energy proxies\":\n * How large are these models - each energy proxy is as large as the TargetDiff model? This seems to make the baseline comparison unfair.\n * What are their sampling times?\n * How accurate are they?\n * How are they trained? Appendix C I think hints at it, but it seems incomplete (equation 21). How is $\\theta$ defined for the training purposes? I assume that the protein-ligand complexes are sampled according to the $p_{\\text{data}}$ distribution, but how are they then transformed to obtain $\\theta$?\n* Line 522 typo: \"gradiant\" -> \"gradient\"\n\n---\n[1] Zang et al \"MoFlow: an invertible flow model for generating molecular graphs\" (KDD 2020)" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* I think that the application of gradient guidance to SBMO is a good idea;\n* Comprehensive list of baselines;\n* Comprehensive list of evaluation tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper tackles the problem of structure-based molecule optimization (SBMO), i.e. a problem of generating candidate 3D molecules conditioned on a target protein. In contrast to structure-based drug design (SBDD), the generated molecule is also optimized for certain properties. The authors follow related work and model this problem using Bayesian Flow Networks (BFNs), which are designed to capture mixed modalities (discrete atom types and continuous 3D coordinates). The novelty introduced by the authors are (1) gradient guidance, which allows for explicit optimization of properties of interest, and (2) backward correction strategy, which corrects past estimates based on a current optimized one. The authors perform multiple experiments showing superiority of their method compared to various baselines on a variety of different tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Unclear contributions.** The authors claim two main contributions: (1) Gradient guidance in BFNs with mixed modality applied to SBMO and (2) novel backward correction strategy. Given some of the related work, I do not consider the contributions large enough for a full ICLR paper. Specifically:\n * BFNs have already been applied to 3D molecule modeling [2] and even to structure-based drug design [3, 4] (Note that authors do not cite [4], which is very strongly related to this work and definitely should be discussed).\n * It has been shown that BFNs can be seen as SDEs [1] and therefore it is well-known how to apply guidance there (both classifier-based and classifier-free).\n * The novel backward correction strategy seems to me to be a slight modification of the one suggested in [3].\n * Even the generative \"backbone\" model is taken from [3] as is without any modifications.\n2. I would rather characterize this work as \"finetuning\" of [3]. Specifically, my assessment of the contributions is:\n * the application of guidance to a pre-trained model (from [3])\n * a slight modification of the variance reduction technique, where instead of the full past, a sliding window is taken (again from [3])\n3. 
**Presentation needs improving.**\n * Some sentences are incomprehensible to me, such as\n * Line 182 \"Though different from guided diffusions that operate on noisy latent y, this guidance aligns with our generative process conditioned on θ\". What does it mean that guidance aligns with the generative process?\n * The introduction of guidance is the central component of the paper. However, we learn that the method \"requires training energy functions as a proxy\" in the last paragraph of the paper! I do not understand why the paper is not mostly discussing the energy proxies and how they are defined/trained/evaluated, etc. Furthermore, simply adding an energy proxy to any of the baselines would surely improve their performance. Even a modification as simple as: generating multiple candidates and choosing the best one using the energy proxy.\n * I do not understand what Figure 2 is supposed to convey. There is no \"take-home\" summarizing message. In the text (lines 314-316) we read \"it succeeds in balancing sample quality (explore) and optimization efficiency (exploit)\". I do not see that in Figure 2. Why is sample quality called \"explore\"? How is Figure 2 supporting the claim about the tradeoff? The colored lines to me seem random and without any clear pattern.\n4. **Lack of mathematical rigor.**\n * Line 187 definition of $\\pi$. Is it a purely heuristics-based definition? Regular guidance is derived from the Bayes' rule applied to the conditional log density (conditioned on some property of interest). What about this formulation? This is just a multiplication of two densities without an elaboration. \n * Proposition 4.1. I don't understand which parts of the proposition are assumptions and which are the claims. What does it mean that \"originally\" $\\mu_i$ follows a Gaussian? \n5. **A very strong objection I have is to the experimental design.** In my opinion it is impossible to assess the quality of the work without more information:\n * What are the model sizes? (the model proposed by the authors including all its components: the main model, energy proxies, and anything else that needs to be trained; and compare with model sizes of the baselines)\n * What is the training time? (Your method vs baselines)\n * What is the sampling time? (Your method vs baselines)\n * You include your method with a beam search. Perhaps other generative methods would perform even better when equipped with the beam search sampling strategy?\n * Optimization-based baselines. I strongly encourage the authors to include AutoGrow4 [5] as an optimization-based baseline. It has been recently reported to work significantly better than RGA [6] (a different version of the Vina software was used in that study - it has a different range of Vina Dock values)\n6. **Reproducibility is questionable.** The code is submitted, but there are no trained model checkpoints provided, so I cannot check the parameter count myself, nor check sampling time or verify the reported results.\n\n---\nReferences\n\n[1] Xue et al. \"Unifying Bayesian Flow Networks and Diffusion Models through Stochastic Differential Equations\" (ICML 2024) - It has been shown that BFNs are equivalent to DMs, so deriving guidance for BFNs is not a novel contribution\n\n[2] Tao et al. \"A Bayesian Flow Network Framework for Chemistry Tasks\" (Arxiv)\n\n[3] Qu et al. \"MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space\", (ICML 2024)\n\n[4] Song et al.
\"UNIFIED GENERATIVE MODELING OF 3D MOLECULES VIA BAYESIAN FLOW NETWORKS\" (ICLR 2024)\n\n[5] Spiegel et al. \"Autogrow4: an open-source genetic algorithm for de novo drug design and lead optimization\" (ChemInf 2020)\n\n[6] Karczewski et al. \"WHAT AILS GENERATIVE STRUCTURE-BASED DRUG DESIGN: TOO LITTLE OR TOO MUCH EXPRESSIVITY?\" (Arxiv)" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Enable gradient guidance over molecular data involving continuous coordinates and discrete types" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024unlocking,\ntitle={Unlocking the Power of Gradient Guidance for Structure-Based Molecule Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xt3mCoDks7},\nnote={under review}\n}" }, "abstract": { "value": "Structure-based molecule optimization (SBMO) aims to optimize molecules with both continuous coordinates and discrete types against protein targets.\nA promising direction is to exert gradient guidance on generative models given its remarkable success in images, but it is challenging to guide discrete data and risks inconsistencies between modalities.\nTo this end, we leverage a continuous and differentiable space derived through Bayesian inference, presenting Molecule Joint Optimization (MolJO), the first gradient-based SBMO framework that facilitates joint guidance signals across different modalities while preserving SE(3)-equivariance.\nWe introduce a novel backward correction strategy that optimizes within a sliding window of the past histories, allowing for a seamless trade-off between explore-and-exploit during optimization.\nOur proposed MolJO achieves state-of-the-art performance on CrossDocked2020 benchmark (Success Rate 51.3% , Vina Dock -9.05 and SA 0.78), more than 4x improvement in Success Rate compared to the gradient-based counterpart, and 2x \"Me-Better\" Ratio as much as 3D baselines.\nFurthermore, we extend MolJO to a wide range of optimization settings, including multi-objective optimization and challenging tasks in drug design such as R-group optimization and scaffold hopping, further underscoring its versatility and potential." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "molecule optimization", "structure-based drug design", "Bayesian flow network" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bbbbe7603a6f7bcce5463400b97dcdfc5abd157c.pdf" }, "presentation": null, "primary_area": { "value": "applications to physical sciences (physics, chemistry, biology, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. 
If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/f3a146bfbf13fd624db2aa894eb7ae354102fbb0.zip" }, "title": { "value": "Unlocking the Power of Gradient Guidance for Structure-Based Molecule Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xtTut5lisc
Iterative Feature Space Optimization through Incremental Adaptive Evaluation
main
Active
Automated Feature Optimization;Incremental Learning;Feature Space Evaluator
other topics in machine learning (i.e., none of the above)
3;5;5
3;3;4
2;3;2
2;2;2
1;2;3
4.333333
3.333333
2.333333
2
2
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The complexity of feature subspace generation and multi-head attention mechanisms still makes the EASE framework computationally intensive during training. Due to the large number of parameters in multi-head attention, which require constant updating, the efficiency of EASE may be suboptimal in large-scale datasets or high-dimensional feature spaces.\n2. The regularization parameters mentioned in this paper significantly impact the stability and adaptability of parameters in incremental updates, but detailed tuning strategies for different scenarios are lacking. Additionally, the size and sampling strategy of the feature-sample subspaces generated in different iteration steps could directly affect the accuracy and computational cost of final feature space evaluation.\n3. While the EASE framework performs well experimentally, it lacks theoretical analysis to explain its generalizability across different feature spaces and tasks. For instance, the paper does not provide detailed theoretical support for whether incremental updates are effective across all feature space distributions, or whether the contextual attention mechanism is universally applicable." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The EASE framework includes two key components: a feature-sample subspace generator and a contextual attention evaluator. The feature-sample subspace generator creates feature subspaces relevant to downstream tasks, allowing the evaluator to focus on the most challenging subspaces in each iteration.\n2. An incremental update strategy is introduced in this paper, which retains historical parameter weights and updates only critical parameters when new feature spaces appear, reducing the computational cost of retraining from scratch each time. Additionally, an Elastic Weight Consolidation (EWC) strategy is used to calculate Fisher information." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a general adaptive feature space evaluator named EASE, designed to optimize feature spaces through incremental updates and a contextual attention mechanism. By generating feature-sample subspaces and conducting incremental evaluations, this framework improves the efficiency and accuracy of feature selection, with its performance validated across multiple real-world datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1 The paper presents the innovative EASE framework. However, it provides limited implementation details, such as hyperparameter configurations, optimization strategies, and specific training processes. 
\n2 The paper compares EASE with some common models (such as GBDT and Random Forest) but lacks detailed comparisons with current state-of-the-art feature selection or feature space evaluation methods.\n3 The theoretical analysis of EASE relies on specific assumptions about data distribution and feature relevance, which may not always align with real-world data, especially in datasets where features are weakly related or uncorrelated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper effectively tackles prevalent issues in feature space optimization, such as bias, generalization, and training inefficiency, providing a more robust evaluation methodology.\n\n2. The combination of the Feature-Sample Subspace Generator and the Contextual Attention Evaluator offers a comprehensive solution that both decouples complex interactions and captures evolving patterns in feature spaces.\n\n3. The extensive experiments on multiple real-world datasets substantiate the superiority of EASE in terms of both accuracy and efficiency, enhancing the paper's credibility." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper designs a new framework (i.e., EASE) to enhance the optimization of feature spaces in machine learning tasks. EASE addresses common limitations in existing methods, such as evaluation bias, poor generalization, and inefficient training. It comprises two main components: the Feature-Sample Subspace Generator and the Contextual Attention Evaluator. The former mitigates evaluation bias by decoupling information within the feature space, while the latter captures evolving patterns incrementally for efficient evaluation. The framework is tested on twelve real-world datasets, demonstrating its effectiveness and efficiency over traditional methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper conducts efficiency comparison experiments of different feature space evaluators across various datasets. It would be better to further analyze the time complexity of different feature space evaluators to demonstrate the efficiency of the proposed method.\n\n2. The proposed method is applied to two iterative feature selection frameworks to validate its effectiveness and generalization capability, i.e., RFE and FLSR. However, both baselines are outdated. Are there any recent baselines the method can be applied to?\n\n3. The datasets used are relatively small, i.e., almost all of their sample sizes are smaller than 10 thousand. How about applying the proposed method to large-scale datasets?\n\n4.
I am a little concerned about the innovativeness of the proposed method, because Section 4.2 contextual attention evaluator seems to simply apply the self-attention mechanism." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Since this paper is considering feature selection, wouldn't it be easier for readers to understand if the authors propose the method as a feature selection method rather than using the abstract term 'feature space optimization'? \n- In Figure 3, why is the proposed method faster than other comparison methods? Since the proposed method is based on neural networks (attention), I think it is more expensive." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The feature selection is an important task in machine learning.\n- The effectiveness of the proposed method was evaluated with many datasets including classification and regression tasks and many evaluation metrics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method to iteratively find good feature space (specifically, a set of relevant features in the original features) to improve downstream task performance. For this, the authors propose a Feature-sample subspace generator that finds relevant features and difficult samples in each optimization step and a Contextual attention evaluator that evaluates the selected features so that it can improve the prediction performance. The experiments show that the proposed method outperformed existing iterative feature space optimization methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The significance of the task of iterative feature space optimization is not clear. For example, even if you don't find a feature space (specifically, feature selection) iteratively, lasso or neural network-based embedding-based feature selection methods can select features that directly improve the performance of downstream tasks by optimizing a single objective function once. This paper may be trying to solve unnecessarily complex problems.\n- The presentation quality and clarity of this paper are low. For example, in Eq. 1, the evaluator $M$ takes the feature space as input and calculates the loss with the label space. What is the loss between spaces? Isn't it correct to say data (feature) matrix and label vector, etc., rather than spaces? The word 'space' is associated with mathematical vector spaces, etc., which can be confusing. I also need help understanding Eq. 3. Is the score a scalar or a vector? Because the specific form of $F^t$ (whether it's a vector, a matrix, or something else) is not described, the output of $M$ is also unclear. Thus, the definition of Eq. 3 is ambiguous. \nAlso, Eq. 
2 in section 3 is different from Eq. 10, which is actually used. What is the intention of introducing a specific formula Eq. 2 at the problem-setting stage? I think that this makes the paper unnecessarily difficult to understand. I think the overall explanation of the proposed method needs to be improved." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose an incremental adaptive evaluator for the iterative feature space optimization task to assess the quality of the feature space." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024iterative,\ntitle={Iterative Feature Space Optimization through Incremental Adaptive Evaluation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xtTut5lisc},\nnote={under review}\n}" }, "abstract": { "value": "Iterative feature space optimization involves systematically evaluating and adjusting the feature space to improve downstream task performance. However, existing works suffer from three key limitations: 1) overlooking differences among data samples leads to evaluation bias; 2) tailoring feature spaces to specific machine learning models results in overfitting and poor generalization; 3) requiring\nthe evaluator to be retrained from scratch during each optimization iteration significantly reduces the overall efficiency of the optimization process. To bridge these gaps, we propose a gEneralized Adaptive feature Space Evaluator (EASE) to efficiently produce optimal and generalized feature spaces. This framework consists of two key components: Feature-Sample Subspace Generator and Contextual Attention Evaluator. The first component aims to decouple the information distribution within the feature space to mitigate evaluation bias. To achieve this, we first identify features most relevant to prediction tasks and samples most challenging for evaluation based on feedback from the subsequent evaluator. These identified features and samples are then used to construct feature subspaces for the next optimization iteration. This decoupling strategy makes the evaluator consistently target the most challenging aspects of the feature space. The second component intends to incrementally capture evolving patterns of the feature space for efficient evaluation. We propose a weighted-sharing multi-head attention mechanism to encode key characteristics of the feature space into an embedding vector for evaluation. Moreover, the evaluator is updated incrementally, retaining prior evaluation knowledge while incorporating new insights, as consecutive feature spaces during the optimization process share partial information. Extensive experiments on twelve real-world datasets demonstrate the effectiveness of the proposed framework. Our code and data are publicly available (https://anonymous.4open.science/r/EASE-1C51)." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics."
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Automated Feature Optimization", "Incremental Learning", "Feature Space Evaluator" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/361c306207dd786d89223e110d42d7b6c36370a0.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Iterative Feature Space Optimization through Incremental Adaptive Evaluation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xtlMtbVfWu
EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models
main
Active
Distributed Training;Large Language Models;Local SGD;Training Acceleration
infrastructure, software libraries, hardware, systems, etc.
3;5;5
4;5;4
2;3;3
2;2;2
2;2;3
4.333333
4.333333
2.666667
2
2.333333
0.5
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weakness section." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The paper introduces an innovative approach that combines Local SGD with asynchrony and gradient penalty, which addresses communication overhead and resource elasticity.\n* The paper rigorously evaluates EDiT and A-EDiT on multiple benchmarks, demonstrating improved performance in training speed, stability, and generalization compared to state-of-the-art methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents EDiT (Efficient Distributed Training) and its asynchronous variant A-EDiT, which aim to improve the efficiency of distributed training for large language models (LLMs). EDiT solves issues in existing distributed training such as communication bottlenecks, straggler delays, and limited scalability in heterogeneous environments. A pseudo-gradient penalty strategy is introduced to enhance training stability. Experimental results suggest EDiT and A-EDiT address the straggler issue and are more stable compared to baselines like DiLoCo." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* EDiT and A-EDiT rely on a range of hyperparameters; how should we think about and choose $\\alpha, \\phi, \\beta$ on a new training task?\n* The authors propose using the gradient norm as a metric for anomaly elimination and gradient penalty. However, in LLM training, gradients often have outliers on some of the examples. If we ignore examples with large gradient norms, will it create bias in training?\n* It would also help if the authors can provide a convergence analysis for the proposed EDiT method. Do EDiT and A-EDiT have convergence guarantees?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* **Intra node for model shards:** on standard GPU clusters using nodes with 8 A100 or H100 GPUs with 80GB of memory, what size of model can be loaded & how many intra-node model replicas does it lead to?\n* **line 168:** a warm-up phase is used as in post-local SGD.
However, DiLoCo finds that it is not really helpful (cf their Fig.3), do you observe a different behavior here?\n* **Tab.1 & Fig.4**: Why not experiment with A-EDiT on the in-house dataset?\n* **Fig 5.a)**: can you explain the reason why A-EDiT and EDiT exhibit the same throughput in the random-straggler scenario?\n* **In ablation Fig 7.a):** the Gradient Clipping doesn’t seem to have much effect in the validation PPL curve, is it really necessary to keep it along with WA and anomaly detection (which both seem to smooth the spikes observed in the validation PPL curve during training)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* **The novel gradient penalty method seems effective**: In the experiment on the in-house dataset (Fig 4.d), the unstable behavior of standard local-SGD methods is highlighted, and the gradient penalty introduced seems to be an effective solution to alleviate it.\n* **Advantages of a proper distributed implementation of a local SGD algorithm are highlighted**: Section 4.3 displays the practical advantages (in terms of throughput) given by a proper implementation of local SGD algorithms using modern distributed methods (such as ZeRO-3/FSDP) when communication links between distributed workers have limited bandwidth.\n* **Asynchronous extension**: In the presence of stragglers, the advantage (in terms of throughput) of using an asynchronous extension of EDiT are highlighted in Sec.4.3." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces EDiT, a distributed training method leveraging local SGD to reduce communication overhead when training LLMs. The method relies on a \"2D-grid\" graph topology with a total of $K= M \\times N$ data parallel models. On highly connected GPUs (e.g., on the same compute node in a cluster), $N$ models are sharded and communicate at each optimizer step. On the other hand, $M$ of these islands of highly connected machines are connected through possibly lower bandwidth communication links (e.g., on separate compute nodes), and leverage the lower communication frequency needs of local SGD to alleviate communication bottlenecks. To counter the destabilizing effect of using lower batch sizes in local SGD, a gradient penalty method is introduced. Finally, an asynchronous version of the method is also introduced to mitigate the effect of stragglers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* **Claiming novelty on the distributed implementation of a local SGD algorithm seems too strong:** As the sharding strategy itself cannot be claimed as novel (see the ZeRO series, FSDP), the “distributed implementation” cannot either. In fact, apart from the gradient penalty method, the Algorithm seems to be a straightforward and natural way of implementing local SGD for LLM training (using ZeRO-3) and not particularly novel (previous local SGD papers such as CO2 [[Sun et al., 2024]](https://arxiv.org/pdf/2401.16265) seem to already make use of ZeRO for their Autoregressive Language Modeling task, and SlowMo [[Wang et al., 2020]](https://arxiv.org/pdf/1910.00643 ) already consider all GPUs inside a cluster node to be a single local worker in their experiments so that each local worker reaches larger batch sizes through data-parallel). 
Thus, although having a *detailed* pseudo-code for this implementation is welcome, making it the main contribution of the paper and dedicating so much space to explaining it weakens the paper for me. For instance, as the stabilizing effect of the gradient penalty is empirically demonstrated, focusing on this novel observation and contribution seems more relevant to me. \n* **No comparison with CO2**: while experiments with SlowMo/DiLoCo are performed, no comparison with the state-of-the-art method CO2 [[Sun et al., 2024]](https://arxiv.org/pdf/2401.16265) is done.\n* **Hyper-parameters introduced:** EDiT introduces 3 novel hyper-parameters $\\alpha, \\delta, \\phi$, but the impact of their values on the method is not discussed.\n* **Lack of clarity and imprecision in the writing:**\n * It was not clear at first that the $N$ models were *data parallel sharded* model replicas and **not** model-parallel splits. Maybe clarifying this distinction early in the paper could avoid potential confusion.\n * **line 21:** *“ensure training stability and improve performance.”* this is a bit cryptic. Maybe detailing the reason for these instabilities (as done in lines 210-213) early in the paper would help understand the challenges tackled here more clearly.\n * **line 50:** *“Current Local SGD methods do not integrate well with modern distributed strategies”*. Why? Can you explain your point? For instance, the CO2 paper claims that their algorithm is compatible with ZeRO-series optimizers.\n * **lines 73-75:** the problem described is exactly the one the CO2 paper aims at solving, so saying *“current local-SGD method”* seems to be an overstatement here.\n * **line 116:** *“DiLoCo extends the slow momentum in SlowMo to the outer optimizer.”* this statement does not seem accurate: the momentum in SlowMo is already in the outer loop, and seems to be roughly equivalent to DiLoCo (with Nesterov SGD as the outer optimizer, which is also noted in line 752). Can you clarify in which way DiLoCo extends SlowMo in this context?\n * **line 170:** *“model synchronization”*. What does this mean? Can you provide a clear definition or explanation of this?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The proposed hierarchical distributed training method on a 2-D device mesh should be better explained, e.g., how workers within a model shard group coordinate with each other. What messages are transmitted between them (should they be intermediate results instead of parameters?)\n2. Is there any design in the proposed method addressing the concern that each worker within a model shard group may need to wait for the backward gradients of each batch from the next worker in the same group? \n3. How does this proposed hierarchical distributed training method accelerate distributed training? It should be evaluated in the numerical results. \n4.
In Section 4.5, it seems to suggest that the DiLoCo scheme with the pseudo gradient penalty mechanism is the proposed scheme EDiT?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "It is an interesting and novel idea of arranging the participating workers into a two-dimensional device mesh, i.e., model replica group and the model shard group. The experimental results also demonstrate the effectiveness of the proposed pseudo gradient penalty mechanism and asynchronous version of the proposed scheme. The presentation of this paper is fair and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a distributed training approach that arranges the participating workers into a two-dimensional device mesh, i.e., model replica group and the model shard group. It also introduces a pseudo gradient penalty mechanism that eliminates the significantly anomalous worker and averages and clips the gradients. It also introduces an asynchronous variant of the proposed scheme. Experiments have been conducted to evaluate the performance of the proposed scheme on the distributed training of LLMs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The novelty of this paper is limited. The author claimed that this paper is the first to integrate Local SGD with modern distributed strategies. What \"modern distributed strategies\" refers to here is not clear. Isn't local SGD a modern distributed strategy? The approach of eliminating significantly anomalous workers and averaging and clipping the gradients is not a new idea. \n2. The hierarchical distributed training method on a 2-D device mesh may be problematic. Since each worker within a model shard group runs a subpart of the model, it may need to wait for the backward gradients of the same batch before processing the next batch, leading to significant idle time and slowing down the training. \n3. The improvement over the existing work is quite insignificant as shown in Figure 4. The performance of the proposed scheme is only compared to the baseline, which is not sufficient to demonstrate its superiority. The author should compare it with a more advanced asynchronous scheme, such as the following work:\n[1] Nguyen J, Malik K, Zhan H, et al. Federated learning with buffered asynchronous aggregation[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2022: 3581-3607. \n4. No theoretical results on the convergence of the proposed approach are provided, especially with this hierarchical distributed training design." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024edit,\ntitle={{ED}iT: A Local-{SGD}-Based Efficient Distributed Training Method for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xtlMtbVfWu},\nnote={under review}\n}" }, "abstract": { "value": "Distributed training methods are crucial for large language models (LLMs).
However, existing distributed training methods often suffer from communication bottlenecks, stragglers, and limited elasticity, particularly in heterogeneous or large-scale environments. Local SGD methods have been proposed to address these issues, but their effectiveness remains constrained to small-scale training due to the lack of robust distributed strategies and concerns over efficiency and stability. To tackle these issues, we propose EDiT, an innovative Efficient Distributed Training method that combines a tailored Local SGD approach with advanced distributed techniques to enhance large-scale training efficiency, and employ a pseudo gradient penalty strategy to ensure training stability and improve performance. Additionally, we introduce A-EDiT, a fully asynchronous variant of EDiT that accommodates heterogeneous clusters. Building on EDiT/A-EDiT, we conduct a series of experiments to validate large-scale asynchronous training for LLMs, accompanied by comprehensive analyses. Experimental results demonstrate the superior performance of EDiT/A-EDiT in terms of convergence, generalization, acceleration, scalability, and stability, establishing them as robust solutions for distributed LLM training in diverse computational ecosystems." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Distributed Training", "Large Language Models", "Local SGD", "Training Acceleration" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1872a3be01566e8d3de0a696ddbb3468b01858bc.pdf" }, "presentation": null, "primary_area": { "value": "infrastructure, software libraries, hardware, systems, etc." }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/de8ac8f34de2849ab96aa89166ffdb5374bc2b42.zip" }, "title": { "value": "EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
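For readers skimming the EDiT record above, the recurring terms "local SGD", "pseudo-gradient", and "pseudo gradient penalty" refer to a generic two-level scheme: each worker takes several local optimizer steps, the drift between the global and local weights is treated as a pseudo-gradient, anomalous workers are filtered out, and the clipped average drives an outer update. The sketch below is only an illustration of that generic scheme as the reviews describe it — it is not the authors' implementation, and the worker loop, the number of local steps `H`, the median-based anomaly rule, and the clipping threshold are all hypothetical choices.

```python
import copy
import torch

def local_sgd_round(global_model, worker_batches, H=4, inner_lr=0.1, outer_lr=1.0, clip=1.0):
    """One communication round of local SGD with an outer pseudo-gradient step (toy version)."""
    pseudo_grads = []
    for data, target in worker_batches:
        # Each worker starts from the current global weights and takes H local steps.
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=inner_lr)
        for _ in range(H):
            opt.zero_grad()
            torch.nn.functional.mse_loss(local(data), target).backward()
            opt.step()
        # Pseudo-gradient = global parameters minus locally updated parameters (flattened).
        g = torch.cat([(pg.detach() - pl.detach()).flatten()
                       for pg, pl in zip(global_model.parameters(), local.parameters())])
        pseudo_grads.append(g)

    # "Penalty" stage (toy rule): drop workers with anomalously large pseudo-gradients,
    # then average and clip what remains.
    norms = torch.stack([g.norm() for g in pseudo_grads])
    keep = norms <= 3.0 * norms.median()
    avg = torch.stack([g for g, k in zip(pseudo_grads, keep) if k]).mean(dim=0)
    avg = avg * min(1.0, clip / (avg.norm().item() + 1e-12))

    # Outer update: treat the averaged pseudo-gradient as a gradient for the global model.
    offset = 0
    with torch.no_grad():
        for p in global_model.parameters():
            n = p.numel()
            p -= outer_lr * avg[offset:offset + n].view_as(p)
            offset += n
    return global_model

# Hypothetical toy usage: four "workers", each holding one regression batch.
# model = torch.nn.Linear(8, 1)
# batches = [(torch.randn(16, 8), torch.randn(16, 1)) for _ in range(4)]
# for _ in range(10):
#     local_sgd_round(model, batches)
```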
xtp6QPnwLu
Imit-Diff: Semantics Guided Diffusion Transformer with Dual Resolution Fusion for Imitation Learning
main
Active
Imitation learning;Diffusion Policy;Dual Resolution;Semantics Injection
applications to robotics, autonomy, planning
3;3;5;5
4;5;3;3
2;2;3;3
2;2;2;2
3;2;3;3
4
3.75
2.5
2
2.75
-0.904534
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "n/a" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The proposed method is technically sound, incorporating richer visual cues with multiple \nresolution features, semantic masks, and Consistency Policy, each of which is promising to enhance diffusion-based policy learning.\n\n- The proposed enhancement for visual input looks widely applicable and can be useful for various manipulation tasks in the future. \n\n- The experiments are conducted on real hardware, unlike most RL works in ML/AI venues." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This submission proposes a new diffusion-based imitation learning method for robotic visuomotor control tasks.\nThe proposed method Imit-Diff follows the Diffusion-Transformer-based policy learning framework,\nand three new ideas are introduced to enhance performance.\nFirst, Dual Resolution Fusion utilizes original-resolution and downsampled images from environment and arm-mounted cameras\n to capture global and fine-grained visual information.\nSecond, Semantic Injection provides input images masked by manipulation-target segments with open-vocabulary \ndetector and tracker.\nThird, Consistency Policy by Prasad et al. (2024) is introduced for faster sampling.\nExperiments are conducted using a real robot arm and the proposed method outperformed ACT and DP-T." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Technical novelty of the proposed method seems limited:\nI think that the core of the Imit-Diff's learning mechanism is the diffusion-based policy learning,\nbut it is almost unchanged from Diffusion Policy's, except for introducing a consistency-based loss.\nOverall, the contribution of the work is at the system level, and might not be the best fit for an ML venue like ICLR. \n\n- The effect of open-vocabulary vision models is not demonstrated well:\nThe used \"unseen\" manipulation targets are blocks of new colors, which seem insufficient to\nassess open-set generalizability of the method.\nDiverse objects are used as the clutter, but they are not similar to the targets so\nit is questionable whether they make the task difficult enough." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please see the Weaknesses section above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The overall presentation is relatively easy to understand.\n* The experiments show that the proposed Imit-Diff brings some improvements on some real-world tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes the semantics guided diffusion transformer with dual resolution fusion for imitation learning. The introduced framework, termed Imit-Diff, focuses on semantic and fine-grained feature extraction, improving the generalization on unseen objects and environments. Imit-Diff mainly includes three key components: Dual Resolution Fusion, Semantics Injection and Consistency Policy on DiT. The proposed method outperforms some typical baselines, such as ACT and Diffusion Policy, on some real-world tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper's writing is not well prepared, as there are many obvious problems. Why is the proposed policy called Rep-Diff in Figure 1(c)? In the third paragraph of the introduction, the detailed reason for introducing the three proposed modules is missing.\n* The overall method indeed lacks novelty. The ConvNext and DINOv2 are directly used as the visual encoders for the high/low-resolution inputs. Grounding DINO/MixFormerv2/MobileSAM are used for Semantics Injection. Moreover, the Consistency Policy proposed by Prasad et al. (2024) is employed for few-step or single-step diffusion. The authors should clarify the contribution and its motivation. Why use ConvNext and DINOv2 for visual encoding?\n* I feel the ablation study part is insufficient. It lacks the overall ablations for the introduced three modules: Dual Resolution Fusion, Semantics Injection and Consistency Policy. I want to see the improvements of Dual Resolution Fusion (low/high resolution inputs) instead of the detailed settings, like the loss or FPN feature levels.\n* The paper lacks a comparison of the inference speed of the overall framework. The comparison in Tab.5 is meaningless because CTM is only one part of Imit-Diff. I feel the inference time is a large problem since Imit-Diff introduces so many modules such as Grounding DINO/MixFormerv2/.\n* The paper only compares Imit-Diff with ACT and Diffusion Policy. I would like to see the generalization comparison with some vision-language-action approaches such as RT-2/OpenVLA." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The questions are mainly related to the fairness of comparison, which are in the weaknesses section. Please see above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* S1: The presentation of this paper is clear. The methods are proposed with reasonable motivation and explained clearly.\n\n* S2: The proposed method, including dual resolution fusion, semantic injection, and consistency policy, all lead to improvement to the system, supported by the experiments.\n\n* S3: The authors have conducted experiments on real-world robots and outperformed previous state-of-the-art ACT and diffusion policies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a framework called \"Imit-Diff\" which improves the learning and efficiency of imitation learning algorithms. The method proposed emphasizes (1) using both low-resolution and high-resolution features without significantly increasing the number of tokens; (2) using semantic injection (e.g., with open-set detectors like Grounding-DINO) to guide the imitation learning with prior masks explicitly; (3) improving the sampling speed of diffusion policy with consistency models. The authors support the effectiveness of Imit-diff with real-world experiments and show advantages in cluttered scenes and task interruption." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* W1: When comparing with the previous state of the art, Imit-Diff uses a much larger vision-backbone of ConvNext + ViT-S, while ACT and Diffusion Policy only uses a ResNet-18 (L341-L342). Therefore, it is unclear whether the improvement of the method comes from a larger learnable capacity.\n\n* W2: Table 4 might also be confusing since it shows the importance of pre-trained weights and backbones: when using ViT-S, the success rate is 30%, lower than ACT and Diffusion Policy. From my understanding, ViT-S is a more capable backbone than ResNet-18. Therefore, this table further raises concerns about the method's effectiveness and might require further clarification: Does the effectiveness come from the modules introduced by the authors or the pre-trained DINOv2 weights?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Given the large impact of different visual backbones (Sec 4.4), do the authors have comparative results indicating which foundation model (e.g., CLIP, DINOv2, SAM2) performs better?\n2. 
How specifically do efficiency and performance metrics change after applying the consistency policy?\n3. In Tab 4e, were the models re-trained when varying the FPN layers? In addition, regarding this ablation study, are there results for directly altering the resolution of high-res inputs? From current experiments, the improvements may stem from fusing two pre-trained encoders." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The proposed method integrates prior-based semantics to enhance generalization in imitation learning.\n2. Using multi-scale features to improve the performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents Imit-Diff, an imitation learning model that enhances fine-grained perception and semantic understanding in robotics. To generalize in complex scenes, Imit-Diff introduces a *Dual Resolution Fusion* mechanism to integrate high- and low-resolution visual information, a *Semantics Injection* method to incorporate prior knowledge through masks from open vocabulary models, and a *Consistency Policy* that reduces inference time with an accelerated denoising process. Experimental results demonstrate that Imit-Diff achieves state-of-the-art performance on real-world tasks and outperforms current baselines like ACT and Diffusion Policy." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While semantic injection is the key contribution aimed at improving generalization, the results in Tab 4e do not demonstrate substantial benefits from this method.\n2. Sec 3.3 allocates significant space to describing the consistency policy, yet it primarily introduces an existing policy without addressing specific challenges in applying it to the proposed algorithm. It would be helpful to clarify any unique difficulties encountered in this implementation.\n3. Conducting 20 real-world trials (Tab 4) for the ablation study may lead to high variance. Do the authors have comparable results from a simulated environment?\n4. Despite incorporating the consistency policy, the model remains inefficient." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024imitdiff,\ntitle={Imit-Diff: Semantics Guided Diffusion Transformer with Dual Resolution Fusion for Imitation Learning},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xtp6QPnwLu},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion-based methods have become one of the most important paradigms in the field of imitation learning. However, even in state-of-the-art diffusion-based policies, there has been insufficient focus on semantics and fine-grained feature extraction, resulting in weaker generalization and a reliance on controlled environments. 
To address this issue, we propose Imit-Diff, which consists of three key components: 1) Dual Resolution Fusion for extracting fine-grained features with a manageable number of tokens by integrating high-resolution features into low-resolution visual embedding through an attention mechanism; 2) Semantics Injection to explicitly incorporate semantic information by using prior masks obtained from open vocabulary models, achieving a world-level understanding of imitation learning tasks; and 3) Consistency Policy on Diffusion Transformer to reduce the inference time of diffusion models by training a student model to implement few-step denoising on the Probability Flow ODE trajectory. Experimental results show that our method significantly outperforms state-of-the-art methods, especially in cluttered scenes, and is highly robust to task interruptions. The code will be publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Imitation learning", "Diffusion Policy", "Dual Resolution", "Semantics Injection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/594239541b31c0e7a17b9680565c0f569d4aabf9.pdf" }, "presentation": null, "primary_area": { "value": "applications to robotics, autonomy, planning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/399279d6cc87c49cb62e70aed195c7fd30d31f35.zip" }, "title": { "value": "Imit-Diff: Semantics Guided Diffusion Transformer with Dual Resolution Fusion for Imitation Learning" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xtzqU9FgSi
Is self-supervision enough for training sentence embeddings?
main
Active
self-supervised learning;language models;contrastive learning;transformers;natural language processing
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;3;5
4;4;3
2;2;2
2;2;2
4;2;2
3.666667
3.666667
2
2
2.666667
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "N/A" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper compares multiple self-supervised learning approaches for sentence embeddings, providing insights into the effectiveness of different data augmentation strategies, especially text crops.\n\n- It demonstrates that SSL can achieve sentence embedding quality close to supervised models, suggesting potential for relying more on SSL instead of supervised fine-tuning on large datasets." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates the potential of self-supervised learning (SSL) for generating high-quality sentence embeddings without relying on large, supervised datasets. The authors evaluate various SSL techniques, particularly focusing on text crops as a data augmentation strategy, and find it outperforms traditional dropout-based methods. Their findings suggest that SSL alone can yield competitive embeddings, with performance close to supervised models, and highlight that most improvements stem from generic sentence adaptation rather than domain adaptation. They also emphasize that embeddings can perform well even when trained from scratch or with minimal architecture." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper lacks originality, as most of its observations are already well-known in the research community. Self-supervised learning (SSL) has long been applied for training embeddings, and data augmentation techniques like text cropping and dropout are established as beneficial. The paper does not introduce new techniques or offer novel insights in this area.\n\n- The study does not compare with current state-of-the-art models, such as BAAI's BGE models or commercial models (such as OpenAI and Voyage AI). Although it’s true that these models may be trained on different data, this lack of such results weakens the paper’s contribution. Its conclusions would be far more convincing if the proposed model was trained on the same or similar data as state-of-the-art models. \n\n- Additionally, the evaluation is limited, omitting several popular benchmarks; for instance, many datasets from the MTEB benchmark are not included in the analysis." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Why train using cosine but evaluate using Euclidean?\n2. How was the learning rate chosen? Wouldn’t it be better to tune with a range of learning rates and pick the average (or best if you had a validation set)? How do I know that your dropout results are not poor simply because you selected your learning rate based on what worked best for cropping?\n3. “It is unclear how their [SSL approaches] compares between each other and to the SOTA” (l. 43) — what is unclear and why? It’s phrased in a way where it needs a better explanation, and it’s not actually something you are really examining in this paper?\n4. Why was MPNet chosen as the model to do the experiments with? No justification is given. I’d think it would make the paper stronger if you can show the same results for different models.\n5. Specifically, how does this method perform when you apply it to SBERT rather than MPNet? If it improves SBERT, then this could be a valuable contribution?\n6. What happens in section 4.1 when you use something like CCNews (or something even more out-of-domain) rather than PubMed?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Well written and easy to follow.\n2. Thorough analysis and sound experiments.\n3. Sufficient substance, including detailed appendix." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes fine-tuning sentence embeddings on consecutive text crops sampled from a dataset to improve embedding performance on that same dataset. The fine-tuning objective is InfoNCE with cosine as the similarity metric. The evaluation is done on MTEB clustering tasks, recast as knn accuracy. The results indicate that, for a particular type of model (MPNet), text crops work much better than alternative strategies, such as dropout augmentation. The approach does not perform as well as SBERT. Further analysis leads to claims of generalization and looks at the performance at different layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Overclaiming in the central claim: The title is honestly just completely wrong — the work is not about whether SSL is “enough” for training sentence embeddings. The work, if I put it bluntly, proposes a tried and trusted method in machine learning: tuning on the test set. Please correct me if I’m wrong, but the approach fine-tunes on the same dataset that it is evaluated on, using essentially the same objective, so it is no surprise that the method outperforms its (weak) baseline. I suspect that alternative approaches (eg finetuning pretrained BERT NSP/MLM-style on the same dataset) would also see gains. The authors argue that given that “SSL does not have access to class labels”, there are no “overfitting issues” — this is misguided: the central claim is directly related to “having access” to the test set. It would be really worrying if this method did not see improved performance. 
Why not do K-fold cross-validation to show that the approach holds up generally, with confidence bounds that show that differences in performance are statistically significant?\n\n2. Lack of performance: Despite tuning on the test set, the approach does not outperform SBERT. What is the value of applying this method if I could simply apply SBERT? If the approach really works, couldn’t we apply it to SBERT to get SOTA? That would be valuable to practitioners. Similarly, you could evaluate whether downstream performance (eg in a RAG pipeline) improves if you adapt the embeddings to the domain.\n\n3. Overclaiming in adjacent claims: The first sentence of the discussion “We showed that self-supervised fine-tuning on a minimal amount of data can lead to large improvements in sentence embedding quality” should be rewritten to “We showed that self-supervised fine-tuning on a given dataset improves sentence embedding quality on that dataset”. It is a much narrower claim. It is important to be precise. In l.375-77 “We conclude that the majority of the improvement was due to generic sentence-level adaptation, while domain adaptation had a smaller effect” — it’s still a similar domain, scientific literature clustering? This claim is much too strong. Related to the first point above, if you want to make this claim, you need to provide much more and much stricter evidence.\n\n\nMinor:\n- The correct citations for sentence representations/embeddings would be “SkipThought” (Kiros et al; 2015) and “Distributed Representations of Sentences” (Hill et al; 2016), definitely not Reimers et al." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": ">Note that we purposefully used the entire dataset first for self-supervised training and later for supervised evaluation. As SSL does not have access to class labels, this does not present overfitting issues. For supervised evaluation, we used a 9:1 train/test split (using only labeled data).\n\nDid the self-supervised training use test-split data too? If yes, the setting is a little tricky, and \"As SSL does not have access to class labels, this does not present overfitting issues\" is misleading. Even without access to the class labels, it should be a dataset leakage or a transductive learning setting.\n\n\n>our own results showed that STS performance does not correlate with nearest-neighbor quality\n\nWhere is the result?\n\n\nDoes this self-supervised contrastive approach have any limitations? For example, is there any task where self-supervised contrastive learning may underperform supervised learning in MTEB or other tasks using embeddings?\n\n\nPre-training (e.g., language modeling) is also self-supervised learning. The title is misleading because it is ambiguous. Please consider different titles or wording to clarify it." 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper demonstrates that self-supervised learning alone can produce high-quality sentence embeddings, reducing dependence on supervised data.\n\nIt systematically compares augmentation techniques, highlighting the effectiveness of cropping-based augmentation over traditional dropout methods used in SimCSE." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates whether self-supervised learning (i.e., using the domain corpus only) alone can produce high-quality sentence embeddings without extensive supervised fine-tuning (i.e., using a sentence pair dataset).\nThe authors re-explore the effectiveness of cropping-based augmentation for contrastive learning, demonstrating that this approach performs better than traditional dropout-based augmentation, SimCSE.\nThe paper shows that sentence embeddings can achieve comparable performance with supervised fine-tuning in some embedding tasks.\nKey contributions include a systematic comparison of augmentation strategies and evidence that self-supervision can suffice for strong sentence representations.\nThis work challenges the need for supervised data in sentence embedding and offers insights for more efficient embedding training." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The research question \"Is self-supervision enough for training sentence embeddings?\" is not convincingly answered. While there are various experimental results in the paper, the insights derived from them seem somewhat lacking in coherence and strength with respect to the research question.\n\nWhile the paper provides experimental findings, it lacks deep discussions or analyses about why it happens. For example, although the experimental results about the superior performance of cropping are somehow different from the conclusion in the SimCSE paper [Gao et al. 2021], there is no discussion or investigation about the reason. Gao et al. [2021] show that SimCSE is better than cropping and the next sentence pairing [Logeswaran and Lee, 2018]." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024is,\ntitle={Is self-supervision enough for training sentence embeddings?},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xtzqU9FgSi},\nnote={under review}\n}" }, "abstract": { "value": "In NLP, sentence embeddings are crucial for many tasks such as information retrieval, classification, clustering, or visualizing collections of texts. Currently, top-performing sentence embeddings are derived from pre-trained language models that undergo extensive supervised fine-tuning. This contrasts with computer vision, where self-supervised training has demonstrated remarkable success. Here we show that self-supervision alone can produce high-quality sentence embeddings, albeit slightly below those from state-of-the-art supervised models. We systematically compare several existing augmentation strategies for positive pair generation in contrastive learning and show that text crops strongly outperform popular dropout-based augmentation. 
Using text crops, well-performing embeddings can be obtained even when training from scratch without using pre-trained model weights, or when training a bare token embedding layer without any transformer architecture. Overall, we show that self-supervised learning allows rapid training of text embeddings of a given dataset." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "self-supervised learning", "language models", "contrastive learning", "transformers", "natural language processing" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/46c97eb68ceb4bb16c17de2301fd9297e264978c.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Is self-supervision enough for training sentence embeddings?" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
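The record above centers on one concrete training recipe: take two random crops of the same text as a positive pair and train with an InfoNCE loss over cosine similarities, using the other examples in the batch as negatives. For orientation only, here is a minimal sketch of that generic recipe — the crop length, temperature, and the `encoder` call in the commented usage are placeholder assumptions, not the paper's code.

```python
import random
import torch
import torch.nn.functional as F

def random_crop(token_ids, crop_len=64):
    """Return a random contiguous span of token ids (a 'text crop')."""
    if len(token_ids) <= crop_len:
        return token_ids
    start = random.randint(0, len(token_ids) - crop_len)
    return token_ids[start:start + crop_len]

def info_nce_loss(emb_a, emb_b, temperature=0.05):
    """Symmetric InfoNCE over cosine similarities of two views of the same batch."""
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) pairwise cosine similarities
    labels = torch.arange(a.size(0), device=a.device)   # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Hypothetical training step, assuming `encoder` maps token-id sequences to embeddings:
# view_a = encoder([random_crop(doc) for doc in documents])
# view_b = encoder([random_crop(doc) for doc in documents])
# loss = info_nce_loss(view_a, view_b)
# loss.backward()
```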
xuQSp75HmP
PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions
main
Active
Diffusion Model;Image Generation;Image-to-Image
generative models
5;5;6;6
4;4;4;3
3;3;3;3
2;2;2;2
3;3;3;3
5.5
3.75
3
2
3
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper effectively explores the unification of multiple tasks within a single framework.\n- The proposed model demonstrates strong performance across a wide range of tasks.\n- The method is adaptable to images of varying resolutions." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PixWizard, a visual assistant that performs diverse image generation, manipulation, and translation tasks based on free-form language instructions. PixWizard unifies various vision tasks into a single image-text-to-image framework using a newly created Omni Pixel-to-Pixel Instruction-Tuning Dataset. Built on the DiT architecture, PixWizard supports dynamic resolutions, integrates structure- and semantic-aware guidance, and demonstrates competitive performance and generalization across unseen tasks and instructions, making it a powerful interactive tool for visual tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The reviewer does not have significant concerns but notes a few minor weaknesses:\n- Some implementation details are unclear; For example, in the statement, “In the first stage, we initialize the model by combining the weights of a pretrained text-to-image model with randomly initialized weights for the newly added modules,” it is unclear which model is used for initialization.\n- The paper reads more like an engineering report than a research paper with deep insights. However, it would be valuable if the model, code, and/or dataset were made publicly available.\n- A concluding analysis on how different tasks influence each other within the unified framework would be beneficial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "none" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The current state-of-the-art general image editing methods, Imagic, followed by its speedup and less overfitting open-sourced version Forgedit, is capable of conducting general image editing tasks including the non-rigid editing. 
Is pixwizard capable of conducting non-rigid editing like Imagic and current state-of-the-art Forgedit? If yes, I would like to see some examples compared with Imagic and Forgedit with examples from TEdBench in the revised version. If not, I would like to see some discussions on this issue and how to solve it in the revised version. Imagic and Forgedit require test-time fine-tuning, which costs at least 30 seconds. Is it possible for pixwizard to somehow distill and integrate these methods to tackle the non-rigid editing problem? Considering the time limit of the rebuttal period, I won't require the authors to fix this issue in such a short time. Yet discussions and possible solutions to this problem in the revised paper are compulsory.\n\n\n2. As shown in Table 6, the task-aware dynamic sampler does not bring a significant performance gain. What are the computational savings then? They are not reported in the paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The task studied by this paper is the mainstream trend nowadays: unifying multiple image generation tasks in one DiT model.\n\n2. Solid quantitative experiments demonstrate the generalization capability of the proposed method.\n\n3. The authors designed a DiT version of DynamicViT. The Multi-Hot Gumbel-Softmax is also interesting. The proposed task-aware dynamic sampler, which sparsifies the image tokens to reduce computational cost and utilizes one of the two text encoders for a task-specific embedding, is a brilliant design. \n\n4. Many properties of pixwizard are inherited from the previous work lumina-next, including the pretrained DiT weights and dynamic resolutions, which makes pixwizard a meaningful extension of the lumina-next-t2i DiT model in terms of omni image generation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "PixWizard presents a unified approach to conducting various kinds of image generation, manipulation, and translation tasks with one DiT model. The contributions are:\n\n1. The authors constructed an image-instruction-image triplet instruction-tuning dataset to train this model. \n\n2. The authors proposed Structural-Aware Guidance to fix the structure of the reference image and Semantic-Aware Guidance to learn the instruction-based image to image capability. Inspired by DynamicVit, the authors designed a task-aware dynamic sampler to squeeze the image tokens for different image to image tasks and utilized Multi-Hot Gumbel-Softmax to extract topK tokens in order to decrease computational cost in DiT.\n\n3. PixWizard inherits the dynamic ability of the lumina-next model to handle images of any resolution and aspect ratio. In addition, a two-stage training strategy is designed to improve the performance of the model.\n\n4. Extensive experiments on various datasets and benchmarks concretely verified the generalization of PixWizard." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. One of my main concerns is that pixwizard seems to be incapable of conducting non-rigid editing, which is a very common and important, yet challenging topic in image editing, for example, let the standing dog sit, let the polar bear raise its hand, close the open book, etc. In Figure 1, I cannot find such a common task being presented. 
I doubt from two perspectives whether pixwizard is capable of conducting such non-rigid edits: \n\nfirst, the authors designed the structural-aware guidance, which concatenates the VAE latent with random noise in the channel dimension. This is identical to instructpix2pix, which claims that \"our model is not capable of performing viewpoint changes, can make undesired excessive changes to the image, can sometimes fail to isolate the specified object, and has difficulty reorganizing or swapping objects with each other.\" From my experiments, instructpix2pix completely fails in non-rigid editing. Thus I doubt whether pixwizard inherits such an incapability from instructpix2pix due to the same concat operation.\n\nsecond, the datasets utilized for training the image editing task are strongly biased and do not contain non-rigid editing data. In Section 2, the authors listed the training datasets: UltraEdit (2024), MagicBrush (2024a), GQA-Inpaint (2023), Instruct P2P (2023), SEED-X-Edit (2024), GIER (2020), and HQ-Edit (2024). However, almost none of these datasets contain non-rigid editing data. Instead, these datasets focus on very simple editing tasks, which could simply be solved by inpainting models for adding, deleting and replacing objects or backgrounds, or controlnet+text-to-image to change textures and style transfers (of course, pixwizard integrates all these capabilities in one model and demonstrates comparable or better quantitative results than separate models, which is still a contribution. I just want to convey my personal opinion that these tasks are generally considered simple image editing tasks). These tasks are rather simple in general without changing spatial and structural features. Without training data on non-rigid editing, it is almost impossible for pixwizard to conduct such edits. \n\n2. My second concern is that although the quantitative experiments are solid, the quantitative results in Table 1 show that for many image to image tasks, there is still a significant gap between pixwizard and task-specific models. In Table 2 the image editing results are close, yet the test sets are very biased towards simple editing tasks. In Table 3 the results on controlnet and t2i tasks are close though.\n\n\n3. As shown in Table 6, the gain of the task-aware dynamic sampler and two-stage training strategy is minimal, which weakens the effectiveness of the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper involves a giant dataset, primarily consisting of open-source datasets and self-collected data." }, "flag_for_ethics_review": { "value": [ "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. \"To balance tasks, we assign manual sampling weights to each dataset, randomly selecting data when a weight is less than 1.0.\" - could the authors elaborate on how this is done? Like, what does it mean for a weight to be less than 1, when the weights are manually assigned?\n\n2. 
As one of the major contributions of the paper is the public data, will the processed dataset be released in any form? \n\n3. Which layers of the CLIP image features are used as the conditions, is that the 1D feature after pooling or 2D feature before pooling?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Collecting and preprocessing the Omni Pixel-to-Pixel Instruction-tuning Dataset is a huge effort, which is the largest of its kind to my best knowledge and can be a great contribution to the community.\n\n2. The paper is clearly written with rich details.\n\n3. Extensive evaluation is done to demonstrate the performance of the model on various visual tasks, from image editing to image grounding.\n\n4. Ablation studies are performed to verify the proposed methods, helping to understand the significance of each individual components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a generalist vision model that unifies various image generation and recognition tasks into an image-instruction-to-image problem. To address this problem, they collected a dataset, Omni Pixel-to-Pixel Instruction-tuning Dataset, consisting of 30 million image-instruction-image triplets. Fueled by this dataset, they train a conditional DiT model that takes the conditional image and instruction as the input. The paper shows extensive comparisons against specialized and generalist baselines on various benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the authors claimed \"competitive performance compared to task-specific models and surpass general visual models,\" this is not directly reflected by the results: there is a clear gap in tasks like depth estimation, semantic segmentation, and image grounding with task-specific models. The general visual model also outperforms some of these tasks. The authors should state the contribution in a more clear way.\n\n2. The other contribution claimed in the abstract that was not well-demonstrated in the paper is that \"the model exhibits promising generalization capabilities with unseen tasks.\" How much can the model generalize beyond instruction that it does not see during the training? For example, can the model follow the instruction \"Mark the specified area with a star in red: {caption}\"?\n\n3. The baselines chosen for image generation were not state-of-the-art anymore, e.g., UniControl is not compared, although their dataset (MultiGen-20M) was used.\n\n(Minor) Although some ablation studies are provided to justify the proposed methods, some other choices that seem novel are not well-proved, such as using Gemma-2 instead of T5 as the text encoder and RoPE as the positional embedding." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I tried to outline my concerns in the weakness section above. To sum them up in two main questions:\n\n1) What does this paper bring to the table that doesn’t already exist in prior published work like Emu Edit?\n\n2) What is the extent of generalization that your model actually supports? Can you back this up with experiments?\n\nI appreciate the effort the authors clearly put into the work, so if these are answered in a satisfactory manner, I’ll gladly increase my score.\n\nMinor clarification questions (skip these if you don’t have time):\n\n3) What are the details of the MHGS? How is it implemented?\n\n4) L100 says “We extend the dynamic partitioning and padding scheme to handle input images of any resolution, aligning closely with human perception.” but L279 says these resolution capabilities are inherited from [Zhuo et al., 2024]. Do you mean that Zhou support multiple resolutions but not good enough, and you improve this? Are you extending the scale of resolutions that they support? \n\n5) L298: “we assign manual sampling weights to each dataset” How? Can you describe your criteria?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper does a remarkable job in evaluating against a long range of baselines and across a wide list of tasks. It also demonstrates good performance across most of the tasks, showing that their approach can indeed lead to a fairly general multi-task model.\n\nThe authors also provide a fairly detailed overview of their network, block choices, and the sources and handling of their training data. In the appendix, they further analyze many of their contributions in an ablation study, and report on choices even when they do not meaningfully impact the results (which is good in my eyes, as it lets future work know which components they may want to discard in favor of simplicity).\n\nFinally, the appendix also contains an interesting application for the all-in-one approach, where they use the model to segment parts of the image and then edit them. Such interactive multi-step approaches that use different network skills to improve results can serve as a good reason for exploring multi-task models. Please see more on this in the weakness section below." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose PixWizard, a flow-based model that can tackle a range of visual tasks including text-to-image generation, inpainting and outpainting, as well as a long list of image-to-image translation tasks (sketch-to-image, depth-to-image, denoising, box-drawing based detection, segmentation and more).\n\nThe core of their approach revolves around teaching a pre-trained text-to-image model (Lumina-Next) to tackle multiple novel tasks by conditioning it on additional image input features and features describing the specific tasks the network should handle. 
The authors propose a set of blocks to extract said features and integrate them into the network, and propose a data curation and balancing strategy to ensure the model does not ignore rare tasks.\n\nFinally, the authors conduct a wide range of experiments on many different tasks and demonstrate that their approach is competitive with (and sometimes even improves over) networks trained for each individual task, and also outperforms most generalist network approaches which aim to concurrently tackle many tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My chief concerns with the paper are as follows, ranked from most important to least important:\n\n1) The paper does not appear to distinguish itself from prior published multi-task approaches like Emu Edit [Sheynin et al, CVPR 2024], and it does not actually outperform them. More specifically, Emu Edit already introduced a multi-task approach based on a pre-trained text-to-image model. It already has learned task embeddings, and it shows a range of similar downstream tasks. \nI appreciate that outperforming closed models trained on unknown datasets is difficult, but when I ask myself what I learned from reading this paper that I did not already know from Emu Edit, I have a hard time coming up with an answer, and this is an issue.\n\n2) The paper claims to generalize to unseen tasks (e.g. L26). I may have missed the results that corroborate this claim, but the closest I could find are the zero-shot restoration experiments in the appendix. However, these are close enough to existing restoration tasks that even non-multi-task baselines can generalize well to them and outperform this model? I would have liked to see generalization to actual new tasks, even using inverted task embeddings like Emu Edit. Otherwise, this weakens the contribution.\n\n3) Related to (1) – most of the paper focuses on comparisons to other methods, to the point where ablation is relegated to the end of the appendix. I think ablation is crucial here and should be moved into the core paper, even at the cost of pushing out some experiments, since it is the only section that gives us validation for the specific new ideas introduced by the paper. This method uses a different base model, is trained on different data, and has a significantly different number of parameters from the baselines. I understand that you beat them, but I also want to understand why. \n\n4) There is relatively weak motivation for why multi-task models are needed. Saving some memory compared to specialized approaches is okay, but it's not a significant contribution. Again, this can be contrasted with Emu Edit, which shows that training on multiple tasks can improve each of them individually, and that training on discriminative tasks like segmentation can also improve performance on generative tasks like editing. I had a hard time finding similar motivations here.\n\n5) The paper could be more self-contained. 
Some components (e.g., dynamic partitioning and padding scheme) refer back to recent preprints with no citations, and if these parts are important then they should probably be given a bit more detail (even in the preliminaries in the appendix).\n\n\nMinor issues that did not affect my score:\n\n1) L121: without extra any -> without any extra\n\n2) “Next, we concatenate the image latent with the noise latent along the channel dimension” – Consider citing SR3 [Saharia et al, TPAMI 2022] or Palette [Saharia et al, SIGGRAPH 2022].\n\n3) Table 3 – Bold numbers are missing from the baselines when they win.\n\n4) L483 notes that the method outperforms the inpainting baselines based on FID and LPIPS scores, but the method seems to underperform the baselines on LPIPS?\n\n5) Appendix C.4. – It seems like the gains highlighted here are not related to any change in the proposed paper, but are just due to selecting a flow-matching baseline over a diffusion model? Did I miss something?\n\n6) L1544: the claim that fewer tokens require less inference time should be validated experimentally, because it’s not trivial that the gain here is meaningful. The method uses extra layers to actually select what tokens to drop, and the tokens enter through cross-attention (as opposed to self-attention), which means they scale compute linearly and not quadratically?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024pixwizard,\ntitle={PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xuQSp75HmP},\nnote={under review}\n}" }, "abstract": { "value": "This paper presents a versatile image-to-image visual assistant, PixWizard, designed for image generation, manipulation, and translation based on free-form language instructions. To this end, we cast a variety of vision tasks into a unified image-text-to-image generation framework and curate an Omni Pixel-to-Pixel Instruction-Tuning Dataset. By constructing detailed instruction templates in natural language, we comprehensively include a large set of diverse vision tasks such as text-to-image generation, image restoration, image grounding, dense image prediction, image editing, controllable generation, inpainting/outpainting, and more. Furthermore, we adopt Diffusion Transformers (DiT) as our foundation model and extend its capabilities with a flexible any-resolution mechanism, enabling the model to dynamically process images based on the aspect ratio of the input, closely aligning with human perceptual processes. The model also incorporates structure-aware and semantic-aware guidance to facilitate effective fusion of information from the input image. Our experiments demonstrate that PixWizard not only shows impressive generative and understanding abilities for images with diverse resolutions but also exhibits promising generalization capabilities with unseen tasks and human instructions." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Diffusion Model", "Image Generation", "Image-to-Image" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/578216dad3dd77e9ccd64fb106e955a367f73b57.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xvUVk9T3kZ
Multi Task Inverse Reinforcement Learning for Common Sense Reward
main
Active
multi task learning;reinforcement learning
reinforcement learning
1;1;5;5
4;3;4;4
1;1;2;2
2;1;2;2
3;3;3;1
3
3.75
1.5
1.75
2.5
0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. IQ-Learn [2] showed that they could recover the reward function and should be added as a baseline for comparison. \n2. When the task reward functions are known, then using Eq 6,7 with multiple tasks should recover the common reward function. However, for unknown task rewards (Section 4.4), Eq 8 will try to recover the task reward function. Given that Eq 8 uses the expert demonstration, will it recover the overall reward function? \n3. If point 2 holds, a reward function trained with Eq 9 might not learn anything informative. This might be true because the learned task reward should learn about the sum of rewards from the expert trajectories. Here, I assume that the demonstration is obtained by experts trained with task rewards (Eq 10 /11). \n3. At line 380, how are the trajectories sampled?\n4. In Section 5.2, how will the performance be impacted if the agents cannot recover 100% success rate?\n5. In Section 5.3, the method MT-CSIRL-LT uses a similar mechanism to IRL for learning the reward function and the policy. Since IRL does not recover good reward function, how well does the task reward correlate with the original task reward?\n6. Overall, I feel the experiment section is hard to understand and needs to be explained better about the training and evaluation setups. The results presented in Table 1 are not clear. \n7. As the episodes can terminate as soon as the task is solved, is it possible that the bias in rewards described in [3] leads to agents not following the constraints, and thereby the baselines in Table 2 do not learn desired behaviors?\n\n#### References\n[1] Swamy et al., Of Moments and Matching: A Game-Theoretic Framework for Closing the Imitation Gap, ICML'21\\\n[2] Garg et al., IQ-Learn: Inverse soft-Q Learning for Imitation, NeurIPS '21\\\n[3] Jena et al., Addressing reward bias in Adversarial Imitation Learning with neutral reward functions. Deep RL workshop at NeurIPS'2020." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The problem of recovering reward functions using demonstration is interesting as this will facilitate better transfer and generalization across tasks. \n- The experiments on Meta-World show that the method can recover the reward functions that are common across tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a method to recover a “common-sense” reward using demonstrations across multiple tasks. This reward represents the part of the reward function that may be common across tasks. The paper discusses why existing IRL methods like AIRL, GAIL fail to recover the true reward function upon convergence. For stable learning, a mechanism to scale the rewards using historical averages is proposed. 
Experiments on a few tasks in Meta-World show that the proposed method can extract this reward, and that with the learned common-sense reward the agent can learn new tasks with known task rewards (standard RL) or unknown task rewards (IRL for the task reward)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The motivation is not clear. The point that IRL algorithms (AIRL, GAIL) can fail to recover the true reward function is highlighted multiple times. However, it depends on the formalism, as described in [1]: methods like GAIL take a primal approach, where the learned reward at convergence might not align with the true reward, whereas when solving the dual form of IRL, where the reward function is learned with a no-regret approach, the method should recover a meaningful reward function. So, this might explain why IRL methods trained in the dual form can recover the true reward function while AIRL is not able to learn it.\n- The paper is not well written and is hard to follow, especially the experiments section. \n- According to the paper, the IRL methods do not learn good reward functions. It is not clear why the proposed method still uses this formalism for the scenario with unknown task rewards (Section 4.4)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "I listed the questions in the above section together with weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **[Motivation]**: This paper addresses a compelling and fundamental technical challenge in reinforcement learning: the need to learn common-sense rewards for improved generalization.\n\n- **[Technical Soundness]**: The method is generally sound, with no major technical flaws. Its simplicity facilitates easy implementation and potential extension to other RL domains.\n\n- **[Clarity of Presentation]**: The overall presentation is clear and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a framework that learns disentangled task-specific and task-shared common-sense rewards from multiple tasks using adversarial inverse reinforcement learning (IRL). The approach is simple and straightforward, based on the hypothesis that the reward has an additive structure. The method involves decomposing the reward term of traditional adversarial IRL methods. 
Results from robotic control experiments (Meta-World) indicate that learning from diverse tasks is crucial for capturing common-sense rewards, and the proposed method outperforms approaches that do not incorporate this common-sense reward learning.\n\nOverall, this paper addresses an intriguing and significant challenge in reinforcement learning: how to learn useful common-sense rewards from demonstration data to facilitate generalization and adaptation. However, I think that certain technical aspects of the work, particularly the hypothesis regarding the reward structure, could be explored further to enhance its applicability to more general tasks and strengthen the framework. Therefore, I would assign a borderline rating (slightly negative), but I will consider adjusting this based on the authors' rebuttal and discussions with other reviewers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **[Reward structure]**: The authors assume an additive reward structure, which may be overly simplistic. It would be beneficial to consider more complex functional forms that are representative of real-world scenarios, such as general non-linear functions. Additionally, incorporating noise factors in both common-sense and task-specific rewards—like additive noise, linear non-Gaussian models, or post-nonlinear models with noise—could enhance the framework. A discussion and analysis of these more general function forms, both theoretically (ensuring that disentanglement factors remain identifiable) and empirically (demonstrating that they can be learned within the framework), would be valuable.\n\n- **[Common-sense reward concepts]**: The definition of common-sense rewards in this work is quite basic. Can the framework be scaled to learn more generalizable common-sense rewards, such as a foundational understanding of physical world models or other relevant norms applicable to more complex tasks, like those involving embodied agents? Incorporating language might also be a relevant avenue to explore [1]. This point is more of an open question, but any discussion around it would be appreciated.\n\n- **[Relation to unsupervised RL]** In unsupervised RL, especially mutual-information-based skill discovery, methods can also learn a set of common skills from tasks. It might be beneficial to discuss that line of research and the major benefit of using this framework to learn disentangled common-sense rewards. \n\n- **[Evaluation]**: Is there any theoretical or formal analysis addressing why common-sense reward learning performs poorly in single-task or single-target transfer scenarios compared to multi-task learning?\n\n[1] Zhao, Zirui, Wee Sun Lee, and David Hsu. \"Large language models as commonsense knowledge for large-scale task planning.\" Advances in Neural Information Processing Systems 36 (2024)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "None" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The paper addresses the important problem of learning a policy with learned environmental constraints using MaxEntIRL.\n\n2. The paper is easy to read and communicates the method clearly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes to use multi-task experts to learn environmental constraints via a common sense reward. Then the authors demonstrate that by representing any task's reward function as common-sense reward + task-specific reward, agents can learn quickly and safely." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Motivation: The paper's motivation is based on unsubstantiated claims regarding the capabilities and limitations of IRL. In lines 76-82 the paper claims that \"IRL fails to learn correct reward functions due to its connection to GANs\". IRL is not equivalent to GAIL; rather, GAIL is a way to solve distribution matching and is obtained from a primal-dual perspective. While methods like GAIL do not aim to learn reward functions, there exist methods like f-IRL [2] and MaxEntIRL [3] that explicitly try to learn reward functions. f-IRL has looked at learning \"stationary reward functions\" - reward functions on which the policy can be retrained.\n2. Literature review: I found the literature review to be somewhat shallow. For example, IRL was attributed to Arora and Doshi (2021); IRL existed long before and its origins can be traced back to [1] and possibly earlier. Authors are encouraged to use correct attributions and perform an in-depth literature review.\n3. Unprincipled method: The reward function for any task is assumed to decompose as common sense reward + task-specific reward. Under such a decomposition there is no guarantee that optimizing the task-specific reward results in the same optimal policy as the combination with the common sense reward. Ng et al. [4] have proposed a class of reward functions that result in the same optimal policy.\n4. Assuming known structure of common sense rewards: In the experiments it was presumed that the common sense reward had a fixed structure. I think that is a major limitation as the structure cannot be determined without domain knowledge. \n5. Section 4.3 is titled curriculum learning, but I do not see the connection between the proposed method and curriculum learning. \n6. Experiments: Standard IRL methods like MaxEntIRL and f-IRL are missing from the comparisons.\n\n\n[1]: Abbeel, Pieter, and Andrew Y. Ng. \"Apprenticeship learning via inverse reinforcement learning.\" Proceedings of the twenty-first international conference on Machine learning. 2004.\n[2]: Ni, Tianwei, et al. \"f-irl: Inverse reinforcement learning via state marginal matching.\" Conference on Robot Learning. PMLR, 2021.\n[3]: Ziebart, Brian D., et al. \"Maximum entropy inverse reinforcement learning.\" AAAI. Vol. 8. 2008.\n[4]: Ng, Andrew Y., Daishi Harada, and Stuart Russell. \"Policy invariance under reward transformations: Theory and application to reward shaping.\" ICML. Vol. 99. 1999."
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* For MT-CSIRL transferred onto an unseen task, do you have access to that task’s task-specific reward and demonstrations? Do you do any finetuning for the cs-reward?\n* How are the MT-AIRL methods evaluated in Table 2? Are they trained over all tasks’ demonstrations including the unseen task’s? Do you use task-specific reward for MT-AIRL as well?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "* Many of the ideas presented are interesting and worth exploring: disentangled reward functions, learning common sense rewards from multi-task data" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose to do multi-task inverse reinforcement learning by decomposing the reward function into the sum of a task-specific reward, which they assume is given, and a common sense reward, which is learned and shared between the tasks. Their proposed method uses AIRL to learn a discriminative common sense reward over multiple tasks, then transfers this common sense reward to a new task. The authors also propose an extension of their method that does not require pre-specified task-specific reward by learning the task specific and common sense reward at the same time with two AIRL discriminators. They conduct experiments on a subset of Meta-World tasks to demonstrate that their method requires multi-task training in order to learn an effective common sense reward and compares with other multi task IRL methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Their main method assumes task-specific rewards are given. This is an unrealistic assumption and changes their problem setting from IRL to demonstration-guided RL. In practice their method looks more like learning an auxiliary reward from multi-task demonstrations to supplement the task reward given by the environment.\n* The unknown task reward extension of their method naively learns a task specific and common sense reward jointly without any effort at disentangling the two. This will not automatically separate the reward functions and it’s unclear why this method would be better than learning a combined reward function.\n\nThere are also some serious issues with the experimental setup, which makes the paper’s main claims unconvincing to me.\n* The authors only experiment on selected Meta-World tasks, which are all relatively similar tabletop manipulation tasks. I am not convinced these results are generalizable to other benchmarks and types of tasks.\n* Comparison methods. 
While their method has access to both task-specific rewards and expert demonstrations, the baseline methods seem to only have access to one or the other (reward for SAC, demos for IRL methods), so this is an unfair comparison. Furthermore, they are missing other methods that decompose reward functions, including Reward Network Distillation, which they cite in related work.\n* The definitions of the target velocity and action norm common sense rewards seem contrived, since they are not critical or even necessarily helpful for learning the tasks (since almost all experiments achieve 100% task success). The common sense reward effectively learns a style preference from the demonstrations that’s not present in the task reward function. The setup in Section 5.4, where the algorithm extracts both the task-specific and common sense reward from demonstrations, makes much more sense.\n* Section 5.4 is missing any baseline methods, especially doing AIRL on each task individually, so I am not convinced that MT-CSIRL+LT is effectively decomposing the reward functions and transferring them." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024multi,\ntitle={Multi Task Inverse Reinforcement Learning for Common Sense Reward},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xvUVk9T3kZ},\nnote={under review}\n}" }, "abstract": { "value": "One of the challenges in applying reinforcement learning in a complex real-world environment lies in providing the agent with a sufficiently detailed reward function. Any misalignment between the reward and the desired behavior can result in unwanted outcomes. This may lead to issues like \"reward hacking\", where the agent maximizes rewards by unintended behavior. In this work, we propose to disentangle the reward into two distinct parts: a simple task-specific reward, outlining the particulars of the task at hand, and an unknown common-sense reward, indicating the expected behavior of the agent within the environment. We then explore how this common-sense reward can be learned from expert demonstrations. We first show that inverse reinforcement learning, even when it succeeds in training an agent, does not learn a useful reward function. That is, training a new agent with the learned reward does not impart the desired behaviors. We then demonstrate that this problem can be solved by training simultaneously on multiple tasks. That is, multi-task inverse reinforcement learning can learn a useful reward function." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "multi task learning", "reinforcement learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/3cf6be76ca2377eacc2ad47c349b2a6cb66bff25.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Multi Task Inverse Reinforcement Learning for Common Sense Reward" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xvhV3LvYTc
InstantSplamp: Fast and Generalizable Stenography Framework for Generative Gaussian Splatting
main
Active
Gaussian Splatting;3D Generation;IP Verfication
applications to computer vision, audio, language, and other modalities
3;5;5;8
5;3;4;5
2;3;3;3
2;2;2;3
3;1;2;3
5.25
4.25
2.75
2.25
2.25
0.12666
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- It indeed doesn’t require per-scene optimization, which gives it a time advantage.\n\n- The idea of injecting a watermark directly into the 3D generation model is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "InstantSplamp introduces a fast, scalable framework that embeds hidden information like copyright tags into 3D generative models without additional processing time. Leveraging visual foundation models and cross-attention mechanisms, this approach integrates watermarking directly within the 3D Gaussian Splatting process. Unlike traditional methods requiring per-scene optimization, InstantSplamp minimizes overhead to nearly zero, enabling efficient large-scale deployment of watermarked 3D assets. A U-Net-based decoder recovers the hidden information, balancing visual quality with steganographic fidelity. This innovation addresses scalability challenges in 3D asset generation and protection, optimizing both watermark embedding and retrieval." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In Figures 3 and 4, the 3D assets generated by your method show some artifacts in rendering, and the colors are somewhat distorted. Injecting the watermark affects the visual quality. Although it performs much better compared to StegaNeRF, the impact on visual quality due to watermark injection seems counterproductive.\n- There is no 360-degree visual quality demo, and only two views are provided, which makes it hard to assess the rendering quality of the 3D assets and the quality of watermark extraction. It’s unclear whether the rendering quality of the 3D assets is 3D consistent.\n- From the data in Table 1, the rendering quality of your method is not significantly better than LSB or DeepStega, and there’s no comparison with the latest method, GS-Hider.\n\nGS-Hider: Hiding Messages into 3D Gaussian Splatting" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The results achieved by the proposed method are much better than those of previous methods, especially the hidden-information recovery performance.\n2. The method does not need per-scene optimization, which makes it a generalizable model and thus faster." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a method, named InstantSplamp, to insert watermark information into generated 3D content. The method is generalizable, which means it does not need per-scene optimization, and its time cost is far lower than that of previous methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The notation is not clear; it is hard to understand Figure 2 without it. \n2. It's hard to understand the \"AdaptiveGradientHarmonisation\": the cosine similarity in Eq. (4) seems to be calculated based on all parameters. In this way, the similarity is not a vector value, so what does the mask stand for?\n3. The training is only conducted on one model, i.e., LGM. This limits the applicability. The authors should show more results on different 3D generative models with different representations." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "No ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Does the proposed method exhibit superior steganographic capability compared to existing 3DGS steganography techniques?\n- Is it possible to increase the capacity for embedding additional images within the steganographic framework?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper proposes a generalizable steganography mechanism that avoids additional time costs and modifications to the original Gaussian generation process.\n- The experimental results in the paper demonstrate that the steganography capability of 3DGS surpasses that of similar methods applied in NeRFs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces an end-to-end framework for 3DGS steganography, embedding an image during Gaussian generation and recovering it from a specific rendering via a decoder network. 
In the hiding stage, the framework employs a cross-attention mechanism to seamlessly integrate the hidden image features into the spatial details of the intermediate Gaussian features. In the recovery stage, the decoder network extracts the hidden image exclusively from the rendering of a specific viewpoint. Additionally, an adaptive gradient harmonization technique is introduced, which functions as a masking mechanism, embedding the hidden information within certain model weights to preserve both steganographic ability and the visual quality of the renderings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method is similar to StegaNeRF and lacks sufficient novelty.\n- The experimental baselines are too limited. Notably, an existing method, GS-Hider: Hiding Messages into 3D Gaussian Splatting, already achieves multi-scene information hiding within a 3DGS model.\n- The experiments lack an analysis of steganographic capability, such as different capacity, resistance against steganalysis networks and robustness to additional distortions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See problems mentioned in Weaknesses." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The proposed InstantSplamp framework is highly innovative as it integrates watermarking directly into the 3D generation process, significantly reducing time overhead, making it practical for large-scale deployment. \n2. The methodology leverages visual foundation models and cross-attention mechanisms in a novel way to embed and recover hidden information effectively, while maintaining high rendering quality. \n3. The paper presents strong empirical validation across multiple deployment scenarios, demonstrating the method’s efficiency and generalizability with various 3D objects and modalities, including images, text, QR codes, and even video. \n4. The use of adaptive gradient harmonization to balance rendering fidelity and information hiding represents a practical and insightful solution to a common challenge in steganography, ensuring minimal visual quality degradation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author skillfully integrates the visual foundation models into 3D steganography by leveraging cross-attention mechanisms to embed hidden information during the generation process. The proposed framework optimizes the balance between rendering quality and watermark fidelity, ensuring minimal distortion while preserving the integrity of the embedded data. The author validates the practicality of this approach through extensive experiments on various 3D assets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While Figure 1 illustrates the time efficiency improvements of the proposed method for watermarking, could you provide some quantitative experimental results to further emphasize this point?\n2. The robustness testing only considers two types of corruptions (JPEG compression and Gaussian blur), which seems limited in scope. It would be valuable to include additional forms of corruption, such as noise, scaling, or cropping, for a more comprehensive evaluation. Additionally, a comparative robustness analysis with other state-of-the-art methods is missing, which would provide a clearer understanding of how InstantSplamp performs under various conditions.\n3. How does the proposed method compare with other 3D watermarking approaches targeting binary messages, such as those for NeRF or other 3D representations? Specifically, it would be helpful to see a comparison of performance in embedding and recovering complex information, as well as any advantages InstantSplamp may have over these existing methods." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "InstantSplamp enables instant, efficient watermarking of 3D assets during generation, balancing quality and security for practical, large-scale deployment." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024instantsplamp,\ntitle={InstantSplamp: Fast and Generalizable Stenography Framework for Generative Gaussian Splatting},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xvhV3LvYTc},\nnote={under review}\n}" }, "abstract": { "value": "With the rapid development of large generative models for 3D, especially the evolution from NeRF representations to more efficient Gaussian Splatting, the synthesis of 3D assets has become increasingly fast and efficient, enabling the large-scale publication and sharing of generated 3D objects. However, while existing methods can add watermarks or steganographic information to individual 3D assets, they often require time-consuming per-scene training and optimization, leading to watermarking overheads that can far exceed the time required for asset generation itself, making deployment impractical for generating large collections of 3D objects. To address this, we propose InstantSplamp, a framework that seamlessly integrates the 3D steganography pipeline into large 3D generative models without introducing explicit additional time costs. Guided by visual foundation models, InstantSplamp subtly injects hidden information like copyright tags during asset generation, enabling effective embedding and recovery of watermarks within generated 3D assets while preserving original visual quality. Experiments across various potential deployment scenarios demonstrate that InstantSplamp strikes an optimal balance between rendering quality and hiding fidelity, as well as between hiding performance and speed. Compared to existing per-scene optimization techniques for 3D assets, InstantSplamp reduces watermarking training overheads, which are multiples of the generation time, to nearly zero, paving the way for real-world deployment at scale." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Gaussian Splatting", "3D Generation", "IP Verfication" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/63adacee528c6ef5078ea36bdea99df10c98d356.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "InstantSplamp: Fast and Generalizable Stenography Framework for Generative Gaussian Splatting" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xvsNb5y9CN
Sample-Imagined Generator: Efficient Virtual Sample Generation Method for Off-policy Reinforcement Learning with Sparse Rewards
main
Active
Off-policy Reinforcement Learning;Sparse Reward Reinforcement Learning;Sample Efficiency
reinforcement learning
3;3;3;3
4;4;3;4
1;2;2;2
1;2;2;2
2;1;2;2
3
3.75
1.75
1.75
1.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Why there's a noise in the actual state transfer? Is $\\Delta$ an exploration noise? Is the agent taking action $a_t + \\Delta$? If so, why the next state is computed based on $a_t$ but not the action the agent has taken? \n2. Objective 13: what's $\\mu_t$ and $\\sigma_t$? how are they computed? where does $t$ come from? where is $i$?\n3. Objective 14: what's $\\mu(k)$ and $\\sigma(k)$?\n4. Pseudo code line 13: How are $\\mu_t$ and $\\sigma_t$ computed? By using all the $L_{ssg}$ so far, or in the inner loop? \n5. The ablation is conducted only on the fixed switch. Does fixed length or sample ratio also hurt the performance a lot?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The writing and presentation is mostly clear.\n2. The experiments study both off-policy RL settings and offline-to-online RL settings on 5 off-policy or offline RL algorithms." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes sample imagined generator (SIG), a method to synthesize imagined replay to increase the sample efficiency for the reinforcement learning algorithm. Experiments on 5 continuous control tasks proved SIG can improve the sample efficiency of 5 off-policy algorithms." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Experiments need improvements\n\n 1.1. Important baselines are missing. The author needs to compare SIG against SYNTHER [1], a method focuses on using imagined replay to improve RL's sample efficiency.\n\n\n 1.2. The author needs to compare SIG against REDQ [2], and see if REDQ can be further improved by SIG\n\n\n 1.3. Some claims are not rigorous or have factual errors. The baseline selection is very confusing. \"we combine the state-of-the-art MCAC with SAC, TD3, GAE, OEFD, CQL, ... a total of 10 off-policy algorithms\". First, \"GQE enhances training stability based on SAC by incorporating a sophisticated reward estimation\", GQE is more commonly called GAE instead, and it was published before SAC. It is not an off-policy algorithm. \n\n CQL is an *offline RL* algorithm. OEFD is based on DDPG (DDPG is an off-policy algorithm), but OEFD is more of an algorithm to leverage demonstrations (with state reset, hindsight, q filter, etc). Also combining them with MCAC will not make 10 off-policy algorithms. Furthermore, RLPD [3] is considered the current state-of-the-art approach rather than MCAC.\n\n\n 1.4. The plots in Figure 3 are hard to read. For example, if you make SAC / SAC+SIG in the same color, but in solid / dash lines (same change for other algorithms), it may be better for the readers to tell (1) if SIG is improving (2) how different the performance across algorithms.\n\n2. 
The selection of $H_{sig}$, $R_{sig}$, $T_{sig}$, and other hyperparameters is heuristic-based or somewhat arbitrary. More ablation studies are needed to test how sensitive the method is to hyperparameter choices, e.g., $K$, $f_{less}$, $r_{max}$\n\n[1] Lu et al., Synthetic experience replay\n\n[2] Chen et al., Randomized Ensembled Double Q-Learning: Learning Fast Without a Model\n\n[3] Ball et al., Efficient Online Reinforcement Learning with Offline Data" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Model-based data generation techniques are of great interest to the RL community, making this paper quite relevant.\n1. SIG is paired with a wide range of RL algorithms in the empirical evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Sample-Imagined Generator (SIG), a framework for generating new experience from a learned model to improve the sample efficiency of RL. Empirically, SIG improves the sample efficiency of a wide range of RL algorithms in 5 complex tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I lean to reject primarily because (1) SIG is evaluated across a wide range of RL algorithms but is not compared to any model-based RL baselines such as MBPO [1] \n and PETS [2], and (2) Length-adaptive Trajectories Generation, Ratio-adaptive\nSampling, and Interaction-adaptive Switch Time seem unmotivated and their effect on performance is not ablated. I would like to see ablations on each of these components to understand how important they are for improving sample efficiency. \n\n* Since SIG is not compared to other model-based imagination algorithms, it's difficult to assess whether SIG is improving over existing methods in at least one aspect;\n\n* Eq. 11 essentially says that model rollouts are very short when the SSG loss is large ($L_{ssg}/l_0 \approx 1$ implies that $H_{sig} \approx 0$) and that model rollouts correspond to full trajectories when the SSG loss is 0. However, this does not seem well-motivated; the MBPO paper [1] provides theoretical justification for using shorter model rollouts. Is there empirical evidence motivating the use of longer model rollouts? I'm wondering if performance would improve if exclusively shorter rollouts were used. In other words, does length-adaptation actually improve performance? MBPO also implements a similar linear increase in the trajectory length over the course of training (e.g. see appendix C in [1]). How does SIG's length adaptation relate to MBPO's?\n\n* Eq. 12: SIG integrates more model data into learning as the SSG loss decreases. 
MBPO keeps this quantity fixed throughout training (e.g. 400 model samples generated per environment step). Is there a benefit to linearly increasing the number of model samples generated vs. keeping it fixed?\n\n* SIG improves sample efficiency for the following setups:\n * SAC: Lift, Door, Extraction (3/5 tasks)\n * TD3: Lift (though TD3 + SIG doesn't seem to solve the task), Door, Extraction (3/5 tasks)\n * OEFD: Lift, Door, Push (3/5 tasks)\n * CQL: Lift (1/5 tasks)\n * SM: Lift, Door, Extraction, Push, Navigation (5/5 tasks)\n * TM: Lift, Extraction, Push (3/5 tasks)\n * OM: Lift, Door (2/5 tasks)\n * CM: None (0/5 tasks)\nWhile I'd agree that SIG can improve sample efficiency in some tasks with some algorithms, the paper should discuss why it offers no improvement -- or worse performance -- in other task/algorithm combinations (e.g. CM + SIG and CQL + SIG). \n\n* Figure 3 would be easier to read if curves for <algo> and <algo> + SIG had the same color but differen line styles (e.g. solid vs dashed). Also, figure labels are very tiny and difficult to read. Please use larger font sizes! \n\n\n1. \"This issue can be mitigated by improving the algorithm’s target updating or value estimation method\nto encourage exploration, thereby improving sample efficiency.\" This statement is seemingly disconnected from the previous paragraph. Why should we immediately jump to target updates and value estimation to resolve issues with sample efficiency? What's the motivation?\n\n2. \"This module could verify the rationality of the imagined states\" It's unclear what rationality means here.\n\n[1] When to Trust Your Model: Model-Based Policy Optimization. https://arxiv.org/abs/1906.08253\n[2] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. https://arxiv.org/abs/1805.12114" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- How does SIG compare to state-of-the-art model-based RL methods in terms of performance and sample efficiency?\n- Can you provide results demonstrating SIG's effectiveness in environments with high-dimensional state spaces?\n- What is the computational overhead of implementing SIG compared to standard off-policy RL methods?\n- How does the performance of SIG degrade as the complexity of the environment increases?\n- What are the limitations of SIG, and in what types of environments or tasks might it not be suitable?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Compatibility: The method is designed to work with various off-policy RL algorithms, increasing its potential impact and applicability.\n- Self-validating mechanism: The closed-loop structure of the SSG, including the Action Validation Module, aims to ensure high-quality imagined samples.\n- Adaptive sampling: The SII module's ability to adjust imagined trajectory length and sampling ratio could potentially optimize the use of imagined samples during training.\n- Reduced environmental interactions: SIG aims to achieve comparable or better performance with fewer real environment interactions, which could be valuable in scenarios where interactions are costly or limited." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces the Sample-Imagined Generator (SIG), a method designed to improve sample efficiency in off-policy reinforcement learning (RL) with sparse rewards. SIG consists of two main components:\n- Self-validating Sample Generator (SSG): Generates high-quality imagined samples using three modules:\nState Imagination Module (SIM)\nAction Validation Module (AVM)\nReward Imagination Module (RIM)\n- Self-adaptive Imagination Inference (SII): Adaptively adjusts the length of imagined sample trajectories and the quantity used in policy learning.\n\nThe authors claim SIG can be combined with various off-policy RL algorithms and demonstrate improved sample efficiency across 10 different methods in 5 continuous control tasks with sparse rewards." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of comparison with model-based RL: SIG bears similarities to model-based RL approaches, but the paper fails to compare it with existing model-based methods, leaving its novelty and effectiveness in context unclear.\n- Limited exploration of high-dimensional state spaces: The paper does not address how SIG performs with high-dimensional state spaces, which could be a significant limitation for real-world applications. Generating new states with high-dimensional state spaces (like images) is much more difficult.\n- Insufficient experimental results: While the authors claim to have tested SIG with 10 off-policy RL algorithms across 5 continuous control tasks, this range of experiments is still relatively narrow and may not fully demonstrate the method's robustness and generalizability.\n- Poor presentation: The paper suffers from unclear writing and organization, making it difficult for readers to follow the proposed method and understand its contributions.\n- Lack of computational analysis: The paper does not discuss the computational overhead of implementing SIG, which could be significant due to the additional neural networks and sample generation processes." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Do the authors consider SIG to be a model-based RL approach? \n\n- Although length-adaptive and ratio-adaptive sampling is clear, the interaction-adaptive switch time is not so much. Is this the time when buffer augmentation through imagination starts, in the sense that until then, no imagination happens during training?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "- The experimental set-up covers many complex continuous control tasks with sparse reward. \n\n- The results show that plugging SIG into specific off-policy RL algorithms can improve sample efficiency." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to improve the sample efficiency of off-policy RL algorithms by developing a method called Sample-Imagined Generator (SIG), which generates synthetic rollouts with adaptively varying lengths to replace interactions in the environment with imaginary ones. SIG has two modules: 1) Self-validating Sample Generator module generates imaginary transitions, i.e., states and rewards, and stabilizes imagination through an action validation component. 2) The Self-adaptive Imagination Inference module adaptively adjusts the length of the imaginary rollouts, the ratio of imaginary-to-real samples, and the time to switch from real to imaginary samples. The authors evaluate SIG in 5 continuous control domains with sparse rewards by combining it with ten different off-policy RL algorithms to analyze the benefits of SIG." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper is not well-written. Section 4 is very hard to follow. The authors introduce a lot of modules and components with similar, sometimes confusing names, such as 'Length-adaptive Trajectories Generation.' \n\n- The introduction talks about some existing approaches requiring 'meticulously designed hyperparameters,' yet SIG trains three modules with many hyperparameters: learning rate, network size/depth, covariance on noise, the intervals defined for the reward imagination module, etc. I believe this argumentation is neither fair nor provides a clear motivation for the proposed approach.\n\n- The circular definition in Equation (8) is confusing.\n\n- There are typos: In a lot of places, the hat on a s or r is not placed the letter but the whole character, such as \\hat{s_{t+1}} instead of \\hat{s}_{t+1}.\n\n- Font sizes of labels/titles on figures are tiny, hence hard to read.\n\n- Quantitative results do not indicate a clear sample-efficiency benefit of using SIG with an off-policy RL algorithm. SIG usually achieves similar performance, but sometimes it seems to be slower. \n\n- The paper does not include qualitative results relating to how imagination works or improves during training." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024sampleimagined,\ntitle={Sample-Imagined Generator: Efficient Virtual Sample Generation Method for Off-policy Reinforcement Learning with Sparse Rewards},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xvsNb5y9CN},\nnote={under review}\n}" }, "abstract": { "value": "Off-policy reinforcement learning (RL) requires extensive real interaction with environment to gain experience for policy learning, presenting a challenge of low sample efficiency, especially in the condition of sparse rewards. To address this, we propose a Sample-Imagined Generator (SIG) which automatically trains a sample generator during environment interaction and could adaptively generate valuable imagined samples for policy learning. Through SIG, the policy greatly reduced the interaction with the environment during training and achieved comparable or even higher performance with those trained only through real interactions. SIG could be combined with any off-policy RL algorithm. Experiment in 5 continuous control tasks demonstrate that by substituting imagined samples for real ones to supplement the experience pool, SIG accomplishes tasks with significantly less interaction with the environment, notably improving sample efficiency across 10 off-policy reinforcement learning algorithms." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Off-policy Reinforcement Learning", "Sparse Reward Reinforcement Learning", "Sample Efficiency" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/c707e8d033aa46daba0b60bbafa4bc42327f16a5.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/da763a5e794d455fa8e8b07fca50aaadd58b8ce9.zip" }, "title": { "value": "Sample-Imagined Generator: Efficient Virtual Sample Generation Method for Off-policy Reinforcement Learning with Sparse Rewards" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xw4jtToUrf
Investigating Online RL in World Models
main
Active
World models;Domain Randomization;Offline RL
reinforcement learning
1;3;3;3;5
3;4;3;4;4
2;2;2;2;3
2;2;2;2;3
1;1;3;1;3
3
3.6
2.2
2.2
1.8
0.645497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. \"Each of the baselines is tuned by doing a grid search of the ranges documented in their respective papers:\n- Is the grid search conducted over their original dataset and transferred to the curated dataset? Or is the grid search done on the curated dataset?\n\n2. What would happen if the proposed method is trained and evaluated on the d4rl datasets? In the current paper, we see baseline methods do not perform as well as the proposed method under the paper's settings. The authors argued d4rl dataset is biased towards the baseline methods and provided visualizations. However, it would still be interesting to see how it impacts the proposed method.\n\n3. \"We open source all our code and data to facilitate further work in this exciting direction.\"\n- I couldn't find a link to the code repo, or find any supplementary materials." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Originality:\n- While the approach of using an ensemble of world models to reduce over-fitting and exploitation has been previously studied for model-based RL methods, this work differentiates itself by training entirely within the world model and on full-length roll-outs without any penalty terms inside learnt models.\n\n2. Quality:\n- The experiments are well-designed with clear assumptions and comparisons against multiple baselines. The use of different data scales and detailed ablation studies on the components of their method provides a thorough validation of their claims.\n\n3. Clarity:\n- The paper is well-organized with informative figures and tables. Especially figure 9 and 10, they explained how the curated dataset differs from the d4rl dataset. The writing is generally easy to follow. \n\n4. Significance:\n- The proposed assumptions, settings, and methods are valuable to the RL research community as it shows preliminary positive results on smaller scale world models, which could potentially serve as basis for training RL agents within larger and more capable world models on more complex tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel approach to online reinforcement learning. It is different from the typical approach to online interactions directly with the underlying environment (sim or real). Instead, this work studies RL methods within learned world models, attempting to mitigate common pitfalls of offline RL (reward exploitation, skewed datasets, etc.) 
and avoid the costly and sometimes infeasible samplings directly from the environments.\n\nUnder the settings where authors collected a more uniformly distributed dataset (in terms of state/action coverage compared to D4RL), this work trained PPO from an ensemble of independently trained world models, using Domain Randomization techniques (DR) and Unsupervised environment design (UED) methods. On the curated dataset, the experiment results suggest significant improvements over offline RL methods (CQL & SACn).\n\nOverall, this work is novel and the results presented in the paper offers new insights to training RL agents purely from world models. If there are more experimental evidence from different tasks or environments to support the the claim that \"full roll-out training inside world models is possible\", and clarify the questions I have below, I would be willing to raise the score." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper presents results from multiple tasks (pendulum, half-cheetah, cartpole, hopper), there is a lack of extensive testing across a wider variety of environments and tasks. This raises some questions about the robustness and generalizability of the proposed method beyond the tested scenarios. Perhaps some tasks such as ant-maze or robot arm manipulation ones.\n\n2. The ensembles of world models seems essential to the proposed method. A sweep over the numbers of world models in the ensemble v.s. tasks' performance could reveal more information on how many world models are needed to achieve certain level of task performance.\n\n3. Minor typos:\n- Line 131: This setting Furthermore\n- Line 509: the type (of) increasingly available large-scale datasets" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- Could the authors provide additional experiments on visual offline RL tasks to demonstrate the world model's generalizability in open-world or visual environments?\n- I strongly suggest that the authors further discuss the impact of the number of world models on the experimental results and clarify the necessity of using 100 world models.\n- If possible, please refine the methods section to more clearly highlight the core contributions of the paper." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper presents a novel approach for training RL agents using world models derived from large-scale, uncurated offline data. By employing an ensemble of world models trained on the same dataset and leveraging them to create learning curricula through the Unsupervised Environment Design method, this work introduces a fresh perspective to RL." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper explores the potential of using uncurated offline data to train world models that can serve as a training ground for reinforcement learning. The primary goal is to enable the transfer of learned policies from these world models to the real world, thereby reducing the reliance on task-specific simulation environments. The authors demonstrate that by ensembling multiple independently trained world models, they can achieve robust transfer to the real world, even when the offline datasets are much smaller than those typically used in offline RL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In my view, the main issue with this paper is that it somewhat exaggerates the contributions of the proposed method. The title, \"Investigating Online RL in World Models,\" is slightly misleading, as the study addresses an offline RL problem. Additionally, the term \"world models\" could be confusing. While the paper discusses many existing visual world model approaches (such as Ha and Schmidhuber's NIPS 2018 paper and recent interactive video generation studies), it does not actually work with visual data, instead focusing on fully observable MDPs in low-dimensional state spaces. I suggest the authors consider replacing the term \"world models\" to more accurately reflect the context.\n\n- The organization of the paper is also disjointed, with an imbalanced structure. For example, Chapter 2 uses considerable space for background information, while the methods section in Chapter 3 is relatively brief. This structure makes it challenging for readers to fully grasp the paper's core contributions.\n\n- In methodology, the authors treat world models trained on offline data of varying quality as different levels in an unsupervised environment design approach. I recommend that the authors discuss the motivation and rationale for this choice, explaining why this training method would lead to a robust and transferable policy.\n\n- While the paper outlines an ambitious story, it lacks sufficient experimental support: \n(1) The authors claim to use the D4RL dataset but do not provide comprehensive experimental results across different tasks. The focus on the relatively simple Hopper task is insufficient to support their claim.\n(2) The paper lacks comparisons with recent offline RL methods, as well as more detailed model analysis, such as examining the impact of the number of world models --- The authors mention training 100 world models in line 77, which seems excessive for a simple Hopper task and introduces considerable training overhead, which raises questions about the method's practical use in real-world applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see weaknesses above" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- the paper performs an interesting combination of UED and world models, considering each world model as a level in UED\n- the method doesn't rely on online tuning in the environment (and uses held out world models to tune hyperparameters)\n- the results demonstrate sample-efficiency gains compared to CQL and a vanilla world model baseline" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper combined Unsupervised Environment Discovery (UED) and world model ensembles to provide a method for offline model-based RL that is sample-efficient. It treats different world models as levels within the PLR curriculum method in UED. It evaluates the method on vector-based cartpole, hopper, half cheetah, and pendulum tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The differences with other model-based offline RL methods like MOPO [1], MBPO [2], Planning with Diffusion [3] is not clear, especially since the world model in the experiments in this paper, unlike David Ha's work, is just a simple MLP. \n- It would be helpful to compare with other offline MBRL methods like the ones above as well as world model based approaches like IRIS [4] and Dreamer [5], particularly on more challenging environments with image observations than the vector observation based locomotion environments. Detailing the differences with these world model based methods would also be useful\n- Other papers to discuss in the related work include sample-efficient BC approaches that can work with very few demonstrations like ROT [6] and MCNN [7]. 
In summary, this work could use more comparisons (or atleast discussions comparing) to other works on offline MBRL, world models, and sample-efficient behavior cloning as well as evaluations in more challenging environments with image observations.\n\nMinor comments:\n- the return plotted for different methods is not normalized --- this makes it hard to determine its performance between random and expert and hard to compare with other papers\n- confusingly, \"inside world model\" is referred to as \"simulation\" and \"in simulation\" is referred to as \"real world\"\n- Appendix A.3 is empty\n\n[1] T Yu, et al, MOPO: Model-based Offline Policy Optimization, NeurIPS 20\n\n[2] M Janner, et al, When to Trust Your Model: Model-Based Policy Optimization, NeurIPS 19\n\n[3] M Janner, et al, Planning with diffusion for flexible behavior synthesis, ICML 22\n\n[4] V Micheli, et al, Transformers are Sample-Efficient World Models, ICLR 23\n\n[5] D Hafner, et al, Mastering Diverse Domains through World Models \n\n[6] S Haldar, et al, Watch and match: Supercharging imitation with regularized optimal transport, CoRL 22\n\n[7] K Sridhar, et al, Memory-Consistent Neural Networks for Imitation Learning, ICLR 24" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. What is the size of the collected dataset in terms of transitions?\n2. Could you provide pseudocode for the algorithm to aid understanding?\n3. Why were existing implementations of CQL[1] and SAC_N[2] not used?\n4. There are numerous typos that hinder readability. Such as incorrect citation format (line 51 and 53), \"This setting Furthermore, they showed ...\" (line 131), \"Therefore, the policy’s minimizes ...\" (line 136), \"Prioritized Level Replay as described in with an ...\" (line 305) and so on.\n5. The abstract states that \"training inside world models is usually studied in the context of offline RL.\" However, many online RL algorithms (e.g., MBPO, Dreamer, TD-MPC, IRIS, TWM, STORM) train agents entirely with imagined trajectories (i.e., within world model). On the contrary, most of the offline model-based RL seems to be based on the Dyna framework, that is, using both offline datasets and world model imagination trajectories to train the agent.\n\n[1] https://github.com/young-geng/JaxCQL\n\n[2] https://github.com/Howuhh/sac-n-jax" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The motivation to perform online RL directly in world models is timely and relevant." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper investigates online reinforcement learning directly within world models without conservative constraints typically used in offline RL and proposes to combine ensemble world models and prioritized level replay to tackle this problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method is confusing. What is the $\\delta$ in Equation (5)? Why the Equation (5) approximates the regret? The right side of Figure 1 is difficult to understand. I suggest adding pseudocode to clarify the algorithm.\n\n2. The paper structure and writing need significant improvement. For example, the introduction lacks a clear statement of contributions. The relationship between contextual MDP subsection in Preliminaries and the reset of paper is unclear. The Preliminaries section is too long.\n\n3. The dataset construction method appears similar to D4RL. Both record the encountered transitions from randomness to expertise during training. Why the authors claim that D4RL is adversarial in data coverage (line 55) and CQL and TD3+BC does not inform this (line 263)? The data coverage is a common concern when building offline RL datasets, and previous benchmarks typically offer choices with various data coverage (D4RL and Atari DQN-Replay[1]).\n \n4. The checkpoint frequency seems uniform based on Figure 2, despite claims of heuristic selection. At the 5th ckpt, the agent has basically converged. The distribution perhaps change little after the subsequent sampling. Should the sampling frequency be increased between the first few ckpts?\n\n5. The world model architecture (simple MLP with current state and action as input) seems overly simplistic compared to state-of-the-art approaches like RSSM, Transformer, or diffusion models. Since the authors assumes that learning is done within a generalist world model (line 58 and 87), the experiment results of MLP with no historical trajectory are not convincing. I even suspect that using a single SOTA architecture world model can already solve this problem.\n\n6. Based on Figures 4-6, the DR methods perform well, and the proposed PLR methods do not show clear advantages.\n\n7. Judging from the rendering results of the `ref` and `cite` commands, this paper does not use the ICLR 2025 template!\n\n[1] Agarwal R, Schuurmans D, Norouzi M. An optimistic perspective on offline reinforcement learning. International conference on machine learning. PMLR, 2020: 104-114." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- What is the size of the world model ensemble used in your experiments? What is the effect of the number of world models? \n- What is the additional cost of training and evaluating using an ensemble? \n- How does your method transfer to other benchmarks (e.g., Atari, Procgen, Meta-World, etc.) 
with regard to efficiency at training and inference time?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper investigates an interesting question, that is, whether RL agents can be trained online inside world models, without the need for constraints required in offline RL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the challenge of training online RL models inside world models. For this they rely on an ensemble of world models to train agents in an online fashion, without any offline penalties. The proposed method is evaluated on robotics tasks including Cartpole, Halfcheetah and Hopper." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The presentation of this paper can be improved. For example, Figure 1 is confusing and lacks a detailed figure caption to explain what’s going on. Dataset distributions of offline RL datasets are criticised in the Discussion section, which feels out of place (also as this section comes after Related Work). Instead of bringing these results after the main experiments, they could be seen as a motivation before the main experiments (which would require experiments that support the motivation).\n\nIt is unclear where the proposed method starts and ends, or what the authors consider as their own method. Instead, 6 different variations (PLR, PLR_PVL, DR, DR_STEP, DR_PROB) of ensemble world models are tested against a single world model (WM). \n\nIn the current form, the paper lacks convincing empirical evidence. The main experiments are conducted on toy tasks like Cartpole, Pendulum, Acrobot, Hopper and Halfcheetah. Across most environments (Figure 4, 6, 7), the ensemble methods (PLR, DR, PLR_PVL, DR_STEP) tend to learn faster than a single model (WM) but reach the same level of performance. Furthermore, there is a lack of offline RL baselines (only CQL is provided). Also, there’s a lack of ablations on components of their method. For example, what is the effect of the size of the ensemble? \n\nThe proposed method relies on an ensemble of world models. Training an ensemble of world models is feasible in the toy tasks considered currently, but raises questions on scalability of the proposed methods to more complex environments, such as visual domains. Consequently, reporting information on the additional cost of training and during evaluation of those models would benefit the paper." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We train a set of different world models on a dataset of mixed expertise and use them as levels to train an RL agent" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024investigating,\ntitle={Investigating Online {RL} in World Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xw4jtToUrf},\nnote={under review}\n}" }, "abstract": { "value": "Over the past decade, online reinforcement learning (RL) has made drastic improvements in a number of settings, such as video games and robotics. However, despite these successes, the impact of RL on many *real-world* problems has remained limited. 
Underlying this fact is that, in many settings, we are unable to learn in an online fashion due to excessive cost and safety requirements or lack of an accurate simulator. \nIn principle, foundation world models trained on large-scale uncurated offline data such as internet videos and other modalities could provide a training paradigm for generalist AI agents which alleviates the need for task specific simulation environments. \nUnfortunately, training inside world models is usually studied in the context of offline RL, where popular datasets have a biased structure. This necessitates short roll-outs or other severely limiting mechanisms to prevent model exploitation. \nHere we probe under what circumstances full roll-out training inside world models is possible *without* any penalties.\nWe find that on a non-adversarial offline dataset simply ensembling over a large number of independently trained world models is sufficient to ensure transfer to the real world, even for datasets that are orders of magnitude smaller than is common in offline RL. Interestingly, more sophisticated methods for level selection provide no advantage and standard offline RL methods underperform in this setting." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "World models", "Domain Randomization", "Offline RL" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1a4de14d25d41cc425301e58ebc6600daf3d0762.pdf" }, "presentation": null, "primary_area": { "value": "reinforcement learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Investigating Online RL in World Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xwcCFxIEEL
111
main
Withdraw
11
unsupervised, self-supervised, semi-supervised, and supervised representation learning
Shilin Yan
~Shilin_Yan1
0
0
0
0
0
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": { "value": "@misc{\nyan2024,\ntitle={111},\nauthor={Shilin Yan},\nyear={2024},\nurl={https://openreview.net/forum?id=xwcCFxIEEL}\n}" }, "abstract": { "value": "xx" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Shilin_Yan1" ] }, "authors": { "value": [ "Shilin Yan" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "11" ] }, "large_language_models": { "value": [ "No, not at all." ] }, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": { "value": "xxx" }, "paperhash": { "value": "yan|111" }, "pdf": null, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": { "value": "No" }, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": { "value": "Yes" }, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "111" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xxSK3ZNAhh
HeurAgenix: A Multi-Agent LLM-Based Paradigm for Adaptive Heuristic Evolution and Selection in Combinatorial Optimization
main
Active
Combinatorial Optimization; Heuristic Evolution; Heuristic Selection; Large Language Models
optimization
3;3;3;5;5
4;4;3;4;5
2;2;2;3;3
2;2;3;3;2
2;2;2;2;2
3.8
4
2.4
2.4
2
0.645497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- What is the runtime of the learned approaches?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Research direction**\nThe research direction - using LLMs to create or adapt heuristics for combinatorial optimization - is very interesting and innovative.\n- **Novelty**: \nThe method is unique. While similar studies have recently been published, I consider them concurrent work in this new field." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces a novel approach that uses a large language model (LLM) to implement heuristics for combinatorial optimization problems. The method consists of four main phases: heuristic generation, heuristic improvement, benchmark evaluation, and heuristic selection. In each phase (referred to as an “agent” in the paper), an LLM is used to guide the process. The authors test their approach on six combinatorial optimization problems and report strong performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Reproducibility**:\nA major concern is that the paper does not provide enough detail to make the method fully reproducible. Since the approach has many components, each one needs a clear, in-depth description. However, the paper only describes these components at a high level, making it very difficult, if not impossible, for others to reimplement the method based solely on the information provided. The authors should offer much more detail on each phase.\n2. **Lack of Ablation Studies**:\nAnother significant issue is the lack of ablation experiments. The paper provides limited insights into the inner workings of the method, such as which components are the most critical or which hyperparameters exist and which are the most important. Including an ablation study would help clarify the contribution of each part of the approach and offer valuable guidance for tuning and improving the method.\n3. **Limited Evaluation on New Problems**:\nThe authors only evaluate the method on one new problem (My understanding is that the other problems tested were also used during development of the method). Given the claim that the method can be easily adapted to new problems, it would be more convincing if the evaluation covered a wider variety of problems (e.g., 10–20 problems). Testing on more problems would support the idea that the method can truly learn heuristics for many different combinatorial problems, rather than just performing well on a few hand-selected examples. Considering the simplicity of the tested problems and that only one new problem was used, the paper feels more like an early case study. 
However, because this research direction is so new, this limitation is more of a minor weakness.\n4. **Overstatement of Performance**:\nThe authors state in the abstract that their method “significantly outperforms state-of-the-art approaches.” This claim may be too strong, as the comparison is only against other LLM-based methods, not traditional optimization approaches from operations research. Furthermore, the comparison to other LLM-based methods is only done on one problem (the TSP), so even the claim of outperforming all LLM-based approaches is not fully substantiated." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How does each component impact performance? Which components require problem-specific knowledge, and to what extent do they influence the results?\n2. How many LLM queries are utilized in each component? Please provide examples or estimated numbers for LLM queries used in each component (including the self-correctness and all the other subcomponents), rather than a total number.\n3. How many independent runs are conducted for comparisons on all the tested tasks? Are they use the same intial heuristics? How robust is the performance across different runs? Which main component(s) do you think most significantly impact the robustness of the results and how do you resolve it?\n4. What would be the outcome if the same initial heuristics were used for both the proposed method and the comparison methods, such as EoH and ReEvo?\n5. During the heuristic generation stage, how are reference papers selected, are the papers automatically searched, selected, scaned, and summarized by LLMs, and how many reference papers are utilized for generating initial heuristics in each task?\n6. How are related problems determined? Does it mean that we already use domain knowledge on determine these problems? How many related problems are considered for each task? \n7. In real-world applications, where there are often few or no related papers and existing problems, how is the methodology adapted?\n8. How is the performance of the proposed method on the online bin packing problem, which has been tested on both FunSearch and EoH?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1.The introduction of new components, such as benchmark evaluation and heuristic selection, enhances both performance and efficiency. The efforts to extract knowledge from academic papers and related problems are interesting.\n2. The results have been effectively demonstrated across a variety of combinatorial optimization problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents an LLM-based multi-agent framework named HeurAgenix for automated heuristic design. 
HeurAgenix utilizes LLMs not only for generating heuristics but also for their evaluation and selection. Additionally, it incorporates insights from related academic papers and heuristics applied to similar problems. The experiments are conducted on several combinatorial optimisation problems. The results show effectiveness and promise when compared to existing LLM-based heuristic search methods, such as EoH and ReEvo." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The framework comprises multiple components, some of which require interaction with LLMs tailored to specific problem designs. Although this complex framework might improve performance, it could hinder the generalization across different heuristic design tasks and various problems. The authors have shown applications in some combinatorial optimization problems; however, it would be beneficial if the authors could clearly outline which components necessitate task-specific adjustments and designs, how these effective designs can be implemented, and the extent to which different designs affect performance.\n2. It is recommended to perform an ablation study of each component within the same tasks to clearly ascertain their individual effectiveness.\n3. The paper could benefit from additional clarifications on the methodology (e.g., heuristics generation from reference paper and related problems) and experimental setups to further validate the claimed effectiveness." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- An important research topic.\n- Targeting an emerging research field as a timely addition. \n- The introduction of a novel problem ensures meaningful evaluation.\n- A principled agentic framework is preferred over prior manual LLM+EA designs.\n- Promising empirical performance." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces a novel multi-agent framework, HeurAgenix, that leverages LLMs to solve COPs. HeurAgenix comprises four key agents: heuristic generation, evolution, evaluation, and selection. These agents utilize LLMs' capacities such as autonomous knowledge synthesis, dynamic adaptability, and decision-making. HeurAgenix outperforms state-of-the-art approaches by generating scalable, data-efficient heuristics for both classical CO tasks (e.g., the Traveling Salesman Problem) and novel ones (e.g., Dynamic Production Order Scheduling)." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Some parts of the method seem case-specific rather than principled:\n - How do LLMs learn from reference papers or transfer knowledge from other problems during heuristic generation? Do you have to manually provide the involved knowledge sources and instructions?\n - In DPOSP experiment, how does the agent know the transferability between DPOSP and CVRP?\n - In single-round evolution, how do you perturb the solution? Do you have to manually design a perturbation heuristic in advance to enable the agentic framework?\n\n- Generating features and selecting heuristics at every solution step can lead to substantial latency. However, the solution efficiency of your framework is left undiscussed.\n\n- The overall framework contains many subtle mechanisms that are not sufficiently validated. For example, is the LLM-based smoke test effective? Can and why can LLM select effective heuristics given various complicated features?\n\n- Limitations of your agentic framework are not discussed. The cost and latency are obvious limitations.\n\n- Section 4.3 lacks experimental details. How do you ensure a fair comparison against prior LLM-based HH? For example, during training, do you use the same number of heuristic evaluations for all methods? During inference, do you implement both constructive and improvement heuristics while the baselines only implement the former? If so, does it make sense to also consider comparing runtime?\n\n- API costs and runtime of your method should be detailed.\n\n- Presentation should be improved. E.g., vector graphic is preferred; line 243: Figure3 -> Figure3; line 245: AppendixG.2 -> Appendix G.2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to the weakness section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This manuscript dynamically chooses the most appropriate heuristics for different COP instances." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This manuscript proposes HeurAgenix to generate, evolve, evaluate, and select heuristics for solving Combinatorial Optimization Problems (COPs)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "W1: On Page 2, the authors state that \"However, these approaches still rely heavily on existing approaches\". Yet, on Page 3, they acknowledge that their method also depends on existing heuristics: “the heuristic generation agent generates heuristics from LLM’s internal knowledge, reference papers, or related problems’ heuristics”. 
Please explicitly articulate the key innovations of HeurAgenix compared to previous LLM-based approaches.\n\nW2: The method description is purely verbal, lacking essential mathematical definitions and formulas. This omission makes it difficult for readers to grasp some key steps and hinders reproducibility. For example, what specific conditions determine when a single round of evolution should be performed versus multiple rounds? \n\nW3: On Page 4, the authors state that \"Due to a phenomenon known as hallucinations, directly using LLMs to generate heuristics for new problems often leads to incorrect heuristics.\" I disagree with this characterization. The term “hallucination” typically refers to instances where LLMs generate false or fabricated information. In this context, heuristics that are unexecutable or perform poorly should not be described as instances of \"hallucination\". Please provide a more precise description of the specific issues when using LLMs to generate heuristics.\n\nW4: The inclusion of results for EoH [1] and ReEvo [2] in Figure 8 but not in Figure 6 raises concerns about the consistency of comparative data. \n\nW5: In Section 4, the experiments are conducted exclusively with GPT-4, which raises concerns about the robustness of HeurAgenix when applied to other LLMs. Please evaluate HeurAgenix using multiple LLMs, such as Llama3-70b.\n\nW6: To my knowledge, several deep learning methods have been proposed for selecting heuristics across diverse COP instances (e.g., [3, 4]). Incorporating comparative experiments with these methods would demonstrate the effectiveness of HeurAgenix .\n\nW7: This manuscript omits several important experimental details and leaves key concepts undefined. For instance, the authors do not explain the \"cheapest insertion method,\" which is presented as a baseline in Figure 6. This manuscript contains numerous grammatical errors and incoherent sentences. For instance, on Page 3, the sentence: “Selection hyper-heuristics optimize by selecting the most suitable heuristic from a predefined set to adapt to the current problem scenario.” Additionally, on the same page, the term \"AppendixG.2\". Please check the content of the entire manuscript carefully!\n\n\n[1] Evolution of heuristics: Towards efficient automatic algorithm design using large language model. In International Conference on Machine Learning, 2024.\n\n[2] Large language models as hyper-heuristics for combinatorial optimization, arxiv, 2024.\n\n[3] A novel reinforcement learning-based hyper-heuristic for heterogeneous vehicle routing problem. Computers & Industrial Engineering, 2021.\n\n[4] Selecting meta-heuristics for solving vehicle routing problems with time windows via meta-learning. Expert Systems with Applications, 2019." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. How about the sensitivity of the proposed framework w.r.t. the underlying LLMs?" 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. This paper first combines the multi-agent concept with LLMs-based heuristic generation in solving CO problems.\n2. The experiments are generally extensive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces HeurAgenix, a multi-agent LLM-based framework for adaptive heuristic evolution and selection in solving combinatorial optimization (CO) problems. The framework consists of four key agents: a heuristic generation agent, a heuristic evolution agent, a benchmark evolution agent, and a heuristic selection agent. Experiments are conducted using GPT-4 on various CO problems, including TSP, CVRP, JSSP, MaxCut, and MKP." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This paper appears to involve substantial engineering and manual prompt design, which limits its novelty. As a follow-up to works like Funsearch, EoH, and ReEvo, either innovative optimization paradigms or significant empirical performance gains are expected to meet the bar of top-tier ML conferences.\n2. The proposed framework is complicated, with lots of components. It is unclear which part contributes most to the observed improvements. Ablation studies would greatly clarify the importance of each component.\n3. The empirical results are underwhelming. In Figure 9, why `GLS+EoH` is significantly better than your `GLS+Ours`?\n4. The presentation of the paper could be improved. For example, the figures are visually unappealing, and Page 6 is entirely filled with figures." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024heuragenix,\ntitle={HeurAgenix: A Multi-Agent {LLM}-Based Paradigm for Adaptive Heuristic Evolution and Selection in Combinatorial Optimization},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xxSK3ZNAhh},\nnote={under review}\n}" }, "abstract": { "value": "Combinatorial Optimization (CO) is a class of problems where the goal is to identify an optimal solution from a finite set of feasible solutions under specific constraints. Despite its ubiquity across industries, existing heuristic algorithms struggle with limited adaptability, complex parameter tuning, and limited generalization to novel problems. Recent approaches leveraging machine learning have made incremental improvements but remain constrained by extensive data requirements and reliance on historical problem-specific adjustments. Large Language Models (LLMs) offer a new paradigm to overcome these limitations due to their ability to generalize across domains, autonomously generate novel insights, and adapt dynamically to different problem contexts. To harness these capabilities, we introduce $\\textbf{HeurAgenix}$, a novel multi-agent hyper-heuristic framework that leverages LLMs to generate, evolve, evaluate, and select heuristics for solving CO problems. Our framework comprises four key agents: heuristic generation, heuristic evolution, benchmark evaluation, and heuristic selection. 
Each agent is designed to exploit specific strengths of LLMs, such as their capacity for synthesizing knowledge from diverse sources, autonomous decision-making, and adaptability to new problem instances. Experiments on both classic and novel CO tasks show that HeurAgenix significantly outperforms state-of-the-art approaches by enabling scalable, adaptable, and data-efficient solutions to complex optimization challenges." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Combinatorial Optimization; Heuristic Evolution; Heuristic Selection; Large Language Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/43c288b7d628122148c50f7190a8e991663692e0.pdf" }, "presentation": null, "primary_area": { "value": "optimization" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "HeurAgenix: A Multi-Agent LLM-Based Paradigm for Adaptive Heuristic Evolution and Selection in Combinatorial Optimization" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xxzukMsYs9
3D Object Manipulation in a Single Image Using Generative Models
main
Active
3d object manipulation;diffusion models;image editing;image animation
applications to computer vision, audio, language, and other modalities
3;5;6;8
4;4;3;4
2;3;3;3
1;3;2;3
3;3;4;3
5.5
3.75
2.75
2.25
3.25
-0.160128
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See above." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed method utilizes explicit 3D generation capability to ensure both static and dynamic manipulation. While other 2D-based methods fail to do so.\n- The utilization of HDRi for realistic lighting.\n- Visual quality is good." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces OMG3D, a framework for object manipulation in images that addresses both static editing and dynamic motion generation challenges, while also improving the realism of object appearance and lighting. OMG3D integrates precise geometric control with the generative capabilities of diffusion models, converting 2D objects into 3D for user-directed modifications and lifelike motions. To enhance texture realism, it proposes CustomRefiner, a module that refines textures using a customized diffusion model to match the style and perspective of the original image. Additionally, the authors introduces IllumiCombiner, a module that adjusts background lighting to create more realistic illumination, aligned with human visual perception. The authors conducted extensive experiments show that OMG3D achieves good performance in both static and dynamic applications." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of technical novelty, most of the presented techniques exist. The proposed method seems to put them together nicely to produce a few good results. E.g., CustomRefiner is a combination of depth-controlnet, dreambooth lora, and differentiable rendering on UV map, IllumiCombiner is a combination of HDRi estimation and virtual-plane rendering.\n- The idea of realistic lighting is interesting to me, but I am not convinced by the proposed method. The real light transport is more complex than by linearly modulated two terms. Solving global illumination is a very difficult problem with a single image. The proposed method can only handle simple objects.\n\nMinors:\n- L216: IlluminCombiner should be bold." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Have you tried to insert real humans instead of animation? Want to see that result." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "They can combine precise geometric control.\nThey can handle better texture renderings.\nThey can handle lighting better.\n\nThey offer complete comparison to showcase their results with other state-of-the-art methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for inserting 2D objects into 3D and letting users modify them. The method involves training a customized diffusion model added with a lighting processing module." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "They can try comparison with VSD loss. Or provide more visual examples in the supplementary results." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Fig.4, there are two \"(ours)\". It seems that the one in the 2nd column is a typo.\n- How is c_a in (6) used to correct the color of the estimated lighting? \n- What are I_d and I_c in (7)? How to adjust I_d \"to maintain the object's saturation while ensuring that shadows remain evenly distributed in all direcionts\"?\n- How is the normal vector from the depth map being used in rendering shadows?\n- Is the motion manually defined for the results of Image Scultping? How come the pumpkin jumping example shows lack of motion?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ Compared with Image Sculpting, OMG3D can produce results showing better texture alignment with the original image and achieve realistic light and shadow effects.\n+ The idea of gradient backpropagation to the UV texture map through differentiable rasterization sounds novel.\n+ The idea of estimating a spherical light from the background image and applying it in the rendering pipeline to achieve realistic shading and shadow sounds logical.\n+ The qualitative results look convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors present a framework named OMG3D for object manipulation from images. Similar to Image Sculpting (Yenphraphai et al. 2024), OMG3D first segments the object and reconstruct a mesh in 3D. The mesh can then be mainuplated by 3D manipulation software. 
To address the loss of details in the rendered results, the authors propose a texture refinement module, named CustomRefiner, which performs gradient backpropagation directly to the UV texture map through differentiable rasterization. To achieve realistic lighting and shadow effects, the authors propose a lighting processing module, named IllumiCombiner, which estimates lighting from the background images and renders shadows that align with the scene." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- A large part of the proposed pipeline comes from Image Sculpting. For instance, the \"precise geometric control\" is made possible by object segmentation followed by image-to-3D. The generative enhancement used in driving the UV-texture optimization also appears to be identical to that in Image Sculpting. This makes this work a bit incremental and lowers its novelty. Overall, this work can be regarded as an integration of Image Sculpting (for 3D model manipulation) and DiffusionLight (for introducing light and shadow effects).\n- The key difference between this work and Image Sculpting is the replacement of direct rendered-image enhancement with UV-texture optimization. However, the authors fail to discuss/analyze in detail why UV-texture optimization can produce better results than direct image enhancement.\n- The light processing and plane creation part is not very clear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How is depth control incorporated? Is it added to SDXL during DreamBooth finetuning? What does the denoising function look like?\n\n- How do you deal with multiple planes (e.g., table and ground) in the plane creation process? Since depth from Depth Anything is up to scale, how do you adjust the plane to be in contact with the object?\n\n- How is the material property specified during rendering in the editing stage?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well written and the video demo is well made.\n- The method produces high-quality results, and those can be plugged into existing graphics pipelines relatively easily due to the explicit modeling of geometry, texture and lighting.\n- The authors did a good job in quantitative evaluations and comparisons, including some recent image/video generation methods.\n- The method is lightweight and can run on a single 3090 GPU." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes OMG3D for 3D editing of objects. It takes an image and produces 3D assets with high-quality texture and lighting estimation. Then 3D editing, including motion and shadow effects, can be added in common 3D editing software.
To get high-quality results, it proposes a texture refinement and a lighting estimation module in the image-to-3d stage. The method is compared against image and video generation models using user study and LLM-based evaluations, under which it works better in realism and consistency axis, and works better or on-par in terms of image/text alignment." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Is the comparison to images/video models fair? For instance, the pipeline introduced in this paper involves human editing (e.g., rigging and animation) while the baselines does not. Although I like the fact that the comparisons reflect the final quality, the different level of human involvement can be made clearer and discussed.\n- It would be useful to explicitly state the steps that require manual intervention and differentiate those from the automatic component of the method through a table. For instance, intensity enhancement needs manual adjustment but lighting estimation does not.\n- Lack of ablations and comparison to image-to-3D methods. Since this is main technical contribution, it would be useful to ablate the design choices that are different from existing methods, and report quantitative results showing their individual contributions to the overall performance. For instance, how much does DDIM inversion help, how much does dreambooth customization help, how much does depth controlnet help, and how much does feature injection helps. Ideally, by ablating each of the designs one after another, it can reach a known baseline." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024d,\ntitle={3D Object Manipulation in a Single Image Using Generative Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xxzukMsYs9},\nnote={under review}\n}" }, "abstract": { "value": "Object manipulation in images aims to not only edit the object presentation but also gift objects with motion. Previous methods encountered challenges in concurrently handling static editing and dynamic motion applications, while also struggling to achieve realism in object appearance and scene lighting. In this work, we introduce OMG3D, a novel framework that integrates the precise geometric control with the generative power of diffusion models, thus achieving significant enhancements in visual performance. Our framework converts 2D objects into 3D, enabling user-directed modifications and lifelike motions at the geometric level. To address texture realism, we propose CustomRefiner, a texture refinement module that pretrain a customized diffusion model to align the style and perspectives of coarse renderings with the original image. Additionally, we introduce IllumiCombiner, an lighting processing module that estimates and adjusts background lighting to match human visual perception, resulting in more realistic illumination. Extensive experiments demonstrate the outstanding visual performance of our approach in both static and dynamic scenarios. Remarkably, all these steps can be done using one NVIDIA 3090. The code and project page will be released upon acceptance of the paper." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." 
}, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "3d object manipulation", "diffusion models", "image editing", "image animation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/e5714e7593aae87a66988089441c48d2086f2969.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/8fa5e831c3992a2561ff112f866631aaa53df5ca.zip" }, "title": { "value": "3D Object Manipulation in a Single Image Using Generative Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xy6B5Fh2v7
Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models
main
Active
Retrieval Augmented Generation;Knowledge Conflicts
foundation or frontier models, including LLMs
5;5;5;6
3;4;4;4
3;2;3;3
2;2;2;3
2;3;2;3
5.25
3.75
2.75
2.25
2.5
0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "While the paper presents valuable contributions and strong empirical results, it falls somewhat short of the technical depth and theoretical foundations:\n- Could you elaborate on your choice and implementation of Google Search as the retrieval method: Were there specific criteria for determining which snippets were included in the 10 selected passages? What were the specific reasons for choosing Google Search over other commercial search engines (e.g., Bing)? How did you handle sponsored or advertised content in the search results? \n- How might the results differ with other retrieval approaches? How sensitive is the method to different prompt templates? What is the impact of different parameter settings (e.g., number of iterations, maximum generated passages) on performance and stability?\n- How would Astute RAG perform on more complex tasks beyond short-form QA? Could you provide examples of failure cases when the system might not be appropriate to use?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper presents an advanced approach to RAG systems, with a well-structured methodology for combining internal LLM knowledge with external sources. The three-step process demonstrates careful consideration of the challenges in knowledge integration.\n- The authors provide comprehensive experimental results using state-of-the-art LLMs (Gemini and Claude) and multiple datasets (NQ, TriviaQA, BioASQ, PopQA), offering valuable insights into the method's performance across different scenarios.\n- The work addresses a critical challenge in RAG systems: \"the prevalence of imperfect retrieval and knowledge conflicts\", backed by solid empirical evidence showing that roughly 70% of retrieved passages don't directly contain true answers.\n- The authors' analysis using realistic conditions with Google Search as a retriever provides valuable insights for the research community, particularly in understanding real-world RAG system behavior." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Astute RAG, a novel approach addressing imperfect retrieval and knowledge conflicts in RAG systems. \nThe authors first conduct comprehensive analyses demonstrating that imperfect retrieval is prevalent in real-world RAG applications and identify knowledge conflicts between LLMs' internal knowledge and external sources as a key bottleneck. Authors then proposes Astute RAG, which adaptively elicits internal knowledge from LLMs, performs source-aware knowledge consolidation, and finalizes answers based on information reliability. 
Besides that, the authors demonstrate that Astute RAG outperforms previous robustness-enhanced RAG methods (especially in worst-case scenarios where traditional RAG approaches fail)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- While technically sound, the paper doesn't sufficiently articulate why Astute RAG is particularly advantageous for real-world applications compared to simpler approaches, such as different passage ordering strategies (e.g., chronological, relevance-based, or clustered arrangements), varying quality of knowledge sources (from high-authority to potentially unreliable sources), or different passage selection methods. In fact, the authors state that passages are presented in reversed order in the set of experiments, although no further positional dependence has been explored. Recent RAG studies have shown significant differences between distinct arrangement strategies (e.g., Alessio et al., 2024, Improving RAG Systems via Sentence Clustering and Reordering).\n- The focus on short-form QA tasks limits understanding of the method's broader applicability. In my opinion, retriever improvements do not always translate into proportional gains in final answers, particularly in open-domain questions. In specialized tasks, even small improvements in retrieval can significantly boost final answers.\n- The method's reliance on advanced LLMs with strong instruction-following capabilities significantly limits its applicability. The paper would benefit from addressing how the approach could be adapted for more resource-constrained or smaller specialized language models. Could you discuss strategies for adapting Astute RAG to resource-constrained environments?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "see weakness" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- This paper provides an empirical experimental analysis of the impact of imperfect retrieval and knowledge conflict on the accuracy of RAG systems, thereby validating the significance of its motivation.\n- This paper presents an intuitive approach that, through the careful design of a framework and prompts, can alleviate the issues posed by the aforementioned challenges, thereby enhancing the performance of RAG systems without training." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes Astute RAG, a framework to address imperfect retrieval and knowledge conflict challenges in retrieval-augmented generation.\nSpecifically, Astute RAG consists of three steps: adaptive internal knowledge generation, iterative knowledge consolidation, and answer finalization.\nExperiments demonstrate that Astute RAG outperforms previous robustness-enhanced RAG methods."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Applicability of the method**: The framework proposed by the authors involves a complex pipeline and prompt design, which necessitates that the target model possesses various capabilities. However, the authors do not provide sufficient experimental evidence regarding the performance of current models in these intermediate stages. Furthermore, the experiments were conducted solely on two close-sourced models with undisclosed details. The lack of relevant details prevents readers from assessing the applicability and limitations of the method. I strongly recommend that the authors include experiments using more open-source models and provide analyses of the intermediate processes.\n- **Mismatch between method and experimental design**: In the initial step, the authors employ adaptive generation to extract the model's internal knowledge. However, in the experimental setting, the model is required to generate a maximum of only one passage, which means the design of adaptive generation is not effectively reflected in the experiments. Consequently, the comparison of the number of API calls also lacks significance. Furthermore, I believe that the authors should focus on comparing the total number of API tokens used rather than the number of calls.\n- **Novelty of the method**: Although the authors have designed a new framework, similar ideas have appeared in some prior works (e..g, [1]). This factor limits the novelty of this paper.\n\n[1] Merging Generated and Retrieved Knowledge for Open-Domain QA" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. The relationship between retrieval imperfection and query characteristics needs further investigation. Please consider conducting additional experiments to: (1) analyze how different query types affect retrieval performance, (2) evaluate whether the findings from short-form QA can be generalized to other formats,(e.g. Long-form QA) and (3) compare system performance with varying numbers of retrieved documents (e.g., 10, 20, 50) given that modern LLMs can handle larger context windows. These analyses would provide more comprehensive insights into the retrieval mechanism's limitations and potential improvements.\n\n2. In the manuscript, the author mentioned that there were several cases where the LLM refused to provide answers during the experiments. Could you please clarify how these refusal cases were handled in the results presented in Table 1? It would be valuable if you could provide specific statistics on the frequency of LLM refusals, and describe mitigation strategies implemented to handle these refusal scenarios? Furthermore, it appears that these LLM refusal cases could significantly impact the overall Recall of the question-answering , yet there seems to be a lack of discussion regarding this crucial aspect. 
A more detailed analysis of how these refusals affect the system's performance would strengthen the evaluation section and provide a more comprehensive understanding of the system's real-world performance and limitations.\n\n3. Please elaborate on the potential role of reranking in ASTUTE RAG. Specifically, could you: (1) discuss why reranking was not incorporated in the current implementation, (2) analyze how reranking might help address the performance decrease observed in Table 1 when using RAG, and (3) explore whether reranking could be a promising direction for future improvements in reducing noise and enhancing retrieval relevance. This discussion would provide valuable insights into the design choices of ASTUTE and potential avenues for enhancement.\n\n4. Please maintain consistency in the abbreviation format, specifically for 'Retrieval-Augmented Generation (RAG)' in the first sentence of the abstract and 'Retrieval augmented generation (RAG)' in the first sentence of the introduction." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The research question, \"Is there an effective method to integrate internal and external knowledge for enhancing the reliability of RAG?\", is at the cutting edge of the field and holds substantial practical significance for the RAG community. \n\n2. The essence of this paper is to use the LLM first to conduct self-reflection on the query (generating internal knowledge documents) and then to perform conflict detection against the external evidence, making full use of the LLM's own capabilities and effectively alleviating the conflict between internal and external knowledge. \n\n3. The method enhances the robustness in case of retrieval failure. Even when all retrieved information is noisy, it can effectively ensure the bottom-line performance of the LLM." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper focuses on the problem of imperfect retrieval in RAG for LLMs. It analyzes its prevalence and negative impacts like RAG failures and knowledge conflicts. To address this, ASTUTE RAG is proposed, which adaptively generates internal knowledge, iteratively consolidates knowledge with source-awareness, and finalizes answers based on reliability. Experiments show ASTUTE RAG outperforms baselines in various scenarios, effectively resolving knowledge conflicts." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The performance and efficiency of the method are relatively low. The AstuteRAG proposed by the authors is used in the retrieval (online) stage, where the LLM is made to generate as much internal knowledge Q&A content as possible. This process is rather time-consuming and consumes tokens. Subsequently, iterative knowledge verification is required, which still uses the LLM for conflict detection and paragraph merging. Overall, it is very time-consuming and reduces the practicality of the method.\n\n2. A major potential drawback is that the knowledge consolidation process is completed entirely by the LLM. The conflicting information is grouped according to its consistency, the conflicting pieces are separated, and the LLM is then made to propose answers and assign confidence scores for each group of documents.
Essentially, however, it is still the LLM making judgments based on its own capabilities, and the influence of inherent bias and hallucination cannot be eliminated. A very concerning question is whether the LLM's confidence scores show an obvious bias when there is a conflict between internal and external knowledge. The bias may not necessarily be specifically towards internal knowledge or external knowledge, but rather towards some tokens with a specific high distribution.\n\n3. The authors' verification of the method in this paper is based entirely on benchmarks with definite factual answers, and lacks analysis of some scenarios. For example, both the internal knowledge and the external retrieval content are actually correct, but they form a conflict due to the time range or other specified limiting conditions in the query. From the current method, it is not evident that AstuteRAG is robust in this regard. This is also strongly related to the judgment basis mentioned in Weakness 2." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See Weaknesses above." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "In general, I think it is critical to resolve the imperfect retrieval problem and the knowledge conflict problem for better/more robust RAG, and the proposed pipeline (adaptive internal knowledge generation and iterative consolidation) is helpful for robust RAG. Furthermore, I found that the proposed method is training-free." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors address the imperfect retrieval problem and the knowledge conflict problem in RAG, and propose an AstuteRAG method which first adaptively generates passages using the model's internal parametric knowledge, then consolidates information from both generated passages and retrieved passages by taking their source information into consideration, constructing consistent groups, and addressing the conflicts iteratively. Experimental results show some performance improvements over several RAG baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Both the imperfect retrieval problem and the knowledge conflict problem have been widely recognized in the RAG field (see, for example, Benchmarking Large Language Models in Retrieval-Augmented Generation, AAAI 2024); therefore, it is inappropriate to claim that \"previous studies have rarely connected...\". The authors should provide a more comprehensive review of RAG studies;\n2. There are many previous studies that address the source/confidence-awareness of RAG, iterative RAG, and knowledge-conflict consolidation methods.
Therefore, to claim this paper's contributions, the authors should explain the differences/advantages of their methods over these studies.\n3. Based on the above observations, I found the baselines in this paper are all general RAG baselines rather than RAG methods addressing the noise/conflict RAG problem, which makes the experimental results not convincing enough. I think the authors should also compare with iterative-based, agent-based, and confidence-aware RAG baselines." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024astute,\ntitle={Astute {RAG}: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xy6B5Fh2v7},\nnote={under review}\n}" }, "abstract": { "value": "Retrieval augmented generation (RAG), while effectively integrating external knowledge to address the inherent limitations of large language models (LLMs), can be hindered by imperfect retrieval that contain irrelevant, misleading, or even malicious information. Previous studies have rarely connected the behavior of RAG through joint analysis, particularly regarding error propagation coming from imperfect retrieval and potential conflicts between LLMs' internal knowledge and external sources.\nThrough comprehensive and controlled analyses under realistic conditions, we find that imperfect retrieval augmentation is inevitable, common, and harmful. We identify the knowledge conflicts between LLM-internal and external knowledge from retrieval as a bottleneck to overcome imperfect retrieval in the post-retrieval stage of RAG.\nTo address this, we propose Astute RAG, a novel RAG approach designed to be resilient to imperfect retrieval augmentation. It adaptively elicits essential information from LLMs' internal knowledge, iteratively consolidates internal and external knowledge with source-awareness, and finalizes the answer according to information reliability.\nOur experiments with Gemini and Claude demonstrate the superior performance of Astute RAG compared to previous robustness-enhanced RAG approaches. Specifically, Astute RAG is the only RAG method that achieves performance comparable to or even surpassing conventional use of LLMs under the worst-case scenario. Further analysis reveals the effectiveness of \\method in resolving knowledge conflicts, thereby improving the trustworthiness of RAG." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Retrieval Augmented Generation", "Knowledge Conflicts" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/fa7de8799edddd585935995aa4a9596f802754bb.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Astute RAG: Overcoming Imperfect Retrieval Augmentation and Knowledge Conflicts for Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xy9yv5siYQ
Learning Dynamic 3D Gaussians from Monocular Videos without Camera Poses
main
Active
Dynamic reconstruction;camera pose estimation
applications to computer vision, audio, language, and other modalities
3;5;5;8
4;4;5;4
2;2;3;4
2;2;3;3
2;1;3;4
5.25
4.25
2.75
2.5
2.5
-0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "It would be interesting to add even more ablation studies to understand which parts of the method are more important." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The idea is interesting and important. The topic itself has gained a lot of attention in recent years.\n- The novel idea of hexplane representation together with camera pose initialization with additional optimization is appreciated.\n- Also, using priors such as depth and optical flow is meaningful.\n- The method outperforms other methods on this task, sometimes even methods that assume camera poses.\n- The paper is well-written and easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new method for dynamic scene reconstruction. The main novelty is in using the hexplane representation without known camera poses. Results show superior performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method would have been stronger if it didn't have to assume the given camera intrinsics.\n- More qualitative results would make the paper better." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "please refer to the weakness part." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Promising experimental results demonstrate the effectiveness of this work.\n2. Camera pose estimation with relative initialization and joint optimization is novel.\n3. Splitting dynamic and static objects in scenes for optimization can reduce artifacts in reconstruction." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work presents a framework to efficiently reconstruct dynamic scenes from casually captured monocular videos. Similar to many concurrent works, the main method also uses Gaussian splatting as a 3D representation. In this framework, a camera estimation module is introduced to obtain frame-wise camera poses. 
Deformations of Gaussians are represented using a HEX-Plane representation. Extensive experiments are conducted on datasets including Dychec, NVIDIA, and Sintel." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Limited technical novelty. Combining HexPlane representation with Gaussian Splatting does not seem novel, as many published works have combined TriPlane representation with Gaussian Splatting.\n\n2. Lack of justification for using HexPlane. Although disentangling dynamic and static objects in scenes is sound, the adoption of HexPlane representation is not well-presented. A simple Fourier series, as used in Splatter-a-Video, can also represent Gaussian dynamics in the center position and rotations. While HexPlane introduces significantly more computation and storage overhead, it is vital to justify the use of HexPlane over a simple Fourier series.\n\n3. Although this work is concurrent with similar Gaussian video representations, such as Splatter-a-Video [1] and GFlow [2], discussion on these works is still necessary, as these two works have been publicly available on arXiv for about four months before the ICLR submission deadline.\n\n4. The relative camera pose module is designed only with depth priors, focusing on relative camera movement between two frames. This setting is consistent with that in Dust3R; why not directly apply Dust3R?\n\n5. Evaluation metrics are too limited. As the relative camera poses are initialized and further jointly optimized in this work, why not quantitatively evaluate the camera pose accuracy on the Sintel dataset?\n\n6. Typos exist, e.g. line 080 and line 138.\n\n\n\n\n[1] \"Splatter a Video: Video Gaussian Representation for Versatile Processing.\", NeurIPS 2024 \n\n[2] \"GFlow: Recovering 4D World from Monocular Video\", arxiv 2024" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "It would be helpful if the authors could address the following questions:\n1. What is the main difference between the proposed pairwise camera pose estimation compared to the camera marbles initialization in DGMarbles? It would be helpful to clarify or discuss this in the paper. \n2. The Hexplane representation seems to be one of the main contributions of the paper. If that is the case, it would be helpful to discuss its relation to 4DGS in Section 3.3 and highlight their differences. \n3. Similar to DGMarbles, the proposed approach supervises the motion field with a tracked 2D trajectory from CoTracker. Why is this supervision chosen in opposed to the optical flow that was used in the static regions? Would it bring any difference if both regions were supervised with 2D trajectory or optical flow?\n4. The ablation study only studies the differences in representation. However, the proposed method has a heavy emphasis on loss regularizations, which likely heavily contributed to the improved performance. 
How would the system perform without each regularization term? It would be helpful to include these results to bring intuitions on how and why the chosen Hexplane representation brings merit. \n5. The paper reads to be a carefully engineered system comprised of components from previous approaches. It is difficult to locate the merit or intuitions from the paper. The paper can be further improved if the novelties are highlighted and discussed more in detail." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The problem of focus is clearly motivated. Reconstructing a dynamic scene from a monocular sequence is becoming more important and is a nontrivial task. The proposed pipeline is able to outperform previous methods for most tested scenes. The diagrams are clear and easy to understand. There are enough visuals to illustrate the improved render quality. Thorough baseline comparisons are included to demonstrate the performance improvements." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper is motivated to model dynamic Gaussian Splatting scenes without knowing camera poses. The authors point out that the previous approaches typically separately model the static and dynamic regions, leading to prolonged training time and potentially suboptimal reconstruction. In response to these problems, the authors propose to initialize the camera poses with pair-wise relative camera poses, and a unified representation using Hexplane for modeling the static and dynamic regions together. Depth and optical flow priors are also introduced to regularize the motion further." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Although the proposed approach outperforms existing approaches, it seems that the approach is more like an engineered combination of previous approaches. The initialization of the pairwise camera pose is identical to that of the camera initialization in DGMarbles. The Hexplane representation of a Gaussian scene seems to be inherited from the 4DGS method. Currently, it is hard to identify the main difference of each component in the proposed approach from the previous methods. The paper can be further improved if the merits are highlighted and with detailed discussions to distinguish itself from baselines. \n\nBesides, the paper's claim on not \"disentangling static and dynamic regions using two separate representations\" seems questionable. Although the static and dynamic parts do share the same Hexplane representation, they are still supervised differently, leading to different treatments for the two parts. The approach used a complex combination of loss terms, which are also not ablated in the experiments, so it is hard to know whether these loss terms are necessary or if they could potentially be part of the byproduct of using a Hexplane representation. \n\nOverall the novelty of the approach seems limited without further clarifications. The paper can be potentially further improved by addressing this problem and including more ablation studies on the loss term to promote understanding of the system." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The static Gaussian field and the deformable Gaussian field are optimized separately using mutually exclusive masks, but why do the authors claim that the proposed hexplane-based Gaussian field is a _unified_ representation? From my perspective, 4DGS appears to provide a more ‘unified’ representation.\n2. Is the deformable Gaussian field optimized using only foreground dynamic masks? In other words, only dynamic regions of the deformable Gaussian field are supervised? Then how to ensure that Gaussians in static regions would not be affected by the deformable field when rendering from a novel view?\n3. How are 3D Gaussians initialized before optimizing the Hexplane-based Gaussian field? \n4. Please provide more details on how to minimize the objective defined in Eq. 16.\n5. How are the static regions obtained in each frame?\n6. It would be beneficial to provide a mathematical form of the _rotDeform_ in Eq. 17.\n7. How many datasets are used for evaluation? The authors list three: DyCheck, NVIDIA DynamicNeRF, and MPI Sintel (Line 86-87), but an additional DAVIS dataset is also mentioned (Line 413).\n8. Including relevant citations for the scale-invariant loss and ARAP loss would be helpful.\n9. I am concerned about the qualitative results in Fig. 4, especially the last two columns, which empirically do not align with the quantitative results in Tab. 2. There is an urgent need for additional qualitative comparisons presented in the form of videos.\n\n- Please check the format of citations: \n> When the authors or the publication are included in the sentence, the citation should not be in parenthesis using \\citet{}.\n- Please check the format of notations. For example, it is recommended to format the high-dimensional spatial-temporal feature _in bold_ as $\\mathbf{f}_d$.\n- A non exhaustive list of typos:\n\t- Line 69: vides -> videos\n\t- Line 137-138: files -> fields" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The method explores 3DGS for dynamic scene reconstruction with no pose prior. \n- Comprehensive experiments on diverse datasets demonstrate its effectiveness in dynamic novel-view synthesis as well as camera pose estimation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the task of dynamic scene reconstruction, specifically developing a deformable 3DGS representation from an unposed monocular video. The proposed method first initializes camera poses by optimizing relative poses between adjacent frames via local 3DGS. To learn the global deformable 3DGS, a Hexplane-based encoder is employed to model both the static and dynamic regions in a unified manner. 
The authors evaluate the proposed method on diverse datasets and demonstrate its effectiveness and robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Lack of novelty. The proposed method is more like a mixed bag which combines [1], [2], [3] and [4]. Specifically, the hexplane-based deformable Gaussian field has been explored by [1]. The relative pose initialization is adopted from [2]. The reprojection loss in Eq. 12 is similar to that of [3], while the depth alignment loss in Eq. 13 is similar to the Eq. 15 in [4]. The authors may have to make their contributions more explicit. Please also refer to Q1-2 in the question section.\n- The authors may have to improve the clarity of writing. The paper is somewhat hard to follow due to factors such as missing necessary details (see Q3-6), inconsistencies (see Q7), and missing citations (Q8).\n- The limited qualitative results raise another concern. It would be preferable to include videos or real-world demonstrations as additional supplementary material. Also see Q9.\n\n\n[1] Guanjun Wu, et al. \"4D Gaussian Splatting for Real-Time Dynamic Scene Rendering.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2024.\n\n[2] Yang Fu, et al. \"COLMAP-Free 3D Gaussian Splatting.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2024.\n\n[3] Yu-Lun Liu, et al. \"Robust Dynamic Radiance Fields.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2023.\n\n[4] Jiahui Lei, et al. \"MoSca: Dynamic Gaussian Fusion from Casual Videos via 4D Motion Scaffolds.\" arXiv preprint arXiv:2405.17421. 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024learning,\ntitle={Learning Dynamic 3D Gaussians from Monocular Videos without Camera Poses},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xy9yv5siYQ},\nnote={under review}\n}" }, "abstract": { "value": "Dynamic scene reconstruction aims to recover the time-varying geometry and appearance of a dynamic scene. Existing methods, however, heavily rely on the existence of multiple-view captures or the accurate camera poses estimated by Structure from Motion (SfM) algorithms. To relax this constraint, we introduce a method capable of reconstructing generic dynamic scenes, from casually captured monocular videos without known camera poses. Unlike recent works that treat static and dynamic content separately, we propose a unified Hexplane-based Gaussian field to capture the complex effects of scene deformation and camera motion. The Hexplane decomposition enables feasible disentanglement for effective optimization. Combined with an efficient camera pose initialization strategy, our approach significantly improves view synthesis quality and camera pose estimation accuracy over previous methods, while enhancing computational efficiency." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Dynamic reconstruction", "camera pose estimation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/445e1d2ca51adfc03239c196c7063f59df87e057.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Learning Dynamic 3D Gaussians from Monocular Videos without Camera Poses" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
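The reviews in the record above refer to depth priors used for relative pose initialization and to a scale-invariant depth term. Purely as background, and as an assumption about how such a term is commonly implemented rather than a quote of the paper's code, a monocular depth prior is usually compared against rendered depth only up to an unknown per-frame scale and shift, which can be solved in closed form before penalizing the residual. A minimal PyTorch-style sketch, with the function name and tensor shapes chosen for illustration:

```python
import torch

def scale_shift_aligned_depth_loss(rendered_depth, mono_depth, mask):
    """Align a monocular depth prior to rendered depth up to scale/shift, then L1.

    rendered_depth, mono_depth, mask: (H, W) tensors; `mask` marks valid pixels
    (e.g. static regions). Hypothetical helper, not the paper's implementation.
    """
    d = mono_depth[mask].reshape(-1).float()
    r = rendered_depth[mask].reshape(-1).float()
    # Closed-form least squares for min_{s,t} || s * d + t - r ||^2.
    A = torch.stack([d, torch.ones_like(d)], dim=-1)       # (M, 2)
    sol = torch.linalg.lstsq(A, r.unsqueeze(-1)).solution  # (2, 1): scale, shift
    aligned = (A @ sol).squeeze(-1)                        # (M,)
    return (aligned - r).abs().mean()
```

Solving the scale and shift per frame is what makes such a penalty invariant to the arbitrary scale of monocular depth predictors, which is the property the reviewers' questions about the depth prior hinge on.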
xybTwSsdBP
OptBatch: Optimizing Instruction Tuning with Data Selection through Batch Stratified Sampling
main
Withdraw
data selection;coreset;gradients;instruction tuning;large language model
other topics in machine learning (i.e., none of the above)
run zou;Yifan Ding;Siyu Liu;Jianhang Ding;wenwu;Hao Chen;Beibei Chen;yun lou
~run_zou1;~Yifan_Ding6;~Siyu_Liu7;~Jianhang_Ding1;~wenwu1;~Hao_Chen79;~Beibei_Chen1;~yun_lou2
3;3;5;6
4;4;3;4
2;3;2;4
2;2;2;3
2;2;2;1
4.25
3.75
2.75
2.25
1.75
-0.333333
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": { "value": "Dear Editor,\n\tI would like to express my sincere gratitude to the hardworking staff of your conference for the efforts in reviewing our manuscript. I am deeply sorry to submit a request for the withdrawal our paper to your esteemed conference. We believe that there are some areas in the manuscript that require further improvement, and out of our sense of responsibility towards your conference, we have decided to withdraw our paper after careful deliberation and discussion.\n\tWe understand that this decision may cause inconvenience, and we deeply apologize for any waste of time and resources that may have been incurred. We want to assure you that this decision was made in the best interest of maintaining the scientific integrity of them manuscript, and we do not wish to publish work that does not meet the highest standards of quality. Once again, we sincerely apologize for any inconvenience caused to your conference. Thank you for your understanding and cooperation in this matter." }, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. In most of the figures, what does the \"Training Samples\" means?\n3. What the time complexity of the online batch selection procedure?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. Consistently outperform other baselines across different models, datasets and pruning rates.\n2. Fair computational saving (~30%) to maintaining equivalent loss.\n2. Human judgment for more robust evaluation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduce a novel data selection method for instruction tuning named OptBatch. 
OptBatch proposes an online loss-probability-based stratified sampling algorithm to select batches with higher diversity and uses Hessian gradient optimization to guide the selection of the next batch. Experiments on multilingual translation, QA datasets, and multi-dialogue conversations show that OptBatch can maintain the same loss at a reduced computational cost." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. All the experiments are conducted for one training epoch. Is one epoch best for final performance?\n2. It's unclear whether OptBatch will finally reach a similar loss compared to full-data training. If not, OptBatch may not be suitable for practical use because we may prefer FLOPs savings under the same performance." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Is the loss in Figures 3, 4, 5, 6, 8, and 9 the training loss or the validation loss? If it is the validation loss, how is the validation set built for each dataset? Also, what is the unit of training examples on the x-axis?\n2. Appendix C mentions that Openorca is not suitable due to the potential pre-exposure of Llama3 to the dataset. Can this be addressed by using the base model of Llama3 instead of the instruction model?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The method combines difficulty and diversity to select samples. It is more reasonable than methods that consider only one aspect.\n2. They propose to calculate the distance with the Hessian gradient. The ablation study in Figure 9 shows that the Hessian gradient is better than embedding or gradient norm when calculating the distance.\n3. To measure the response quality, they provide both GPT-4 and human evaluations. They also provide the loss under different compute budgets in Figure 8 to show that the proposed method can achieve the most computational savings among different methods." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an online data selection method for instruction tuning of LLMs that combines difficulty and diversity. To ensure difficulty, they apply stratified sampling with the probability proportional to the exponential of the loss. To ensure diversity, they greedily sample examples to maximize the Hessian gradient distance to existing samples. Experimental results show that the method can achieve lower loss under different pruning rates and better response quality under both GPT-4 and human evaluations." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The settings of human evaluation (the number and qualifications of human annotators, the instructions given to them, etc.) are missing.
From the given details, the human annotators are provided the scoring results of GPT-4, which might weaken the value of human evaluation.\n2. The writing of the paper is poor. Especially the theoretical analysis in Section 3.1 is confusing. I am not sure whether it really supports any part of the algorithm. The authors should consider moving the subsection after the algorithm description and clearly stating which step the theoretical analysis supports." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "I am concerned about the human annotators' background and how they are tasked in the human judgement." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Please refer to Weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- **Originality**: The paper introduces a novel data selection technique, OptBatch, which leverages stratified sampling combined with Hessian gradient optimization to maximize diversity and learnability in batch selection.\n- **Quality**: The authors thoroughly evaluate OptBatch against several baseline methods. Experimental results across multiple datasets demonstrate that OptBatch achieves competitive or superior performance while significantly reducing computational costs.\n- **Clarity**: The paper is generally well-organized and structured, presenting the motivation, methodology, and experimental results in a clear sequence.\n- **Significance**: OptBatch addresses a crucial challenge in instruction tuning for LLMs by reducing training data volume without compromising model accuracy, making it impactful for applications requiring cost-effective scaling." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes OptBatch, a data selection method designed to optimize instruction tuning for large language models (LLMs) by focusing on whole-batch data learnability. The approach uses stratified sampling to ensure coverage of the data distribution, maximizing inter-sample diversity within batches by increasing relative distances between samples. Additionally, it employs Hessian gradient optimization to guide the selection strategy for subsequent batches, enhancing generalization and reducing computational cost by 20-40% without sacrificing model performance. Experiments across tasks such as multilingual translation, dialogue, and question answering show that OptBatch outperforms prior methods, achieving lower loss and improved computational efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Unclear Notation**:\nThe paper suffers from unclear and inconsistent notation, which hampers understanding of the key equations and methods:\n\n - In Equation (1), the notation for $l$ lacks clarity in definition. 
It’s unclear how $l$ is computed, leaving the derivation and insight behind Equation (1) and (2) unclear.\n - In Equation (6), the introduction of the Hessian gradient is confusing. Adam is generally known as a first-order optimization method, so the addition of a Hessian term deviates from standard practice and isn’t adequately justified. Furthermore, the paper inconsistently switches between bold and italic symbols around Equation (6), which adds unnecessary confusion.\n\n2. **Usage of Equation (7)**:\nEquation (7) is insufficiently integrated into the proposed method. The paper fails to clarify how this equation aligns with the broader methodology, making its inclusion feel confusing and extraneous within the current context. More detailed explanations are needed to make this equation’s purpose clear to readers.\n\n3. **Hyperparameter $k$ (Number of Strata)**:\nThe paper lacks a discussion on the hyperparameter $k$, which determines the number of strata. There is no mention of how $k$ is set or how it impacts the performance of OptBatch. Providing insight into $k$’s influence on performance would help clarify how stratified sampling affects the results and guide practical implementation.\n\n4. **Lack of Novelty**:\nThe method is not particularly novel. Similar coreset selection strategies based on clustering have been introduced in prior works, such as in [1], [2] and [3]. The similarity of OptBatch to these clustering-based coreset methods raises questions about the incremental contribution of this approach.\n\n[1] TAGCOS: Task-agnostic Gradient Clustered Coreset Selection for Instruction Tuning Data\n\n[2] GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training\n\n[3] Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "+ Where is the reference or link for the NetLit dataset? Or is it a private dataset? Sorry if I missed any detailed description about NetLit in the paper.\n+ Some papers appeared more than once in the REFERENCES section, e.g., *Less: Selecting influential data for targeted instruction tuning*.\n+ How do the authors think about other ways of maximizing the distances within selected embeddings/vectors? One example could be Diversify and Conquer: [Diversity-Centric Data Selection with Iterative Refinement](https://arxiv.org/abs/2409.11378)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ I appreciate the novelty of the idea.\n+ I personally believe it is important to study data selection methods that consider the diversity of the selected datapoints. 
In my understanding, if we take the gradient-based feature as \"how much new information the datapoint can bring to the model\", OptBatch maximizes \"new information\" by diversifying the gradient-based features of selected datapoints - this motivation makes a lot of sense to me." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes OptBatch for instruction-tuning data selection. The core idea is to take the Hessian gradient as a datapoint's feature and select datapoints by maximizing the distances within their features. To let OptBatch account for both challenging and easy samples, it stratifies datapoints based on loss and calculates each stratum's selection size. The paper showed that OptBatch outperforms several baselines on LLaMaQA, WikiMatrix, and NetLit." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "+ **Evaluations are poor**: In the modern setup of instruction tuning, the loss is NOT indicative of the real performance at all. Commonly, people take only the GPT/Human win-rate (or score) as the metrics for measuring performance (it is kind of a standard way these days).\n + Most evaluations in the paper take loss as the metrics (in both main and auxiliary experiments), and the paper does GPT/Human win-rate (or score) evaluation on ONLY NetLit.\n + In my understanding, LLaMaQA is a standard **open-ended** instruction-response dataset (like WildChat, ShareGPT, Alpaca, etc), so the best way to do evaluation is still GPT/Human win-rate (or score) evaluation instead of metrics like BLEU or ROUGE.\n + Overall, the paper's evaluation is poor and unreliable IMO, which cannot validate the real effectiveness of OptBatch.\n+ **Important design choices are unjustified**: An important design choice of OptBatch is stratifying datapoints based on their losses. Please feel free to point it out if I missed anything.\n + Why do you do that? I saw `methods that prioritize high-loss may be overly influenced` but it is does not explain why you adopt this design choice.\n + There is also no ablation study to prove its necessity.\n + What's the number of K? I think it is an important hyperparameter and even some experimental analysis is needed (at least K=1 is relevant to the ablation study mentioned above).\n+ **Presentation could be improved (not a major concern)**\n + The paper could try to make some motivations and intuitions more clear. For example, The reason why `we use exp(loss) as the selection probability` (line 212) is unclear.\n + In Algorithm 1, the notation usage is confusing.\n + Why is there $\\mathbb{B}$ and $\\mathbb{S}$ in lines 3 and 4, which didn't exist before?\n + It seems that you want to first do some sampling to fix the selected datapoint number in each stratum, so $S$ in line 2&4 is different from line 9&14. If my understanding is correct, it would be better to use different notations. It is very confusing to let $S$ contain both selected datapoints and datapoints in the preliminary sampling process (just used to fix the number of selected datapoints).\n\nMy current overall score for this paper is 3, which is below the acceptance threshold. However, I would be happy to consider increasing my score if the authors can address (even part of) my concerns." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We design a novel coreset selection method that optimizes instruction tuning by considering both data distribution coverage and batch diversity." 
}, "_bibtex": { "value": "@misc{\nzou2024optbatch,\ntitle={OptBatch: Optimizing Instruction Tuning with Data Selection through Batch Stratified Sampling},\nauthor={run zou and Yifan Ding and Siyu Liu and Jianhang Ding and wenwu and Hao Chen and Beibei Chen and yun lou},\nyear={2024},\nurl={https://openreview.net/forum?id=xybTwSsdBP}\n}" }, "abstract": { "value": "Instruction tuning has optimized the specialized capabilities of large language models (LLMs), but it often requires extensive datasets and prolonged training times. The challenge lies in developing specific capabilities by identifying useful data and efficiently fine-tuning. High-quality and diverse pruned data can help models achieve lossless performance at a lower cost. In this paper, we propose \\textbf{OptBatch}, a novel data selection method that focuses on the learnability of whole batch data rather than individual samples. OptBatch considers the coverage of the data distribution through stratified sampling and maximizes the relative distance between samples within a batch to enhance diversity. Furthermore, OptBatch utilizes Hessian gradient optimization to guide the selection strategy for subsequent batches. OptBatch effectively captures the intrinsic value of data curation, surpasses previous state-of-the-art methods, and demonstrates robust generalization performance across diverse downstream tasks and models. Extensive experiments reveal that OptBatch training in various pruning rates outperforms full dataset training, reducing computational cost by 20-40\\%. Additionally, evaluations using GPT-4 scores and other metrics for multi-turn dialogue, multilingual translation and QA tasks consistently demonstrate OptBatch's optimal performance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~run_zou1", "~Yifan_Ding6", "~Siyu_Liu7", "~Jianhang_Ding1", "~wenwu1", "~Hao_Chen79", "~Beibei_Chen1", "~yun_lou2" ] }, "authors": { "value": [ "run zou", "Yifan Ding", "Siyu Liu", "Jianhang Ding", "wenwu", "Hao Chen", "Beibei Chen", "yun lou" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "data selection", "coreset", "gradients", "instruction tuning", "large language model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "zou|optbatch_optimizing_instruction_tuning_with_data_selection_through_batch_stratified_sampling" }, "pdf": { "value": "/pdf/7e5a1ebb6621422024b6e542b197c9b5c65323a7.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "OptBatch: Optimizing Instruction Tuning with Data Selection through Batch Stratified Sampling" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
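The OptBatch record above describes its selection procedure only in prose: stratify the candidate pool by loss, sample within each stratum with probability proportional to exp(loss), and greedily keep samples whose Hessian-gradient features are far apart. The sketch below is a purely illustrative reconstruction of that description; the function name, the quantile-based stratification, and the Euclidean distance between features are assumptions, not the authors' released code.

```python
import numpy as np

def stratified_batch_selection(losses, features, batch_size, num_strata=4, seed=0):
    """Illustrative OptBatch-style batch selection (not the authors' implementation).

    losses:   (N,) per-sample losses under the current model.
    features: (N, D) per-sample gradient-based features (e.g. Hessian-gradient proxies).
    Returns the indices of the selected batch.
    """
    rng = np.random.default_rng(seed)
    n = len(losses)

    # 1) Stratify the pool by loss so both easy and hard samples are represented.
    edges = np.quantile(losses, np.linspace(0.0, 1.0, num_strata + 1))
    strata = np.clip(np.searchsorted(edges, losses, side="right") - 1, 0, num_strata - 1)

    selected = []
    for s in range(num_strata):
        idx = np.where(strata == s)[0]
        if len(idx) == 0:
            continue
        k = max(1, round(batch_size * len(idx) / n))  # per-stratum budget

        # 2) Within the stratum, draw candidates with probability proportional to exp(loss).
        p = np.exp(losses[idx] - losses[idx].max())
        p /= p.sum()
        pool = rng.choice(idx, size=min(3 * k, len(idx)), replace=False, p=p)

        # 3) Greedily keep candidates whose features are farthest from those
        #    already chosen, to encourage diversity inside the batch.
        chosen = [pool[0]]
        while len(chosen) < min(k, len(pool)):
            dists = np.linalg.norm(
                features[pool][:, None, :] - features[chosen][None, :, :], axis=-1
            )
            chosen.append(pool[int(np.argmax(dists.min(axis=1)))])
        selected.extend(int(i) for i in chosen)

    return np.array(selected[:batch_size])
```

In an actual training loop the losses and features would be recomputed online for each candidate pool, which is exactly where the reviewers' questions about the time complexity of the online selection step and about the number of strata (the K hyperparameter) apply.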
xyfb9HHvMe
DSPO: Direct Score Preference Optimization for Diffusion Model Alignment
main
Active
Text-to-image generation
applications to computer vision, audio, language, and other modalities
5;5
5;3
3;3
3;3
2;3
5
4
3
3
2.5
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- In Figure 2, what baseline models were used to calculate the win rate? Is it the pretrained SD1.5 model?\n- During evaluation, did the authors generate images from multiple fixed seeds and average the results over them, or do the results come from a single specific seed?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper takes a different approach to aligning text-to-image diffusion models, motivated by score matching, which sets this method apart from the others.\n- In terms of multiple open-source reward scores, DSPO demonstrates effectiveness in increasing reward values." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a new direct score preference optimization method for diffusion model alignment that utilizes a target human-preferred score function, thereby aligning the fine-tuning objective with the pretraining objective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- In general, I find that many claims are too vague and ambiguous. When we examine the final loss of Diffusion-DPO and DSPO, how can we definitively say that one aligns with the pretrained loss of Stable Diffusion more clearly than the other? Additionally, why is aligning the diffusion model with direct reward optimization or RL considered suboptimal due to a mismatch in pretraining and fine-tuning objectives? Is there any theoretical justification beyond the win rates?\n- In my opinion, since the method is based on human preference, a human evaluation should be conducted to confirm whether it truly increases the reward aligned with human preference. Relying solely on open-source reward model scores seems unreliable, as these models can carry inherent biases.\n- Furthermore, why does the Diffusion-KTO result differ so significantly from the original paper? I think the authors should provide detailed explanations of their evaluation settings, including the seeds used, the number of images generated per method, and other relevant factors. Without this information, the results may appear unreliable." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "It might be better to filter out the Pick-a-Pic dataset to discard the NSFW images." 
}, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Figure 2 could mention the base model on which the respective methods were applied.\n\n* L099 - L101: The authors mention \"... with existing baselines for preference learning in T2I diffusion models.\" However, Figure 2 compares the performance of a single base model on which the respective methods were applied. So, I think it's better to be specific and mention the base model in the statement.\n\n* Equation 12 could benefit from an expansion of the notations used. For example, I don't know where $\\lambda$ is coming from. Furthermore, it'd be beneficial to highlight the score function of the data distribution replacing $p_{ref}$.\n\n* It's not clear how DSPO incorporates $\\mathbf{x}_t^w$. Under Section 4.2, $\\mathbf{x}_t^w$ only appears in Equation 14. \n\n* L091: Typo on \"constraints\". \n\n* SD1.5 is a relatively old model. Since DSPO doesn't consider other recent models like SDXL, SD3, Flux, etc., it's unclear as to how well DSPO generalizes. I can understand that providing further results on SD3 or Flux might be computationally challenging, but I request that the authors at least consider SDXL experiments. Additionally, LoRA fine-tuning (similar to how DPOK [1] does it) when doing DSPO for larger models like SD3 and Flux might help them quickly evaluate its potential better. \n\n* Are there any sample-efficient aspects of DSPO? More specifically, I am interested to see if using the score-matching perspective of alignment fine-tuning like DSPO does can improve alignment with fewer samples than other methods.\n\n* The authors could also consider using human-benchmark arenas such as imgsys [2] for evaluation. \n\n* To assess the practical aspects of DSPO, it would be useful to report the wall-clock time and memory requirements of DSP and compare them against the existing methods. \n\n**References**\n\n[1] DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models; Fan et al.; 2023.\n\n[2] imgsys; fal.ai team." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The score-matching formulation for alignment fine-tuning of diffusion models hasn't been explored before, and the paper does a good job of exploring this direction. \n* The connection between the objective covered by the RLHF methods for diffusion models and DSPO." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces DSPO, presenting a score-matching formulation for fine-tuning pre-trained diffusion models on human preference data. The authors argue that since existing preference alignment fine-tuning methods have a different objective than the pre-training objective, it can lead to sub-optimal results, and they demonstrate this with empirical evidence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper misses out on using MaPO [1] as a reasonable baseline even though it considers contemporary works like Diffusion KTO. 
The reason why I think MaPO is important here to consider is because it has similar motivations and also either performs on par with Diffusion DPO or outperforms it under various settings. \n* Lack of experimental results on models like SDXL makes it unclear as to how scalable DSPO is and if it works for models other than SD v1.5. \n* The ablations lack experiments on some of the design choices the authors make to arrive at the final objective of DSPO. For example, they use the direct score function of the underlying data distribution as opposed to using that of $p_{ref}$, but they don't justify it with sufficient experimental results. \n* Pick-a-Pic v2 contains duplicate prompts. Did the authors perform any de-duplication? If not, I think it might be better to run at least a few experiments with de-duplication to check if this improves the results.\n\n**References**\n\n[1] Margin-aware Preference Optimization for Aligning Diffusion Models without Reference; Hong et al.; 2024." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dspo,\ntitle={{DSPO}: Direct Score Preference Optimization for Diffusion Model Alignment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xyfb9HHvMe},\nnote={under review}\n}" }, "abstract": { "value": "Diffusion-based Text-to-Image (T2I) models have achieved impressive success in generating high-quality images from textual prompts. While large language models (LLMs) effectively leverage Direct Preference Optimization (DPO) for fine-tuning on human preference data without the need for reward models, diffusion models have not been extensively explored in this area. Current preference learning methods applied to T2I diffusion models immediately adapt existing techniques from LLMs. However, this adaptation introduces a mismatch between the pretraining and the fine-tuning objectives specific to T2I diffusion models. This inconsistency can potentially lead to suboptimal performance. In this work, we propose Direct Score Preference Optimization (DSPO), a novel algorithm that aligns the pretraining and fine-tuning objectives of diffusion models by leveraging score matching, the same objective used during pretraining. It introduces a new perspective on preference learning for diffusion models. Specifically, DSPO distills the score function of human-preferred image distributions into pretrained diffusion models, fine-tuning the model to generate outputs that align with human preferences. We theoretically show that DSPO shares the same optimization direction as reinforcement learning algorithms in diffusion models under certain conditions. Our experimental results demonstrate that DSPO outperforms preference learning baselines for T2I diffusion models in human preference evaluation tasks and enhances both visual appeal and prompt alignment of generated images." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Text-to-image generation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5d93f82732c1b2f3c8201f7413781c508a24dc70.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/baa0317e43d01273fbfa8b3b61c6ccf643d75d45.zip" }, "title": { "value": "DSPO: Direct Score Preference Optimization for Diffusion Model Alignment" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
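The DSPO reviews above repeatedly contrast the proposed fine-tuning loss with "the pretraining objective" of the base diffusion model. For reference, the standard epsilon-prediction (denoising score-matching) loss used to pretrain text-to-image latent diffusion models such as SD1.5 has the form below; the abstract's claim is that DSPO keeps preference fine-tuning in this same score-matching form, distilling a human-preferred score function into the model rather than switching to an RL- or DPO-style surrogate. The exact DSPO objective is not reproduced here.

```latex
\mathcal{L}_{\text{pretrain}}(\theta) \;=\;
\mathbb{E}_{(x_0,\, c),\; \epsilon \sim \mathcal{N}(0, I),\; t}
\Big[ \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar{\alpha}_t}\, x_0
      + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\; c,\; t\big) \big\|_2^2 \Big]
```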
xyysYa4YvF
Interpretable Boundary-based Watermark Up to the condition of Lov\'asz Local Lemma
main
Active
Watermark;Model extraction attacks;Intellectual property protection
alignment, fairness, safety, privacy, and societal considerations
1;5;6
5;5;3
2;4;3
1;3;3
2;3;4
4
4.333333
3
2.333333
3
-0.654654
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How sensitive is the method to the various hyperparameters, e.g. number of perturbed models s, perturbation range δ, boundary thresholds a and b? Guidelines for setting them would help practitioners.\n- The theoretical guarantees require satisfying the Lovász Local Lemma constraints on watermark-related parameters. How difficult is this to achieve in larger scale model like VIT-high or SigCLIP? Are there techniques to guide the optimization of the α values?\n- The results focus on CIFAR and ImageNet with a ResNet architecture. How well does the method generalize to other datasets and tasks? Additional results there would strengthen the work." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Novel boundary-based trigger selection strategy that optimizes distinguishability between benign/stolen models\n- Theoretical analysis proving guarantees under Lovász Local Lemma constraints on watermark-related parameters\n- Strong empirical results on CIFAR-10/100 and ImageNet demonstrating state-of-the-art trigger accuracy and p-values\n- Ablations showing the effectiveness of the proposed trigger selection and labeling approach\n- Well-written and clearly presented, with good coverage of related work" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a novel boundary-based watermarking method to protect deep neural networks against model extraction attacks. The authors decompose the probability of successfully identifying a stolen model into the trigger set accuracy and probability that each trigger can differentiate models. Their method optimizes both components." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Theoretical guarantees rely on achieving the Lovász Local Lemma parameter constraints, but it's unclear how difficult this is in practice or how to set the α values. Also, other hyperparameter sensitivities and computational costs are not deeply explored, and more ablation studies are needed to prove this idea.\n- Limited evaluation of large-scale datasets and widely-used production models. The paper's experiments focus primarily on CIFAR-10, CIFAR-100, and ImageNet datasets with ResNet34 and VGG11 classifier architectures. However, it lacks evaluation on much larger scale datasets such as LAION or ImageNet-21K, which would further demonstrate the method's scalability and robustness, and also closer to real-world scenarios. Additionally, the paper does not test the proposed watermarking method on widely used production models like CLIP or SAM (Segment Anything Model).\n- Some low-level methodological details are lacking, e.g. 
how exactly are boundary samples selected, how are labels assigned when multiple have the same low probability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "- How would this method hold up against attackers who try to avoid boundary-based watermarks specifically?\n\n- How does the computational cost of this method compare to other watermarking techniques?\n\n- How realistic is it to use this approach in real-world applications where resources might be limited?\n\n- Did the authors try their method with larger networks and other architectures, such as ResNet-101, ResNet-152, DenseNet, ConvNeXt, MobileNetV2, and VGG?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The paper is well-written, well-organized, and easy to follow, which makes the contributions and results accessible to readers.\n\n- Model extraction is a relevant issue for DNNs in production, making this approach practical and valuable.\n\n- The paper introduces a boundary-focused approach that addresses limitations in previous watermarking methods, providing a robust solution against model extraction attacks.\n\n- The use of the Lovász Local Lemma gives theoretical backing, strengthening the reliability of the watermark and adding rigor to the approach.\n\n- The method is tested on CIFAR-10, CIFAR-100, and ImageNet, demonstrating its generalizability and effectiveness across multiple datasets and outperforming existing techniques." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel boundary-based watermarking technique for protecting neural networks from model extraction attacks. Previous watermarking approaches rely on randomly selected trigger sets, which may fail to differentiate between benign and stolen models due to ambiguous trigger points. This method instead selects boundary samples as triggers, assigns them rare labels, and applies the Lovász Local Lemma to achieve a theoretically tight bound that guarantees watermark efficacy. Experimental results on CIFAR-10, CIFAR-100, and ImageNet datasets show that this approach outperforms state-of-the-art techniques in both trigger set accuracy and p-value tests, enhancing its ability to identify stolen models." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The process involves multiple perturbations, decision boundary identification, and label selection, which may introduce computational overhead or complexity in real-world deployments.\n\n- While the method is robust for certain types of attacks, the paper does not fully address how it might respond to adaptive adversaries who could circumvent boundary-based triggers.\n\n- The paper doesn’t explore how well this method would work with very large models or different architectures, which could affect its scalability.\n\n- The method may need a lot of computational resources, which could make it difficult to deploy in practical settings." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Why did the authors choose not to compare their results with the already published work? A brief comparison shows that the method proposed in the paper yields both worse trigger set accuracy and worse benign accuracy. \n2) Does the method work in the case of other model extraction attacks?\n3) What is known about the FPR of the method? How can one guarantee that the method does not detect a fingerprinted model as a non-fingerprinted one?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper is accurately written and provides some experimental results on the model fingerprinting task. The proposed method relies on the construction of perturbed models to generate and certify the trigger set, which is known to be effective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a watermarking approach to defend the intellectual property of deep neural networks against model extraction attacks. The method relies on a two-step procedure – generation and certification of trigger sets. The method is compared to several baselines; the efficiency of the method is illustrated against distillation-based model extraction attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The idea to use perturbed models for trigger set generation and certification is not new, which notably limits the novelty of the paper. \n\n2) The authors listed several other watermarking approaches but did not choose them for comparison (for example, see Mikhail Pautov et al., Probabilistically robust watermarking of neural networks, IJCAI-2024). \n\n3) The method leads to notable degradation of the performance of the fingerprinted model even on simple datasets (CIFAR10/100), making the feasibility of the approach questionable.
\n\n4) The method is tested only against distillation-based attacks; only models of the same architecture are considered for inclusion in the perturbed set; overall, this leaves doubts about its effectiveness against other extraction attacks.\n\n5) Crucially, the paper provides no study or results on the false-positive detection of benign models. If a non-fingerprinted model is often detected as fingerprinted, it indicates an inappropriate choice of the trigger set. \n\n6) No code is provided. \n\n\n\nThe paper has a high degree of similarity to previously published work, yet provides no comparison with it. I doubt that the paper brings enough novelty: the idea is known, and the experimental results are notably below the state-of-the-art ones." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024interpretable,\ntitle={Interpretable Boundary-based Watermark Up to the condition of Lov{\textbackslash}'asz Local Lemma},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xyysYa4YvF},\nnote={under review}\n}" }, "abstract": { "value": "Watermarking techniques have emerged as pivotal safeguards to defend the intellectual property of deep neural networks against model extraction attacks. Most existing watermarking methods rely on the identification of samples within randomly selected trigger sets. However, this paradigm is inevitably disrupted by ambiguous points that exhibit poor discriminability, thus leading to misidentification between benign and stolen models. To tackle this issue, in this paper, we propose a boundary-based watermarking method that enhances the discernibility of the trigger set, further improving the ability to distinguish between benign and stolen models. Specifically, we select trigger samples on the decision boundary of the base model and assign them the labels with the least probabilities, while providing a tight bound based on the Lov\'asz Local Lemma. This approach ensures the watermark's reliability in identifying stolen models by improving the discriminability of trigger samples. Meanwhile, we provide a theoretical proof to demonstrate that the watermark can be effectively guaranteed under the constraints guided by the Lov\'asz Local Lemma. Experimental results demonstrate that our method outperforms the state-of-the-art watermarking methods on the CIFAR-10, CIFAR-100, and ImageNet datasets. Code and data will be released publicly upon paper acceptance." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Watermark", "Model extraction attacks", "Intellectual property protection" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/0592077fa59f7dfad290c76a77abc806c2ecda14.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Interpretable Boundary-based Watermark Up to the condition of Lov\\'asz Local Lemma" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
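The watermarking record above describes trigger construction only at a high level: pick samples near the base model's decision boundary, assign them the least likely labels, and use perturbed copies of the model to certify discriminability. The sketch below is an illustrative reconstruction of that description; the margin threshold, the disagreement heuristic, and all names are assumptions for exposition, not the paper's code.

```python
import torch

@torch.no_grad()
def select_boundary_triggers(base_model, perturbed_models, x, num_triggers, margin=0.05):
    """Illustrative boundary-based trigger selection (not the paper's implementation).

    base_model:       classifier returning logits of shape (N, K).
    perturbed_models: list of independently perturbed copies of the base model.
    x:                candidate inputs, shape (N, C, H, W).
    Keeps candidates whose top-2 class probabilities are within `margin` of each
    other (near the decision boundary), assigns each the least likely label, and
    prefers candidates on which the perturbed copies disagree.
    """
    probs = base_model(x).softmax(dim=-1)                 # (N, K)
    top2 = probs.topk(2, dim=-1).values                   # (N, 2), sorted descending
    near_boundary = (top2[:, 0] - top2[:, 1]) < margin    # ambiguous for the base model

    # Rare-label assignment: the class the base model considers least likely.
    trigger_labels = probs.argmin(dim=-1)                 # (N,)

    # Prefer samples on which the perturbed models disagree among themselves,
    # so an unrelated benign model is unlikely to reproduce the trigger labels.
    votes = torch.stack([m(x).argmax(dim=-1) for m in perturbed_models])  # (S, N)
    disagreement = (votes != votes[0]).float().mean(dim=0)                # (N,)

    scores = disagreement * near_boundary.float()
    top = scores.topk(min(num_triggers, x.shape[0])).indices
    return x[top], trigger_labels[top]
```

The reviewers' false-positive concern maps onto the last step: whatever scoring rule is used, one still has to verify empirically that independently trained benign models do not reproduce the assigned rare labels on the selected triggers.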
xz3dmxfFva
Video Representation Learning Without Natural Videos
main
Active
video representation learning;learning from synthetic data
unsupervised, self-supervised, semi-supervised, and supervised representation learning
1;5;5
4;4;4
1;3;3
2;2;2
2;3;3
3.666667
4
2.333333
2
2.666667
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Could you provide experimental results from other datasets, such as Something-SomethingV2?\n2. What type of generator is used in this study? The paper mentions an on-the-fly generation strategy for training (Page 4, Line 215). Does this approach consume more computational resources than the original pre-training? Please provide relevant experimental data.\n3. In the paragraph on Page 6, Line 292, how were the experimental data obtained? Is there a missing table or figure? How was the conclusion reached? Additionally, how is the “97.2%” figure mentioned multiple times in the paper derived? How are the experimental data in Section 4.3 calculated?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed method generates synthetic data through operations on shapes (e.g., accelerating, transforming), leading to models that perform comparably to those trained on natural data in downstream tasks. This approach is relatively simple and novel, distinguishing itself from previous work centered on human-based synthetic data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for effectively learning video representations from synthetic videos without the need for training on natural videos. It introduces simple generation steps, such as moving, transforming, and accelerating shapes, to create a series of synthetic video datasets. The authors explore the performance of models pre-trained on these synthetic datasets in downstream tasks, focusing primarily on action recognition using VideoMAE. 
Experiments on the UCF101 and HMDB51 datasets show that the performance of models pre-trained on synthetic data is comparable to those pre-trained on natural data. Additionally, results from UCF101-P demonstrate that models pre-trained on synthetic data exhibit similar robustness." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper lacks completeness:\n - a) It primarily focuses on synthetic data for action recognition (Page 6, Line 511). Still, it does not discuss or compare its experiments with similar works by [2] and [3]. Furthermore, the models and datasets used (VideoMAE and UCF101/HMDB51) are less robust compared to these studies.\n - b) The UCF101 and HMDB51 datasets are relatively simple and do not adequately demonstrate the advantages of synthetic data for video pre-training. Experiments on more complex datasets, like Something-SomethingV2, are needed to support the claims.\n - c) Section 6 mentions that this is merely a preliminary work and suggests additional tasks or models will be discussed in future work, leading to the conclusion that this paper does not constitute a complete study.\n\n2. The overall writing resembles a technical (experimental) report, particularly in Section 5, which is structured similarly to [1]. While the paper presents work on videos, it does not adequately expand on this area compared to the image work done by [1].\n\n3. The two synthetic datasets generated, “Accelerating Transforming StyleGAN Crops” and “Accelerating Transforming Textures,” yield good performance for the pre-trained models. However, these datasets are based on data generation techniques introduced by [1].\n\n4. There are writing issues:\n - Page 3, Line 140: Incorrect citation for “Section 3.1.”\n - Page 10, Line 537: The number “92.%” is incomplete.\n - Page 8, Line 424: The origin of “28 datasets” is not explained.\n\n5. Section 5 analyzes how synthetic data benefits video pre-training only from the perspective of static images or individual frames (spatial information). Given that videos are characterized by temporal dynamics and motion cues, the paper lacks relevant experiments and analyses in these areas.\n\n6. The accuracy of the “Dynamic StyleGAN videos” setting in Table 3 is reported to be only 68.7%, but the paper does not provide an explanation or conclusion regarding this result.\n\n[1]\tManel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, and Antonio Torralba. Learning to see by looking at noise. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview. net/forum?id=RQUl8gZnN7O.\n\n[2]\tYoWhan Kim, Samarth Mishra, SouYoung Jin, Rameswar Panda, Hilde Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Kate Saenko, Aude Oliva, and Rogerio Feris. How transferable are video representations based on synthetic data? In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview. net/forum?id=lRUCfzs5Hzg.\n\n[3]\tHoward Zhong, Samarth Mishra, Donghyun Kim, SouYoung Jin, Rameswar Panda, Hilde Kuehne, Leonid Karlinsky, Venkatesh Saligrama, Aude Oliva, Rogerio Feris. Learning Human Action Recognition Representations Without Real Humans. In Thirty-seventh Conference on Neural Information Processing Systems Track on Datasets and Benchmarks. 
URL https://openreview.net/forum?id=UBbm5embIB" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The main question this work attempts to answer is L041:\"we ask if natural videos are even needed to learn video representations that are similar in performance to current state-of-the-art representations\". However, this work fails to answer this question due to the use of only one ssl approach and UCF/HMDB datasets. The experiments performed in this work are not sufficient to be able to answer this question.\n\nThe claim L043:\" In this work, we reach a downstream performance that is similar to the performance of models\npre-trained on natural videos, while pre-training solely on simple synthetic videos and static images\", may be correct to some extent for UCF/HMDB/MAE space, but definitely not enough to make any conclusions, since it was explored for a single technique and datasets with small size and where temporal aspects are not really important.\n\nSome other questions/concerns I have:\n\n- Table 1: poor linear probe performance is not good, as linear probe is preferable over fine-tuning due to its efficiency and preserving learned weights. \n\n- Some more efforts in dataset progression will further strengthen this work. \n - Was accelerating shapes also experimented with? What about the number of shapes/circles? Density/size of shapes?\n\t- Why was circle chosen as a first step? What about square, rectangle, triangle, etc.?\n\t- Are there complex shapes too?\n\t- What about static complex shapes?\n\n- It is stated that the size of the synthetic dataset is kept the same as the original datasets, but from Table 2 it seems a large number of natural images have been used (9K ucf training videos vs 1.3M static images), which is a concern since this many images will provide a lot of variation in texture and patches in comparison with the original training videos.\n\n- Results on UCF101-DS are not shown; it is a real-world distribution shift dataset. Also, how about results on HMDB51-P, UCF101-P, Kinetics400-P, and SSv2-P?\n\n- Table 1: why are linear probe results missing for hmdb? \n\n- There is no analysis on the impact of the number of pre-training videos on the down-stream performance. It will be good to see how the performance varies with the pre-training dataset size. \n\n- In Section 5.3, please check if it should be ‘FID’ instead of ‘frame similarity’ in line 431? : “there is a strong negative correlation between the FID and the accuracy”.\n\nReferences:\n\n[R1] Goyal, Raghav, et al. \"The\" something something\" video database for learning and evaluating visual common sense.\" Proceedings of the IEEE international conference on computer vision. 2017.\n\n[R2] Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., ... & Zisserman, A. (2017). The kinetics human action video dataset. arXiv preprint arXiv:1705.06950.\n\n[R3] Schiappa et al.
\"Self-supervised learning for videos: A survey.\" ACM Computing Surveys 55.13s (2023): 1-37.\n\n[R2] Kumar et al. \"A Large-Scale Analysis on Self-Supervised Video Representation Learning.\" arXiv e-prints (2023): arXiv-2306." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The approach is novel as it leverages fully synthetic video data for self-supervised learning of video representations, an area that has not been widely explored for video models.\n\n- Pretraining with synthetic datasets achieves comparable performance to natural video pretraining, closing nearly 97% of the performance gap on UCF101 and even outperforming natural video pretraining on HMDB51. The model demonstrates strong robustness, as it outperforms UCF101-pretrained models on 11 out of 14 datasets in the UCF101-P suite, showing the potential of synthetic pretraining for generalization across challenging datasets.\n\n- The paper includes a detailed analysis of the synthetic dataset’s properties, especially the types of textures and natural image crops that are most beneficial, providing valuable insights for optimizing synthetic datasets for video model pretraining.\n\n- The paper is well-structured, clearly written, and easy to follow, with informative figures and logical progression through the methodology, making it accessible for readers." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a method for pretraining video models using a progression of synthetic datasets, gradually incorporating video properties like motion, shape transformations, and acceleration. Starting with static shapes, the authors progressively introduce complexity in the synthetic datasets, culminating in textures and crops from StyleGAN and natural images. This synthetic-only pretraining approach for VideoMAE fills 97.2% of the performance gap on UCF101 compared to models pretrained on natural videos, and outperforms on HMDB51. The authors also examine performance on UCF101-P for robustness and analyze dataset properties to identify correlations with downstream task success." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- Selection of datasets: this work is mostly focused on two datasets, ucf101 and hmdb51. these datasets are known to have appearance bias, and even image based models have shown very high performance. Therefore conclusions based on just these two datasets are not convincing for learning video representations. The authors should focus on widely used datasets with temporal aspects, such as something-something, diving, etc. [R1]\n\n- Also, the used datasets are small in size and most recent works in video action recognition are focused on large-scale datasets such as Kinetics variants [R2]. \n\n- Selection of approach: The authors mainly focus on VideoMAE, and conclusions based on just one approach can not be generalized. Video SSL is an active area of research and the authors should consider some other recent approaches for video SSL [R3, R4].\n\n- Motivation: Another aspect which should be discussed is; why we need synthetic dataset for pre-training? We already have unlimited natural videos available which can be potentially used; what are the advantages of using synthetic videos? 
It will be good to cover this aspect; otherwise, the motivation of this work is weak." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Why did the authors use a dataset 35-140 times smaller to train the baseline? \n* Why did the authors not follow the K400 baseline numbers in the original VideoMAE paper as the upper-bound? \n* Why is there no discussion on recent contrastive video SSL works?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The idea of using purely synthetic data for video SSL is interesting, and if thoroughly investigated could provide valuable insights regarding video SSL techniques. \n2. The synthetic dataset generation process is well described and thorough with clear motivation for each of its variants. This can be useful for other works exploring similar directions.\n3. Statistical analysis of the generated dataset could be useful but is not presented clearly." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors construct a synthetic video dataset focused on various motions and describe the dataset construction process in detail. Then they train a VideoMAE model on this synthetic dataset and compare against a baseline trained on a smaller natural video dataset. They also evaluate on two video classification datasets and on an out-of-distribution (augmented) variant of one dataset. Authors claim competitive performance of their method and next analyze which aspects of the synthetic datasets contributed to the strong performance. \n\nIn the dataset analysis, they examine benefits of incorporating natural images and synthetic textures. Larger image datasets and a mix of natural and synthetic textures improve downstream performance. Static synthetic textures outperform dynamic ones, and datasets with higher frame similarity and diversity yield better results. Color and spectrum properties also moderately impact performance. They also visualize PCA of model attention maps to show the model’s ability to capture structural information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1) Lacking understanding of prior work in video self-supervised learning:**\n\na) “Specifically, in the video domain, although various large-scale datasets exist and have been incorporated via similar self-supervised learning tasks, the improvements in downstream performance on video understanding (e.g. action recognition) are relatively low.” - this is not true; video SSL has led to clear improvements on action recognition over non-video SSL or supervised pre-training initializations [1, 2, 3]. These improvements are on par with image SSL methods as established in prior work [1, 2, 3].
\n\nb) A large body of contrastive SSL for video is ignored in related work. See related works sections in [1, 2, 3] for relevant papers. \n\n**2) Possibly incorrect problem formulation:**\nIn L204-209, the videoMAE model pre-trained on UCF-101 is viewed as an upper bound (since this is the test data distribution). However, SSL techniques behave differently (i.e. poorly) on smaller datasets as opposed to larger ones, even if the small dataset is the test distribution. See Table 2 in [4] where UCF-101 results are 4-6% points higher when pre-trained on the Kinetics-400 dataset instead of UCF-101 itself. This is further validated by later works exploring video SSL focussed on Kinetics-400 to avert such issues [1, 2, 3]. The same behavior can be seen in the VideoMAE paper [5] when comparing Table 2 vs Table 3. \nThe authors use this assumption of an upper bound for all comparisons, claiming to “close 97.2% of the gap to this upper bound” (see L052 in intro). However, this upper bound is much lower than the actual best performance of these SSL methods. \n\n**3) Unfair experimental setup:**\nThe upper-bound baseline is trained only on UCF-101 (which as mentioned in point 2 is already a handicapped version of these SSL methods). Adding to this, UCF-101 has only 8.5K images while their synthetic dataset used to train models has 300K-1.3M images (35 to 140 times more data). This makes the numbers reported in Table 1 clearly unfair. In such a case where different datasets are used, the authors should at least report the sizes of the datasets. The current style of reporting these results appears almost intentionally misleading to the reader. For a good example, see Table 2 in [5], which the authors themselves use as their baseline, where, given different pre-training datasets, their sizes are reported. \n\n**4) Linear Probing Baselines:** \nIt is known that linear probing with VideoMAE results in subpar performance. Contrastive methods such as DINO are better for linear probing. See results in [1, 6] where UCF-101 linear probing achieves over 90% accuracy. The 24.8% accuracy with the synthetic dataset reported in Table 1 does not by any means support the authors' claim “that useful video representations can be learned from synthetic videos and natural images, without incorporating natural videos in the training.” \n \n**5) Evaluation Task Mismatch:**\nThe synthetic dataset construction specifically focuses on injecting various motions into the data. However, solving the downstream tasks (UCF / HMDB) does not require much motion awareness. See [6, 7] where single frame classifiers achieve 85% / 49% accuracy on these datasets. In [6], zero-shot CLIP [8] (with no video training) achieves 69% / 46% accuracy with only a single frame on these datasets. \nConsider exploring motion focussed datasets such as SSv2, Diving48, FineGym. The synthetic dataset may actually contribute to more noticeable improvements in such cases and it is known how most video SSL methods are relatively weaker at such motion heavy datasets (especially under linear probing settings) [2, 5]. \n\n**6) Insufficient baselines:**\n\na) SSL techniques: The authors cite [9] to support their decision to use only videoMAE as a baseline. However, at the time of writing that paper contrastive methods were clearly dominant in video SSL over other approaches. In contrast, currently there are several contrastive, MAE style, and predictive style SSL techniques all performing competitively on video benchmarks.
Therefore it is apt to evaluate this method on at least on more baseline, especially given how the authors chose to evaluate under linear probing settings where MAE approaches are known to perform poorly sometimes. \n\nb) SSL dataset: Even in the referred work [9], their contrastive baseline is trained on multiple datasets (that are large scale and different to the smaller test datasets). It is the same case in the original VideoMAE paper (SSL on several datasets). The authors should train their SSL baseline on at least one large dataset following prior work. \n\n**7) Statistical Analysis of Dataset (minor):**\nThe procedure and efforts are good, but since the accuracy numbers are unreliable, it is unclear if these findings actually make sense (i.e. are they actually helping video SSL). Figure 4 could provide more information clearly: maybe color the variants in different colors?\n\n**8) PCA visualization (minor):**\nIt is unclear what the colors are; is this the distance to each of the top 3 principal components? What is dark vs light? \nAlternately, consider a visualization such as in the DINO paper (where you group using PCA) and show the 3 different masks (maybe as an image overlay). This could clearly show how attention maps align with image structure. \n \n\n\n**References**\n\n[1] Recasens, Adria, et al. \"Broaden your views for self-supervised video learning.\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\n\n[2] Ranasinghe, Kanchana, et al. \"Self-supervised video transformer.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[3] Hu, Kai, et al. \"Contrast and order representations for video self-supervised learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[4] Han, Tengda, Weidi Xie, and Andrew Zisserman. \"Self-supervised co-training for video representation learning.\" Advances in neural information processing systems 33 (2020): 5679-5690.\n\n[5] Tong, Zhan, et al. \"Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training.\" Advances in neural information processing systems 35 (2022): 10078-10093.\n\n[6] Ranasinghe, Kanchana, et al. “Language-based Action Concept Spaces Improve Video Self-Supervised Learning.” Advances in Neural Information Processing Systems. 2023. \n\n[7] Li, Junnan, Silvio Savarese, and Steven CH Hoi. \"Masked unsupervised self-training for label-free image classification.” ICLR 2023\n\n[8] Radford, Alec, et al. \"Learning transferable visual models from natural language supervision.\" International conference on machine learning. PMLR, 2021.\n\n[9] Baradad Jurjo, Manel, et al. \"Learning to see by looking at noise.\" Advances in Neural Information Processing Systems 34 (2021): 2556-2569." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We learn robust video representations from synthetic videos and natural images" }, "_bibtex": { "value": "@misc{\nyu2024video,\ntitle={Video Representation Learning Without Natural Videos},\nauthor={Xueyang Yu and Xinlei Chen and Yossi Gandelsman},\nyear={2024},\nurl={https://openreview.net/forum?id=xz3dmxfFva}\n}" }, "abstract": { "value": "In this paper, we show that useful video representations can be learned from synthetic videos and natural images, without incorporating natural videos in the training. We propose a progression of video datasets synthesized by simple generative processes, that model a growing set of natural video properties (e.g. 
motion, acceleration, and shape transformations). The downstream performance of video models pre-trained on these generated datasets gradually increases with the dataset progression. A VideoMAE model pre-trained on our synthetic videos closes 97.2\\% of the performance gap on UCF101 action classification between training from scratch and self-supervised pre-training from natural videos, and outperforms the pre-trained model on HMDB51. Introducing crops of static images to the pre-training stage results in similar performance to UCF101 pre-training and outperforms the UCF101 pre-trained model on 11 out of 14 out-of-distribution datasets of UCF101-P. Analyzing the low-level properties of the datasets, we identify correlations between frame diversity, frame similarity to natural data, and downstream performance. Our approach provides a more controllable and transparent alternative to video data curation processes for pre-training." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Xueyang_Yu1", "~Xinlei_Chen1", "~Yossi_Gandelsman2" ] }, "authors": { "value": [ "Xueyang Yu", "Xinlei Chen", "Yossi Gandelsman" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "video representation learning", "learning from synthetic data" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": { "value": "yu|video_representation_learning_without_natural_videos" }, "pdf": { "value": "/pdf/8a00313bef9b8b167c320215d1076d1e040eb836.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/c06aa60b8770ad18fba72b26d7a3a025452e6a1d.zip" }, "title": { "value": "Video Representation Learning Without Natural Videos" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xzKFnsJIXL
Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model
main
Active
Differential Privacy;Privacy Auditing;Machine Learning
alignment, fairness, safety, privacy, and societal considerations
5;5;6;8
3;2;3;4
2;3;4;3
2;3;3;3
3;3;4;3
6
3
3
2.75
3.25
0.866025
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* There might not exist a worst-case datapoint that can saturate the gradient clipping threshold at every iteration. Would it be possible to discuss what it means to breach the privacy of a datapoint that does not exist and may in fact be unrealizable?\n* It seems like an adversary who could control the sequence of gradients would also be able to infer from that something about the intermediate models. Is this OK for the hidden state threat model?\n* Is there any intuition behind privacy amplification only seems to occur for smaller batch sizes?\n* Does Implication 1 say anything beyond what Implication 2 says? If the batch size is as large as possible (the entire dataset), then every datapoint will be included in every optimization step of DP-SGD?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The problem setting and results are really interesting and open up new avenues for future work. Identifying different regimes where the gap (between the new auditing lower bounds and the theoretical upper bounds) vanishes and where the gap remains is a nice contribution.\n* The paper flows nicely and is and enjoyable to read.\n* Simplicity is a virtue and I think the design of the gradient-crafting adversaries for privacy auditing is clever yet also intuitive." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In the \"hidden state\" threat model for DP-SGD, an adversary does not have access to intermediate updates and can see only the final model. This paper proposes to audit the hidden state threat model for DP-SGD by introducing adversaries who select a gradient sequence offline (i.e., ahead of when training starts) in order to maximize the privacy loss of the final model even without access to the intermediate models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper uncovers some interesting empirical results but doesn’t offer up much explanation for them. E.g. for regimes where there is a gap, we don’t really get an explanation as to why this gap exists. I do feel that this work falls short of its goal to “enhance our understanding of privacy leakage” (line 537) by reporting the results without interpretation. I think that including more discussions like the “high-level explanation” starting at line 373 would help round out the paper and provide more concrete directions for future work.\n* Considering this is one of its main contributions, I found that the description of the adversaries and the privacy auditing scheme in the main paper is vague and not presented with full details." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "My question is about the $f$-DP part. In Remark 3, the authors claim that the approximation error from using the central limit theorem (CLT) can be ignored. However, the CLT may underestimate privacy when mixture distributions arise due to shuffling or sub-sampling [A]. For this reason, Nasr et al. (2023) adopt numerical methods, such as FFT, to calculate the privacy profile. For the hidden state model, an \n$f$-DP guarantee is provided by [B] without relying on the CLT.\n\nGiven these considerations, the authors' assertion that the CLT approximation error can be ignored may not be universally applicable, especially in scenarios involving mixture distributions in (shuffled) DP-SGD. A simple explanation in the main part might be beneficial to the readers.\n\n[A] Unified Enhancement of Privacy Bounds for Mixture Mechanisms via f-Differential Privacy. Wang et al., NeurIPS, 2023.\n\n[B] Shifted Interpolation for Differential Privacy. Bok et al., ICML, 2024." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper applies GC models to audit various regimes, including both small and over-parameterized models. Additionally, different deep learning architectures (CNN, ResNet, FCNN) are evaluated in the experiments. Overall, the empirical results are convincing." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a tighter privacy auditing bound by extending gradient-crafting (GC) models to the hidden state regime, where only the last iteration is released. By combining the GC technique with advanced privacy auditing methods, such as privacy auditing using \n$f$-DP, the authors achieve a refined privacy auditing bound for the hidden state model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weakness of this paper is that it does not introduce a new privacy auditing method; rather, it extends the existing gradient-crafting method to the hidden state regime. The primary technical contribution appears to be constructing a sequence of gradients without requiring knowledge of the intermediate gradients at each iteration." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- It seems that the privacy accounting upper bounds remain the same for various k (Figures 2 and 3). Now, is this because you have actually accounted only the 250 iterations the canary was inserted? Or do you display the auditing results only up to $T=250$ iteration?\n- I don't think there is any reason to use PRV as there is no subsampling. You could just use analytical Gaussian and obtain tight privacy bounds (instead of the upper and lower bounds you get from PRV).\n- Caption of Fig4: \"Figure 4a gives the ...\", I guess you mean 4b?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Auditing DP algorithms is an interesting and important line of work. Previous work, both in auditing and privacy accounting, have suggested that hiding the intermediate iterations of DP-SGD can be beneficial for the privacy guarantees. In this work, authors improve the privacy auditing of non-convex loss functions, by carefully selecting gradient canaries that get inserted for to the DP-SGD gradient sum. The proposed method significantly improves the existing methods, suggesting that the previous methods have not optimally used the threat model allowed by the hidden state setting.\n\nAuthors also study the amplification by iteration effect, by proposing a novel technique that introduces a crafted gradient only for the first step of DP-SGD. The empirical results for this study suggest that there might not be any amplification by iteration for certain non-convex losses, and that only the noise introduced by subsampling the data makes distinguishing the crafted gradient more difficult." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper studies the auditing of last iterate privacy guarantees of DP-SGD for non-convex loss functions. Authors propose an auditing method where a canary gradient is introduced to the DP-SGD steps. Two methods for crafting this \"worst-case\" canary gradients is proposed: 1. a random direction in the parameter space is selected and a gradient of norm C is inserted to the mnibatch of gradients and 2. the adversary simulates the DP-SGD run and picks as the canary gradient the least updated dimension, again with norm C. Authors demonstrate empirically that if the canary is inserted at every iteration, the auditing bounds correspond to the DP accounting upper bounds, suggesting that the hidden state model does not necessarily provide additional privacy protection. When the canary is inserted less frequently, the adversaries power decreases. Finally, authors propose an auditing setting where the adversary inserts the crafted gradient only in the first iteration and controls the loss landscape. In this setting the auditing bounds match closely the DP bounds when minibatch used in DP-SGD is sufficiently large." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In general the paper is very well written and the arguments are easy to follow. However, I get a bit confused on the discussion regarding accounting in the section 5.1. 
Since the threat model studied does not benefit from the subsampling amplification, I don't see any reason to use PRV accounting. When there is no subsampling, the privacy analysis could be performed tightly with Gaussian DP or Analytical Gaussian accounting (Balle et al., 2018).\n\nThe proposed method assumes that the crafted gradient is possible under the particular loss function. While I do believe that this might be the case for highly over-parametrized models, it would be great if the authors could discuss this further in the paper. Would it be possible to somehow trace back the worst-case sample from the worst-case crafted gradient? Similarly, the assumption that the adversary can craft the loss landscape to make the crafted gradient the most distinguishable could warrant further discussion." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "The simulated biased dimension method appears quite similar to gradient crafting at each step in terms of implementation [Nasr et al., 2023]. What are the specific differences between these two methods?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper is clearly written and easy to follow.\n2. The proposed method is simple yet empirically effective.\n3. The empirical results provide insights into the settings (batch size and $\\epsilon$) where DP-SGD may exhibit privacy amplification by iteration for non-convex problems." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a privacy auditing method for DP-SGD under a hidden state threat model. Assuming an arbitrary non-convex loss function, the authors abstract away the specific design of neighboring datasets and directly construct gradient sequences for auditing. Their main difference from previous work, which audits DP-SGD with gradient-crafting adversaries at each step, is that they ensure that the gradient sequence is predetermined and not influenced by intermediate outputs, i.e., the sequence is decided offline before training.\n\nThe authors consider two types of gradient-crafting:\n\n1. **Random-biased dimension:** randomly selects a dimension and crafts the gradient with the largest possible magnitude in that dimension.\n \n2. **Simulated-biased dimension:** simulates the training algorithm, identifies the least updated dimension, and crafts gradients with the largest possible magnitude there.\n \n\nExperimentally, they demonstrate that when a canary gradient is inserted at each step, their empirical privacy estimates are tight for common model architectures, including ConvNet, ResNet, and FCNN. The auditing also remains tight when the canary is periodically added with a periodicity of 5.
However, at a periodicity of 25, the auditing for some settings may not be tight, suggesting possible privacy amplification by iteration. To investigate this, the authors conducted experiments, showing that their auditing is tight when the batch size and $\\epsilon$ are large. This supports prior findings which shows DP-SGD on non-convex problems does not exhibit privacy amplification by iteration. Thus, relaxing the threat model to a hidden state model does not offer better privacy guarantees for certain non-convex problems." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Firstly, the conclusion of this work is not new: the main takeaway—that for some non-convex problems, the hidden state threat model does not lead to better privacy analysis for DP-SGD (i.e., no privacy amplification by iteration)—is the same as the conclusion in previous work [Annamalai, 2024]. However, the methodologies differ: [Annamalai, 2024] constructs a worst-case non-convex loss function for DP-SGD where information from all previous iterates is encoded in the final iterates, while this work directly constructs gradient sequences.\n\nMy other concern with this work is that its methodology is also quite limited, though it may be slightly more general than previous approaches (e.g. [Annamalai, 2024]). This work abstracts away the specifics of the loss function and model architecture, focusing directly on gradient crafting. While this simplification aids in designing canary gradients, it resembles a worst-case analysis across all non-convex problems. From a specific gradient sequence, it is not possible to deduce the corresponding loss function and model architecture on which DP-SGD is performed. As a result, the findings do not clarify which specific architectures and loss functions (i.e., types of non-convex problems) may or may not exhibit privacy amplification by iteration." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We investigate the privacy guarantees of the hidden state threat model via auditing with gradient-crafting adversaries." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024tighter,\ntitle={Tighter Privacy Auditing of {DP}-{SGD} in the Hidden State Threat Model},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xzKFnsJIXL},\nnote={under review}\n}" }, "abstract": { "value": "Machine learning models can be trained with formal privacy guarantees via differentially private optimizers such as DP-SGD. In this work, we focus on a threat model where the adversary has access only to the final model, with no visibility into intermediate updates. In the literature, this ``hidden state'' threat model exhibits a significant gap between the lower bound from empirical privacy auditing and the theoretical upper bound provided by privacy accounting. To challenge this gap, we propose to audit this threat model with adversaries that craft a gradient sequence designed to maximize the privacy loss of the final model without relying on intermediate updates. Our experiments show that this approach consistently outperforms previous attempts at auditing the hidden state model. Furthermore, our results advance the understanding of achievable privacy guarantees within this threat model. 
Specifically, when the crafted gradient is inserted at every optimization step, we show that concealing the intermediate model updates in DP-SGD does not amplify privacy. The situation is more complex when the crafted gradient is not inserted at every step: our auditing lower bound matches the privacy upper bound only for an adversarially-chosen loss landscape and a sufficiently large batch size. This suggests that existing privacy upper bounds can be improved in certain regimes." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Differential Privacy", "Privacy Auditing", "Machine Learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/5c62abed3343ae47f66c5ac8c72739ba0a53ffc7.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ce99971ec55c7c908c2213b04b3ac7ce01ef5c6b.zip" }, "title": { "value": "Tighter Privacy Auditing of DP-SGD in the Hidden State Threat Model" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
xzSUdw6s76
PALMBENCH: A COMPREHENSIVE BENCHMARK OF COMPRESSED LARGE LANGUAGE MODELS ON MOBILE PLATFORMS
main
Active
Mobile Platforms;Large Language Models;Quantization;Benchmark
datasets and benchmarks
5;5;5;6;8
3;4;5;4;4
2;3;2;3;4
2;2;2;2;3
3;3;3;3;4
5.8
4
2.8
2.2
3.2
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### Motivation\n\n* The authors suggest in the introduction that \"running LLMs locally\" can lead to \"increased reliability\". Personally and from experience, I would not be that confident on making such claims, i.e. that cloud deployments are less reliable that deploying locally, especially given the infancy of the evaluated frameworks. I would appreciate some more substantiation here from the authors.\n\n### Methodology\n\n* Wrt methodology and experimental setup, the paper misses substantial information that hurt the overall reproducibility potential. Such omissions include:\n - Operating system and framework versions\n - Automation of iOS interface\n - How different components in Figure 1 actually coordinate and communicate.\n* Would the authors be willing to open-source their benchmarking system?\n* It is unclear whether the authors have converted the models themselves or have used the pre-compiled versions offered in GGUF and MLC repositories.\n* How do the authors ensure that the profiling methodology does not interfere with the behavior of the experiment? Also, how do the authors isolate the power lanes from the USB communication?\n* The authors claim that they are using a \"professional USB power meter for accurate power consumption measurements\". However, it is not clear how this yields the information needed, as devices are battery powered and not USB-powered. As such, the power draw from the USB does not yield any information related to energy consumption from a workload on device.\n\n### Evaluation\n\n* The evaluation does not integrate variance metrics. Should it be assumed that the experiments have run once?\n* What is the decoding strategy and hyperparameters that the authors use during evaluation?\n* With respect to temperature, does the average represent \"average over time\" or \"average over surface\"?\n* A missing dimension that is worth exploring in such a paper is the tradeoff between size due to parameters and due to quantization (or another compression method). For example, is it better using a lower-bitwidth llama-3.1-8b model on 4bits or a llama-3.2-3b model on 8 bits?\n* What is the bitwidth used in Figure 2 across models?\n* In Figure 3, the GPU utilization across phones is quite large, which comes in contrast with the \"memory bounded\" workload claim of the authors. I would appreciate some more insights here.\n* §4.2: \"3-bit quantization results in lower CPU and GPU usage [...] decreased data transfers [...] reduced inference workload\": I am not sure this claim is correct or substantiated properly.\n* A great source of information to the reader, also for showcasing the memory-boundedness of the workload would be to plot a roofline model for devices/models.\n* The authors claim that iPhones provide significantly higher throughputs compared to other devices. 
This may not be painting the whole picture, as it is not clear whether this throughput can be sustained compared to the actively cooled Jetson Nano for instance.\n\n### Related work\n\n* The authors claim that prior work (MELT) has examined resource utilization and energy efficiency in a limited manner, and did not explore GPU workloads. However, this is not true. \n * Additionally, the authors do not run llama.cpp on iOS, which prior work has done.\n * Palmbench does not measure power consumption via hardware probes on phones and do not measure it at all on edge devices.\n * Palmbench does not report on prefill rates, which prior work does.\n * Palmbench does not integrate high-end edge devices (e.g. Jetson AGX) and different power profiles, which prior work does.\n* Moreover, the authors have unfortunately missed other related work in the domain of on-device LLM benchmarking [a-c].\n\n[a] Murthy, R., Yang, L., Tan, J., Awalgaonkar, T. M., Zhou, Y., Heinecke, S., ... & Savarese, S. (2024). MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases. arXiv preprint arXiv:2406.10290. \n[b] Lu, Z., Li, X., Cai, D., Yi, R., Liu, F., Zhang, X., ... & Xu, M. (2024). Small Language Models: Survey, Measurements, and Insights. arXiv preprint arXiv:2409.15790. \n[c] Xu, J., Li, Z., Chen, W., Wang, Q., Gao, X., Cai, Q., & Ling, Z. (2024). On-device language models: A comprehensive review. arXiv preprint arXiv:2409.00088. \n\n### Presentation and Other Nitpicking\n\n* Table 1, 2: Please put the captions above.\n* Llama-3/3.1: Missing reference\n* Throughput in Table 2 refers to generation throughput.\n* MELTing: I believe the framework is called MELT.\n* Figure 2: A boxplot presentation would be significantly richer in information, also showing peak memory usage (which can be the main factor for OOM errors).\n* Figure 4: The overlay of the two workloads suggests that they are running simultaneously. An alternative visualization which also annotates what's happening at each time step would be more informational to the reader.\n* §4.2: \"To keep the models in suitable size [...] evaluate the total memory sage of models\": I am not sure about what the authors mean here.\n* §4.2: \"[...] higher quantization escalates GPU memory [...]\": Escalates might not be the correct word here." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "* The paper quantifies the side-effects of quantization in various dimensions in language modelling on downstream tasks, including hallucinations and toxicity. This a valuable insight to the community.\n* The multitude of devices that the authors have integrated are welcome, but lack the highest performing tier on Android (e.g. Snapdragon 8 Gen 2) and Linux (e.g. Jetson Orin AGX).\n* I also greatly appreciate the integration of profiling measurements for reporting low-level CPU, GPU and memory utilization of the LLM inference workload." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes the benchmarking results of several quantized Large Language Models on smartphones and edge devices. Specifically, it quantifies the CPU, GPU, memory and energy consumption of running inference on device, along with the accuracy and performance degradation across various dimensions (hallucinations, toxicity) as a consequence of quantization." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The novel contributions of this paper are significantly fewer than originally stated and have been documented in prior work(s). Such include the evaluation of performance, energy and accuracy degradation of compressed LLMs on various devices, across similar frameworks. While I do agree that the downstream task performance quantification, along with the hallucination/alignment dimension is an important one, it seems to be the main novel contribution.\n* The energy measurement methodology followed by the paper is largely software-based and thus depends on the vendor's implementation. Moreover, comparisons across ecosystems can be quite deceptive.\n* The paper would benefit from a more rounded background/related work section, which describes prior work more faithfully and includes background information about the models and evaluated quantization methods." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "I'm not familiar with the 0-bit Llama model you mention on page 6. Isn't the bit count referring to quantization bits? How can that be zero? Is it a typo or just my ignorance?\n\nWhy FLIR in addition to power? There is a strong, deterministic relationship between power and temperature so the FLIR data should be useful primarily when spatial heterogeneity is important. Is this for user discomfort due to contact with the phone or some other purpose? What was the approximate thermal time constant for the system spanning power dissipation location (mostly transistors) to smartphone surface? Did the benchmarks run long enough for the temperature distribution to reach steady state?\n\nWhy did higher bit width models often have lower quality measures than similar lower bit width models? Is this noise or something fundamental do to the influence of bit width on inductive biases or learning dynamics?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "This paper is in a good area. The efficient execution of LLMs on edge devices is important, both due to practical impact and because these platforms have tight resource constraints that can motivate new ideas that are more broadly applicable.\n\nThe \"comprehensive\" in the title is accurate regarding the range of measures and will set a high bar for others characterizing LLMs running on mobile and edge devices.\n\nThe paper has good tutorial value. Even though the focus is on benchmarking, it tersely and clearly describes several approaches to making LLMs more efficient.\n\nThe benchmarking findings are likely to be of interest to designers and researchers working on using LLMs on resource-constrained platforms." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper describes a benchmarking process for compressed LLMs on mobile devices. It characterizes GPU and CPU utilization, speed, latency, memory requirements, power consumption, temperature, accuracies, and rates of undesirable behavior for several models on mobile and edge devices." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper has several findings regarding relative performance of different compression/quantization schemes. Some of these are counter-intuitive. The paper points these out but does not explain them. This holds true for findings on pages 9 and 10. The paper would probably be more interesting if it went deeper into the reasons for the surprising findings.\n\nBenchmarking papers are generally not as interesting (I am thinking about conference attendees, not reviewers) as papers with novel design approaches or surprising and important scientific findings, and that holds true for this paper in my opinion. However, benchmarking is important so I view this as a small weakness. This work needs to be done and the authors did it well." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "What does complentary mean in Section 3.3.7: two complementary devices ?\nI didn't get this statement in Sect. 4.2. CPU and GPU Utilization. Additionally, the iPhones exhibited ..., indicating the potential for optimization .... I though lower utilization is a good thing. In that case, why more optimization?\nI wonder whether the inconsistencies, such as in Sect. 4.7, 3 bit is worse than 2 bit GWQ and 4-bit, is due to unrepeated experiments." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The authors considers a large collection of LLMs, mobile platforms, and quatization schemes. The findings can be helpful for future researchers.\nThe organization of the paper is logical, which makes it easy to follow the author's arugments. \nThe authors give some valuable insights based on the experimental results, such as the usefulness of 2-bit quantization for certain tasks in Section 4.5." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper conducted extensive measurements of various on-device metrics and accuracy metrics for several popular LLMs on mobile devices to evaluate the effects of model size, quantization, hardware, inference engine, etc." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Given it was a heavily empirical paper, the authors did not clearly state whether the experiments were repeated multiple times to validate the results, or better, verified on multiple devices of the same family. It could be a challenge to gather all the devices, but the experimental results can be strengthened if the authors clearly stated that the devices were factory reset and repeated several times.\nEven though the large collections of experiment is comprensive, it can be overwhelming to make sense of the results, especially without the author's interpretation. For example, Sect. 4.2. It is probably more meaningful to emphasize on some comparisons and highlight the difference and take-aways. \nSimilar comments apply to Section 4.4. Any insights why 5-bit and 3-bit quantziation are worse?\nEven though the authors claim it to be a framework, it is not readily reproducible by other labs, hence hard to be benchmarked by external teams." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Thank you for submitting to ICLR 2025! \nI enjoy reading the paper. 
\nHelping developers better understand the tradeoff between model and system metrics for deploying LLMs on mobile devices is an important task.\nI have a few comments and questions about the paper and it would be great if the authors could address them.\n\nFirst, I think the paper should explain how automated testing or profiling of LLMs on the mobile device is conducted end-to-end at a high level.\nThe current Section 3 explains different metrics, LLMs, inference engines and so on in a quite detailed fashion.\nHowever, it is not clear how a user or developer could use the benchmark to perform profiling.\nIt is also unclear how the framework could be extended to profile new mobile devices or new LLMs.\nAlso, do users need external equipment to fully use the benchmark?\nFor example, in Section 3.3.7, the paper mentions that thermal behavior analysis is done using a FLIR C5 thermal imaging camera and a professional USB power meter.\n\nDespite a comprehensive analysis presented in the evaluation, I feel some of them sound trivial.\nFor example, for memory utilization, higher quantization levels consume more memory, while lower quantization than 4-bit reduces memory needs.\nFor CPU and GPU utilization, models with 4-bit quantization utilize more GPU duty cycles than those with 3-bit quantization.\nCould the user get more insights from the results presented by the benchmark?\nFor example, for memory utilization, other than model weight, how much additional memory overhead is present in each setting and what may be the reasons?\nFor CPU and GPU utilization, could we obtain detailed layer-by-layer profiles and analyze what is the bottleneck operation?\nWhat are some important implications or design decisions we can make from utilizing the benchmark?\n\nOther questions:\n1. In Section 4.2, it says \"LLM inference is inherently memory-bound, and its memory utilization can be reduced significantly by memory bandwidth quantization\". What is memory bandwidth quantization?\n2. Can we evaluate the LLMs on mobile devices using CPU-only mode, or does the framework require the devices to have GPUs?\n3. Do you have evaluation numbers for prefill and decode throughput respectively?\n4. What is the batch size used in different settings when evaluating the throughput of LLMs?\n5. How is the exact match and F1 score calculated when comparing quantized and non-quantized models? How to determine if the score is good or bad?\n6. In Section 4.4, it says \"Interestingly, the 5-bit and 3-bit models underperformed slightly. K-quant (ggml) 3-bit model produced more hallucinations and toxic content than the 2-bit model.\" How are hallucination and toxicity observed in Figure 8?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is written in good quality and easy to follow.\n2. The framework supports a range of devices and LLMs.\n3. The experiments are presented in detail with an analysis of the key metrics supported by the framework." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "Serving LLMs on mobile devices is becoming increasingly important due to privacy and security requirements.\nDeploying LLMs on mobile platforms is challenging as the developer may need to trade-off between model quality, inference performance and power efficiency.\nCurrent LLM benchmarks primarily target cloud platforms, with only a few for limited mobile devices.\nThis paper proposes PalmBench, a comprehensive benchmarking framework to evaluate the user experience of LLMs on mobile devices.\nThe framework evaluates LLM on metrics including efficiency, accuracy and toxicity.\nThe framework supports multiple quantized LLMs and multiple mobile devices with different types of OSs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Lack of explanation on how the automated testing is performed end-to-end.\n2. Unclear how easy it is to support new mobile devices or new LLMs.\n3. Some of the analysis and results seem to be trivial." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- What's the control flow (hierarchy) of the framework and how do different components interact? For instance, the paper lacks crucial details about how user queries are processed and evaluated using their framework, particularly in relation to AGI\n\n- The Fig.1 shows a linear flow from models to automation framework to metrics, but how do different quantization methods (Group-wise, GGML, GPTQ, AWQ) integrate into this flow? Shouldn't the quantization block be positioned between Models and Frameworks since models need to be quantized before being deployed through MLC or llama.cpp?\n\n- I was trying to reproduce the supplement materials, and trying to perform some benchmark tasks. However, I am unable to reproduce the experiments shown in the paper. I believe the supplementary materials is not complete. It looks like it's based on the NVTOP and processes the returning string. I'm trying to perform some benchmarking, could you please provide some details how to run your pipeline, ?\n\n- Could you provide a detailed description showing how AGI, custom apps, and LLM frameworks interact in your automated system? (e.g.,What's the exact interface between your custom apps and the LLM frameworks (MLC/llama.cpp)?\n\n- How you process user's query to evaluate the model? should it must have llama.cpp support, what if my model is not in llama.cpp model zoo?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- A benchmark for evaluating LLM for edge computing scenarios is necessary, and this study focuses on this.\n- Experimenting covering iOS, Android, and edge devices with detailed hardware specifications" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces PalmBench, a benchmark framework for evaluating LLM on edge computing scenarios, and also conducted evaluation on various models under different benchmark datasets. \nThe key goal of this paper is to complement the lacking of an LLM benchmark at the edge.\nThe proposed framework allows automated testing of various LLMs with different compression settings across multiple mobile devices. This paper also evaluates various popular LLMs with different benchmarks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The overall system design of PalmBench is vague to me and not elaborate in the paper; I am not able to get information on how each of the components collaborates in Figure 1 and what's the hierarchy of the system. \n\n- The Benchmark Automation Implementation details are lacking, it's hard to know how PalmBench to process different hardware platform benchmark query.\n\n- There is already have Benchmark [1] for evaluating LLM on Edge-Device, which should be properly discussed in this paper.\n\n\n[1] https://github.com/TianjinYellow/EdgeDeviceLLMCompetition-Starting-Kit?tab=readme-ov-file#submission-requirements" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Benchmarking Large Language Models User Experience for Mobile Deployment" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024palmbench,\ntitle={{PALMBENCH}: A {COMPREHENSIVE} {BENCHMARK} {OF} {COMPRESSED} {LARGE} {LANGUAGE} {MODELS} {ON} {MOBILE} {PLATFORMS}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=xzSUdw6s76},\nnote={under review}\n}" }, "abstract": { "value": "Deploying large language models (LLMs) locally on mobile devices is advantageous in scenarios where transmitting data to remote cloud servers is either undesirable due to privacy concerns or impractical due to network connection. Recent advancements have facilitated the local deployment of LLMs. However, local deployment also presents challenges, particularly in balancing quality (generative performance), latency, and throughput within the hardware constraints of mobile devices. In this paper, we introduce our lightweight, all-in-one automated benchmarking framework that allows users to evaluate LLMs on mobile devices. We provide a comprehensive benchmark of various popular LLMs with different quantization configurations (both weights and activations) across multiple mobile platforms with varying hardware capabilities. Unlike traditional benchmarks that assess full-scale models on high-end GPU clusters, we focus on evaluating resource efficiency (memory and power consumption) and harmful output for compressed models on mobile devices. 
Our key observations include: i) differences in energy efficiency and throughput across mobile platforms; ii) the impact of quantization on memory usage, GPU execution time, and power consumption; and iii) accuracy and performance degradation of quantized models compared to their non-quantized counterparts; and iv) the frequency of hallucinations and toxic content generated by compressed LLMs on\nmobile devices." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Mobile Platforms", "Large Language Models", "Quantization", "Benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/1490248593752944c41bba0142a7bcd92ceb6490.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/3559be4916a6a542076f8ad686fc767cfbdaf8d7.zip" }, "title": { "value": "PALMBENCH: A COMPREHENSIVE BENCHMARK OF COMPRESSED LARGE LANGUAGE MODELS ON MOBILE PLATFORMS" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y10AP0BkID
Towards Realistic Example-based Modeling via 3D Gaussian Stitching
main
Withdraw
Gaussian splatting;Composition;Example-based Modeling
applications to computer vision, audio, language, and other modalities
Xinyu Gao;Ziyi Yang;Bingchen Gong;Xiaoguang Han;Sipeng Yang;Xiaogang Jin
~Xinyu_Gao1;~Ziyi_Yang4;~Bingchen_Gong1;~Xiaoguang_Han2;~Sipeng_Yang1;~Xiaogang_Jin1
3;3;5;6
4;2;4;3
3;3;3;3
2;2;2;3
2;2;2;4
4.25
3.25
3
2.25
2.5
0.174078
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": null, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": null, "primary_area": null, "questions": null, "rating": null, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": null, "summary": null, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": null, "withdrawal_confirmation": { "value": "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors." } }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "When does the method break? Can you characterize the (or a) class of shapes where the assumptions made work well and what happens if the data does not match these assumptions?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea of using differential coordinates (as in Poisson image editing, or, put alternatively, editing in higher-frequency bands of the appearance signal only) in the context of editing \"Gaussian Splatting\" scenes is quite appealing; if it wasn't for other reservations, I would consider this a sufficient argument for accepting the paper (I should note, however, that I am not an active researcher in this area and thus might overlook prior work; but if this is new, I would think it is a really nice and useful idea for interactive editing of such data).\n\nThe results obtained are convincing (see below for qualifications) and the interactive demo of the system shown in the supplementary material shows that this could be practically applied.\n\nFurther, the general area of making use of scene representations obtained from matching \"real-world\" photos by making them editable is certainly important for making the whole research area useful to artistic applications in computer graphics." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method for interactively editing radiance fields encoded in point clouds (\"Guassian Splatting\"). It consists of two components: First, point clouds have to be segmented, which seems to be done mostly manually with seam-detection supported by nearest-neighbor detection across two point clouds to be stitched. It appears to me that this is not the main focus of the paper. 
The second component is an adaptation of Poisson image editing to radiance fields, where local differences in radiance information (encoded as spherical harmonics) are propagated along with boundary constraints. To improve realism, additional constraints are added, such as attracting the computed colors towards clusters of \"colors\" (actually, appearance / radiance information) in the target shape (to which a new piece is stitched), look-ups of nearby colors in the target point cloud for smoother transitions, and ad-hoc mid-frequency \"noise\" (function $\Phi$) for increased variability.\n\nThe resulting merged objects look quite convincing and appear more natural than competing results in the examples selected in the main paper. The appendix provides a more detailed evaluation, including quantitative results.\n\nLimitations are not discussed (aside from the obvious limitation to merging rigid pieces, to which the paper limits itself a priori)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "First of all, I have the impression that this paper is out of scope of ICLR. In terms of its approach and methodology as well as the problem it addresses, I would see this firmly within \"computer graphics and interactive techniques\"; it has only tangential impact on machine learning and representation learning. It might be close enough to warrant consideration, but I would see a graphics venue as a far better fit (for example, in computer graphics people would be much more interested in how this could lead to better software systems for creating better content, rather than evaluating the suitability and performance of the algorithms and data structures proposed for fitting and representing data in general).\n\nThat said, I also see room for improvement in positioning the paper. It seems to me that the stitching algorithm (differential appearance merging) is the main idea, while segmentation does not offer much novelty over the state of the art (see for example the long line of work on direct point cloud editing methods starting with Pauly et al.'s PointShop3D [Siggraph 2002, 2003]). More sophisticated segmentation algorithms, such as graph-cut methods on point clouds, are also known and for example already available in open source software (PCL/point cloud library). While these do not directly handle radiance data, it would not be a big step to integrate such ideas to reduce manual effort. Similarly, a long line of automatic outlier detection and removal methods could also improve this step easily (see for example \"tensor voting\" as a very basic approach). My point here is that the contribution of this paper is rather moderate, so it would make sense to position the paper with more emphasis on the second step (differential merging), which I personally found to be really neat.\n\nAnother issue I see (and again, probably more in writing/positioning than conceptually) is that the method mixes basic ideas (such as differential coordinates for appearance) with ad-hoc heuristics such as sinusoidal perturbations of color look-ups to create more appealing results.
Differentiating between artistic and foundational ideas would make the paper stronger, in particular in the context of a rather technical venue aiming at fundamental machine learning research rather than graphics and interaction.\n\nFinally, I am also a bit disappointed by the very short discussion of limitations in the main paper, which basically just states that the paper only considers rigid pieces; I would think that there are certainly trade-offs and mismatches in prior assumptions in the presented approach, independent of whether the geometry to be merged was also deformed prior to merging, that could be discussed more clearly. Understanding \"when it breaks\" is very important in order to put such ideas to use in new context.\n\nOverall, I like the key idea, and if it is really novel, this is a strong point, but limitations in presentation and evaluation (in particular, discussion of limitations) make me skeptical; for the context of ICLR I would thus at this point be hesitant in terms of making a positive recommendation for this submission." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "see the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written, logically clear, and readable.\n\nThe proposed method is reasonable and improves the Nerf-based method by seamlessly integrating 3DGS with the real-world model.\nThe paper provides experimental results to support its claims. Visualization results demonstrate the effectiveness of the method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a novel method for realistic example-based modeling using 3D Gaussian Splatting (3DGS). It addresses the limitations of current methods like SeamlessNeRF by enabling interactive editing and seamless stitching of 3D models from real-world scenes. Key innovations include a real-time GUI for model segmentation and transformation, KNN analysis for boundary detection, and a two-phase optimization strategy for texture blending." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. In Figure 13, the color propagation results are not clear. Could the authors provide examples with stronger color contrast or offer a detailed explanation of the color propagation, especially how the method handles various texture complexities?\n2. The appendix briefly mentions time consumption comparisons, but could the authors elaborate on the computational demands of their method in the main text, perhaps comparing it to other existing methods in more detailed scenarios?\n3. The paper mentions a GUI editor that facilitates the interactive modeling process. 
Could the authors provide more details or a demonstration of how the GUI editor is used, especially any features that help users manage complex scenes or models?\n4. The success of the method heavily relies on the quality and complexity of the initial 3D models. Poor initial models might lead to suboptimal results, which needs further discussion in the paper.\n5. The discussion of limitations is insufficient. It is recommended to add detailed figures and a discussion of the failure cases of the paper's method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Using color as consistency supervision is sufficient to some extent, but it only achieves local statistical alignment. Have the authors considered incorporating semantic labels, such as SAM or DINO, for additional stylization supervision or evaluation?\n\n2. Regarding the S-phase, it resembles a stylization process. Why didn't the authors follow a conventional stylization pipeline instead of using a statistical method, which may lack generalizability?\n\n3. As for Fig. 15, why implement SA3D instead of Segment Any 3D Gaussians / Gaussian Grouping for segmentation, since the latter are newer than the former?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper proposes an effective method for composing pretrained 3D Gaussians, ensuring geometric accuracy and successfully addressing stylization between different objects.\n\n2. A novel sampling-based optimization strategy is introduced to maintain the consistency of texture color between two 3D Gaussians, demonstrating the authors' deep insight into object composition.\n\n3. The authors develop a user-friendly GUI for composition and editing, which seems like a useful technical contribution for the proposed method." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method for stitching 3D Gaussians, introducing KNN, T-phase, and S-phase to effectively compose and stylize 3D Gaussians. Experimental results in certain cases demonstrate that this approach outperforms baseline methods, such as SeamlessNeRF." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Please use the same cases as those in SeamlessNeRF (e.g., Figures 4, 5, 6, and 7 from SeamlessNeRF) to ensure a fair comparison.\n\n2. Providing only VQA scores and images/videos is not sufficient for a convincing evaluation. I recommend that the authors introduce a more robust evaluation metric, rather than relying solely on videos/results.\n\n3. A user-friendly GUI is primarily a technical contribution rather than a theoretical one, even though it is emphasized in Section 1. While there is text supporting the novelty of the GUI, more emphasis is needed.
Additionally, please include more detailed guidance in the supplementary material." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* For larger scenes with multiple objects, how does the runtime performance scale? Are there specific optimizations that could be applied to maintain the method's real-time processing speed in more complex compositions?\n* How could the proposed method be adapted for objects with highly specular or translucent materials? The experiments in the paper primarily focus on diffuse materials—would adjustments be needed to handle complex reflectance and transparency properties?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The proposed approach addresses a key gap in example-based modeling by enabling realistic and seamless stitching of 3D objects from real-world scenes, which previous techniques like SeamlessNeRF struggle with. This makes the method highly applicable for real-world applications that require detailed, cohesive compositions.\n* Extensive experimentation shows that this approach achieves high-quality, harmonious blends even in challenging real-world cases, where traditional neural field blending methods fail." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work proposes a new method for realistic 3D object composition by combining parts of existing models. Unlike prior approaches that focus on shape composition and struggle with real-world 3D objects, this method uses 3D Gaussian Splatting to achieve seamless blending of textures and structures between objects. A novel sample-guided synthesis approach allows for real-time segmentation and transformation of these objects. Additionally, a two-phase optimization process—sampling-based cloning and clustering-based tuning—ensures both local texture harmonization and global appearance consistency. Extensive experiments demonstrate that this method outperforms existing techniques like SeamlessNeRF, offering more realistic synthesis and interactive editing capabilities in real-world scenes." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Although the paper’s experiments on real-world data show improvements over SeamlessNeRF, it could further benefit from a more diverse dataset, as the current scope primarily includes simple object compositions without significant lighting variation or occlusion. Expanding the dataset to more complex scenes or settings with varied lighting conditions and object types could better showcase the generalizability of the approach.\n* The paper primarily demonstrates its effectiveness on objects with diffuse materials, which may limit its applicability to scenes involving highly specular or translucent objects. 
Complex materials with reflective or transparent properties could introduce additional challenges for the proposed method." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A novel framework for example-based modeling via 3D Gaussian stitching." }, "_bibtex": { "value": "@misc{\ngao2024towards,\ntitle={Towards Realistic Example-based Modeling via 3D Gaussian Stitching},\nauthor={Xinyu Gao and Ziyi Yang and Bingchen Gong and Xiaoguang Han and Sipeng Yang and Xiaogang Jin},\nyear={2024},\nurl={https://openreview.net/forum?id=y10AP0BkID}\n}" }, "abstract": { "value": "Using parts of existing models to rebuild new models, commonly termed as example-based modeling, is a classical methodology in the realm of computer graphics. Previous works mostly focus on shape composition, making them very hard to use for realistic composition of 3D objects captured from real-world scenes. This leads to combining multiple NeRFs into a single 3D scene to achieve seamless appearance blending. However, the current SeamlessNeRF method struggles to achieve interactive editing and harmonious stitching for real-world scenes due to its gradient-based strategy and grid-based representation.\n\nTo this end, we present an example-based modeling method that combines multiple Gaussian fields in a point-based representation using sample-guided synthesis. Specifically, as for composition, we create a GUI to segment and transform multiple fields in real time, easily obtaining a semantically meaningful composition of models represented by 3D Gaussian Splatting (3DGS). For texture blending, due to the discrete and irregular nature of 3DGS, straightforwardly applying gradient propagation as SeamlssNeRF is not supported. Thus, a novel sampling-based cloning method is proposed to harmonize the blending while preserving the original rich texture and content. Our workflow consists of three steps: 1) real-time segmentation and transformation of a Gaussian model using a well-tailored GUI, 2) KNN analysis to identify boundary points in the intersecting area between the source and target models, and 3) two-phase optimization of the target model using sampling-based cloning and gradient constraints. Extensive experimental results validate that our approach significantly outperforms previous works in terms of realistic synthesis, demonstrating its practicality." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": { "value": [ "~Xinyu_Gao1", "~Ziyi_Yang4", "~Bingchen_Gong1", "~Xiaoguang_Han2", "~Sipeng_Yang1", "~Xiaogang_Jin1" ] }, "authors": { "value": [ "Xinyu Gao", "Ziyi Yang", "Bingchen Gong", "Xiaoguang Han", "Sipeng Yang", "Xiaogang Jin" ] }, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Gaussian splatting", "Composition", "Example-based Modeling" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": { "value": "gao|towards_realistic_examplebased_modeling_via_3d_gaussian_stitching" }, "pdf": { "value": "/pdf/3fdd04472b63ff46f0ebbd3a0c1be5e445578ea5.pdf" }, "presentation": null, "primary_area": { "value": "applications to computer vision, audio, language, and other modalities" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b24c17b68f4dfc61d6ca85e0f14fc57a39c438e8.zip" }, "title": { "value": "Towards Realistic Example-based Modeling via 3D Gaussian Stitching" }, "venue": { "value": "ICLR 2025 Conference Withdrawn Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Withdrawn_Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y15LAM4u0A
EmbodiedCity: A Benchmark Platform for Embodied Agent in Real-world City Environment
main
Active
Embodied intelligence;real-world city environment;large language model agent;benchmark
datasets and benchmarks
3;3;3;6
4;5;4;3
3;2;2;3
1;2;2;2
2;3;2;3
3.75
4
2.5
1.75
2.5
-0.816497
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The platform's integration with Unreal Engine and AirSim, along with the provision of a Python SDK, significantly lowers the barrier for use and promotes flexible, scalable experimentation for researchers.\n- The benchmark includes evaluations of popular large language models (e.g., GPT-4, Claude 3) across tasks, providing a well-rounded quantitative baseline for the embodied intelligence community.\n- The open structure allows future expansions, such as multi-agent collaboration and adaptability, fostering an extensible environment for advanced research in embodied AI." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a comprehensive benchmark platform aimed at assessing the performance of embodied agents in a realistic urban setting. Unlike previous benchmarks limited to indoor or fictional settings, this platform features a highly realistic 3D simulation of an actual city district in Beijing. The benchmark includes five core tasks for evaluating embodied capabilities: scene understanding, question answering, dialogue, visual language navigation, and task planning. These tasks are designed to capture the core embodied AI abilities of perception, reasoning, and decision-making. The platform supports multiple agents, offers an interface for real-time control, and provides a SDK for easy access, along with a dataset for training and evaluation." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While the paper addresses the city layout aspect of the sim-to-real gap, it does not extend to other critical factors impacting real-world applicability. Additionally, no experiments are conducted to quantify the sim-to-real benefits derived from using a real-world city layout, leaving the practical advantages of this choice unclear.\n2. The shadows and lighting in Figure 3 appear less realistic, which may limit the benchmark's effectiveness in simulating real-world visual conditions.\n3. The benchmark predominantly focuses on drone-related tasks, with limited discussion on tasks relevant to autonomous vehicle planning. Definitions, metrics, and methodologies for evaluating embodied tasks in autonomous driving contexts, particularly for planning, are not included.\n4. The tasks are largely oriented toward language-based interactions, with an emphasis on using large language models. Metrics like BLEU and ROUGE, which primarily measure text quality, may not fully capture the performance of embodied AI tasks, raising questions about the suitability of these metrics for this benchmark.\n5. The paper does not specify a license for the assets used. 
Given that some assets are sourced from Unreal Engine, Baidu Maps, and Amap, it remains unclear whether these assets are freely distributable under their original licenses. Clarification on the licensing terms for these assets would strengthen the transparency and accessibility of the benchmark." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The authors introduce a new urban simulator for simulating pedestrians and traffic states of a city.\n2. This work provides the resources of a large digital city district, which is quite scarce in this field.\n3. This study evaluates several state-of-the-art large multimodal models (LMMs) against the proposed benchmark to assess their effectiveness in addressing embodied tasks from multiple perspectives. The results largely align with findings from other LMM benchmarks, which partially support the validity of the proposed benchmark." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, the authors construct a benchmark platform for embodied intelligence evaluation in real-world city environments. They create a highly realistic 3D simulation environment based on real city elements and conduct high-fidelity simulations of pedestrian and vehicle flows. The platform has a set of evaluation tasks and provides input and output interfaces. The quantitative evaluation is performed over popular large language models on this platform." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Some metrics presented in Table 1 appear to be subjective and potentially incorrect. For instance, regarding visual realism, the rendering quality in Figure 1 is noticeably less convincing compared to GRUtopia. The images appear to be produced by a rasterization renderer rather than a ray tracing or path tracing renderer, revealing a significant disparity between the quality of human-crafted assets and actual buildings. Furthermore, from an embodiment perspective, the platform seems to primarily incorporate drones and vehicles, lacking support for widely-used embodiments such as humanoid and quadruped robots, despite the authors' claim in Table 1 that all these embodiments are supported.\n2. The diversity of the QA templates illustrated in Figures 8 and 9 appears to be quite limited. A broader range of templates would enhance the comprehensiveness of the evaluation.\n3. While the authors assert that the scene is crafted from real city maps, they do not clarify the benefits of this approach. The quality of the assets and rendered images does not seem realistic enough to justify this claim. 
Additionally, the authors have not demonstrated the sim-to-real potential of the proposed dataset, which is crucial for its application.\n4. Although the report includes scores based on several metrics, there is a lack of intuitive illustrations to showcase what the large multimodal models (LMM)-agents excel at solving. The results presented do not clearly reveal the main challenges of the proposed tasks.\n5. The rationale for incorporating dynamic pedestrians and vehicles into this platform is not clearly articulated. There appears to be no strong connection between the proposed tasks and the roles of pedestrians and vehicles, which raises questions about their necessity in the framework.\n6. Details regarding the LMM agents are insufficiently described. It remains unclear how these agents handle sequential egocentric observations, which is essential for understanding their operational effectiveness.\n7. The usefulness of the proposed benchmark is not adequately established. The absence of learnable baselines to validate the dataset’s rationale potentially limits the significance and impact of this work.\n8. The authors do not justify the running efficiency of the platform, which is critical for scaling training within the environment. A discussion of performance metrics or benchmarks would be beneficial.\n9. The authors have not conducted experiments to explore the impact of different embodiments on task performance. Such investigations could provide valuable insights into the effectiveness of various embodiment strategies.\n10. The metrics for Evaluative Question Answering (EQA) rely on conventional reference-based NLP metrics, which may not directly demonstrate the correctness of the answers provided. It would be more effective for the authors to utilize a large language model (LLM) to assess the correctness of answers in relation to the ground truth.\n\nTypos:\n1. In the caption of Table 6, \"vision-and-navigation\" should be corrected to \"vision-and-language navigation.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "How much navigation, if any, is required for the Embodied QA, Embodied Dialogue, and Embodied Task Planning tasks?\n\nWhat are the mean lengths of the \"Short\" and \"Long\" paths for the VLN task?\n\nWhat is the performance of the simulator like?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed simulator and environment covers a large area.\n\nThe authors create various tasks in the simulator.\n\nThe authors evaluate various current VLMs on their proposed tasks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposed an open-world simulator for embodied agents. The simulator is based on the city Beijing. 
To evaluate agents in this simulator, the authors propose 5 tasks: Embodied Scene Understanding, Embodied Question Answering, Embodied Dialogue, Embodied Action (navigation), and Embodied Task Planning.\n\nThey evaluate 4 current VLMs on these tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Visuals. The paper advertises high-quality visuals, and rates their visuals 3 out of 3 stars. To the reviewers, the visuals do not look better than things rated 2 out of 3 stars, such as CARLA.\n\nEvaluation metrics. Evaluating Embodied QA, Embodied Dialogue, and Embodied Task Planning with captioning and translation metrics, BLEU, CIDEr, etc., seems like a poor choice. I encourage the authors to define a notion of success for each task that evaluates if the agent did the task correctly. For example, for the Embodied QA and Dialogue tasks, making questions with ambiguous answers multiple choice, or using something like LLM-Match (https://open-eqa.github.io). Questions without ambiguous answers can be evaluated directly. This would lead to a more meaningful and interpretable metric.\n\nMissing References. This paper is missing a very large number of references. For example, the authors mention, by name, the tasks Vision-and-Language Navigation (VLN) (https://arxiv.org/abs/1711.07280) and Embodied QA (https://arxiv.org/abs/1711.11543), but do not cite either work. They also do not cite the paper that proposed SPL (https://arxiv.org/abs/1807.06757). Overall, the space of EmbodiedAI has seen considerable interest and work, but the paper cites very little of the work in this area." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper constructs a detailed 3D environment based on real-world urban settings in Beijing, improving on previous fictional models.\n2. The paper establishes a diverse set of evaluation tasks that assess various dimensions of embodied intelligence.\n3. The paper provides accessible input and output interfaces for easy interaction and performance evaluation of embodied agents." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a benchmark platform for evaluating embodied artificial intelligence in realistic urban environments, addressing gaps in open-world scenarios. It features a detailed 3D simulation, diverse evaluation tasks, and user-friendly interfaces, enhancing embodied intelligence capabilities and supporting practical applications in artificial general intelligence." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.
The motivation behind this paper aligns with the principles of ELM [1], focusing on embodied understanding in driving scenarios. A detailed explanation of the differences between the two approaches is necessary.\n2. Most of the evaluation tasks already exist in current literature. Providing a detailed explanation to distinguish these tasks from those in other works is important.\n\n[1] Embodied Understanding of Driving Scenarios" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We construct a benchmark platform for embodied intelligence evaluation in real-world city environments." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024embodiedcity,\ntitle={EmbodiedCity: A Benchmark Platform for Embodied Agent in Real-world City Environment},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y15LAM4u0A},\nnote={under review}\n}" }, "abstract": { "value": "Embodied artificial intelligence (EmbodiedAI) emphasizes the role of an agent's body in generating human-like behaviors. The recent efforts on EmbodiedAI pay a lot of attention to building up machine learning models to possess perceiving, planning, and acting abilities, thereby enabling real-time interaction with the world. However, most works focus on bounded indoor environments, such as navigation in a room or manipulating a device, with limited exploration of embodying the agents in open-world scenarios. That is, embodied intelligence in the open and outdoor environment is less explored, for which one potential reason is the lack of high-quality simulators, benchmarks, and datasets. To address it, in this paper, we construct a benchmark platform for embodied intelligence evaluation in real-world city environments. Specifically, we first construct a highly realistic 3D simulation environment based on the real buildings, roads, and other elements in a real city. In this environment, we combine historically collected data and simulation algorithms to conduct simulations of pedestrian and vehicle flows with high fidelity. Further, we designed a set of evaluation tasks covering different EmbodiedAI abilities. Moreover, we provide a complete set of input and output interfaces for access, enabling embodied agents to easily take task requirements and current environmental observations as input and then make decisions and obtain performance evaluations. On the one hand, it expands the capability of existing embodied intelligence to higher levels. On the other hand, it has a higher practical value in the real world and can support more potential applications for artificial general intelligence. Based on this platform, we evaluate some popular large language models for embodied intelligence capabilities of different dimensions and difficulties. The executable program of this platform is available for download, and we have also released an easy-to-use Python library and detailed tutorial documents. All of the software, Python library, codes, datasets, tutorials, and real-time online service are available on this anonymous website: https://embodied-ai.city." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Embodied intelligence", "real-world city environment", "large language model agent", "benchmark" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6c3df24b6df2c7608015f96a86714e223b60acd1.pdf" }, "presentation": null, "primary_area": { "value": "datasets and benchmarks" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "EmbodiedCity: A Benchmark Platform for Embodied Agent in Real-world City Environment" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y1UHa9sl2w
OntoFAR: Hierarchical Multi-Ontology Fusion Better Augments EHR Representation
main
Active
Health Informatics;EHR;Diagnosis Prediction;Healthcare Representation
other topics in machine learning (i.e., none of the above)
3;5;5;5;5
3;4;4;2;5
3;3;3;3;3
2;3;2;3;2
2;3;2;3;2
4.6
3.6
3
2.4
2.4
0.294174
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Related Work: Additional discussion on how the proposed model distinctly differs from related studies would be beneficial. Clarifying the relationship between prior work and the proposed model would enhance the understanding of OntoFAR's unique contributions.\n\n2. HGIP in Table 3: In Table 3, HGIP appears to exclude VMP. Does this mean that message passing was not applied at all, and were using the embeddings initialized by LLM used without further updates?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Incorporation of Heterogeneous Ontology Motivation: The paper provides a strong motivation for utilizing heterogeneous ontologies, emphasizing the need for integrating multiple medical ontology systems to improve healthcare predictive models.\n\n2. Demonstration of Model Strength through Comparative Experiments: Through extensive experiments comparing OntoFAR with existing methods, the paper demonstrates the model's superiority in predictive performance, particularly on tasks using real-world medical datasets like MIMIC-III and MIMIC-IV.\n\n3. Evidence of Medical Ontology as a Key Feature in Predictive Models: The results indicate the significance of medical ontology in enhancing predictive accuracy, showing that it can serve as a critical feature in medical concept representation, effectively boosting the model's robustness and interpretability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces OntoFAR, a framework designed to enhance medical concept representations by leveraging multiple ontology graphs to enrich electronic health record (EHR) data. OntoFAR aims to address limitations in existing EHR models that often treat different ontologies (e.g., conditions, drugs) in isolation, thereby missing out on potentially valuable cross-ontology relationships. \n\nThis framework achieves improvement through a dual-d\birectional message passing mechanism: Vertical Message Passing (VMP) within individual ontology hierarchies and Horizontal Message Passing (HMP) across co-occurring medical codes in EHR visits.\n\nHowever, the proposed methodology was evaluated on a limited set of datasets and prediction tasks, showing only marginal performance gains. This raises concerns about its model generalizability. Also, the fact that it needs LLM (embeddings initialization) for training relatively small predictive models could raise the efficiency issues.\n\nAdditionally, the methodological approach lacks substantial novelty from a machine learning perspective. 
It primarily relies on modeling co-occurrences of codes within visits, which aligns closely with techniques already present in existing graph-based EHR predictive models, offering limited new contributions to the field." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Proposed model novelty and contributions\n(a) Effectiveness of LLM-based Initial Embedding: Results in Table 3 indicate that initializing embeddings with an LLM significantly improves performance. Further clarification is needed on how this initialization contributes to the overall performance of OntoFAR, as well as details on the LLM prompt design strategy. Additionally, it would be informative to assess the impact of initializing embeddings with Clinical-BERT, which is specifically trained on MIMIC medical concepts.\n\n(b) VMP and HMP Design and Interpretation: VMP applies an established concept, and HMP seems to rely on a co-occurrence-based graph attention mechanism, which is a pre-existing technique. Although combining these two approaches appears to be a central contribution of the paper, it is unclear if HMP’s co-occurrence-based construction fully leverages inter heterogeneous medical ontologies. A more explicit discussion is needed on whether combining various ontologies in this way can genuinely contribute to model performance. (Author mentioned co-occurrence based model is utilization of \"inter ontology\".) \n\n(c) Co-occurrence and Predictive Model: At the visit level, co-occurrence information may already be incorporated within the predictive model itself. If this is the case, it is unclear what additional benefits the medical concept encoder provides, even in Table 3 where HMP is highlighted. This rationale could benefit from further elaboration.\n\n2. Limitations in Experiment Setup and Dataset Diversity\n(a) Lack of Diversity in Predictive Models: Table 2 evaluates different medical concept encoders with transformer as the predictive model, yet same experiment results for RETAIN and TCN, which are shown in Table 1, are not included. Including similar results for RETAIN and TCN would provide a more comprehensive assessment of the model’s generalization capabilities.\n\n(b) Limited Task Scope: This paper primarily focuses on a single task, sequential diagnosis prediction for the next visit. Expanding the evaluation to additional tasks would better demonstrate the generalizability and broader applicability of the proposed representations.\n\n(c) Dataset Diversity: The experiments are conducted solely on the MIMIC dataset, which limits insights into the model’s robustness across datasets. Testing the model on additional datasets would strengthen evidence of its generalizability.\n\n3. Model Evaluation and Performance\n(a) Usage of Embeddings: Clarification is needed on whether the embeddings generated by the medical concept encoder are fixed or serve solely as initial embeddings. For example, GRAM, which serves as a baseline, uses an end-to-end approach with predictive model. Is OntoFAR primarily used to provide only initial embeddings for code representation?\n\n(b) Marginal Improvement in Performance: The proposed model demonstrates only marginal performance gains, which makes it difficult to establish a clear advantage over existing approaches. This is especially evident when LLM embedding initialization is excluded, where the performance improvement seems negligible." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. What is the difference between the proposed work and the KAMPNet [2]?\n2. The last tested baseline is HAP (2020) and there are many following works during the last four years why they are not included.\n\n\n**References**:\n1. Yang K, Xu Y, Zou P, et al. *KerPrint: local-global knowledge graph enhanced diagnosis prediction for retrospective and prospective interpretations*. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(4): 5357-5365.\n2. An Y, Tang H, Jin B, et al. *KAMPNet: multi-source medical knowledge augmented medication prediction network with multi-level graph contrastive learning*. BMC Medical Informatics and Decision Making, 2023, 23(1): 243.\n3. Ye M, Cui S, Wang Y, et al. *Medpath: Augmenting health risk prediction via medical knowledge paths*. Proceedings of the Web Conference 2021. 2021: 1397-1409." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. **Novel Idea with Cross-Ontology Integration**: \n\nThe proposed new method is able to capture relationships between different medical code types, enhancing representation.\n\n2. **Robust Embedding Initialization**: \n\nIt is interesting and effective to leverage LLMs for enhanced concept embedding with external knowledge.\n\n3. **Data Insufficiency Resilience**: \n\nThe author also performs additional experiments to prove that the proposed method maintains strong performance even with limited data availability." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes OntoFAR, a framework that enhances EHR predictive modeling by integrating multiple medical ontologies. It introduces dual-dimensional message passing (vertical and horizontal) to enrich medical concept representations and uses LLMs for embedding initialization. The approach is validated on MIMIC-III and MIMIC-IV datasets, demonstrating superior performance over baselines and robustness in data-limited scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**Substantive Assessment of Weaknesses**\n\n1. **Insufficient Justification of Improvements and Potential Gaps in Related Work**:\n\nWhile the paper proposes a multi-ontology framework to enhance EHR predictions, which is a well-explored domain, the authors' claims regarding the uniqueness and superiority of their approach are not convincingly substantiated. Integrating knowledge graphs (KGs) to improve EHR prediction is an established area with significant recent advancements. While the paper notes some limitations in current methods, it does not ensure that these criticisms translate into performance gains over state-of-the-art approaches. 
Notably, the comparison set lacks recent, relevant KG-based EHR prediction methods such as KerPrint [1], KAMPNet [2], and MedPath [3]. \n\nIn particular, KAMPNet [2] presents a multi-source and multi-level graph framework similar in concept to the proposed OntoFAR, suggesting an overlap that should be clarified. To strengthen the paper, I recommend including these contemporary works as baselines to provide a comprehensive comparison. Additionally, a detailed discussion explaining how OntoFAR differs from and advances beyond KAMPNet’s multi-level graph strategy would be essential to highlight its distinct contributions.\n\n**References**:\n1. Yang K, Xu Y, Zou P, et al. *KerPrint: local-global knowledge graph enhanced diagnosis prediction for retrospective and prospective interpretations*. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(4): 5357-5365.\n2. An Y, Tang H, Jin B, et al. *KAMPNet: multi-source medical knowledge augmented medication prediction network with multi-level graph contrastive learning*. BMC Medical Informatics and Decision Making, 2023, 23(1): 243.\n3. Ye M, Cui S, Wang Y, et al. *Medpath: Augmenting health risk prediction via medical knowledge paths*. Proceedings of the Web Conference 2021. 2021: 1397-1409." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. What's the base model in Figure 3?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper addresses the crucial real-world issue of diagnosis prediction. They effectively integrate multiple ontologies to enhance predictions and resolve alignment challenges among different ontologies.\n2. The authors present comprehensive experimental results from various perspectives, including many ablation studies and additional analyses that deepen the understanding of their findings.\n3. The figures and tables in the paper are well-designed and contribute significantly to clarifying the framework and interpreting the results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose a framework named OntoFAR for enhancing medical concept representation in EHRs by fusing multiple medical ontologies. It enables message passing both vertically and horizontally to capture richer cross-ontology relationships. OntoFAR constructs a unified Meta-KG initialized with embeddings from pre-trained language models, and effectively integrates medical concept information across ontologies. Evaluations on MIMIC-III and MIMIC-IV datasets demonstrate OntoFAR’s superior predictive performance and robustness, especially in data-limited scenarios, over existing EHR representation methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The comparison with the baselines does not seem to be fair. The proposed framework OntoFAR utilizes the GPT text embedding model for the embeddings of medical concepts, which is a much more powerful model than those used by the baselines. The ablation studies presented in Table 3 show that removing this part (w/o LLMs) results in performance on par with the baselines. This somehow suggests that OntoFAR might not truly outperform the baselines without the advantage of using this more powerful embedding model for a fair comparison.\n2. The baselines used in the paper are ranging from 2017 to 2020, which are kind of outdated. It's better for the authors to consider more recent baselines, such as SeqCare [1] and other studies mentioned in the related works section (e.g., GraphCare, MedPath, RAM-EHR).\n3. The notations in the method section are overly complex and could be much simplified. Many of the notations currently used are not essential.\n\nTypos and formats:\n- The referencing style throughout the paper is not correct; it should use parentheses (i.e., \\citep{} instead of \\cite{}).\n- \"Figure 3\" in line 442 should be \"Table 3\"\n\n[1] Xu, Yongxin, et al. \"Seqcare: Sequential training with external medical knowledge graph for diagnosis prediction in healthcare data.\" Proceedings of the ACM Web Conference 2023. 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why are OpenAI off-the-shelf LLMs used when there are many other open-source LLMs available? How sensitive is the performance to the quality of the embeddings from the LLM?\n* How does your model compare against ADORE?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* A unified framework for incorporating multiple ontologies where each ontology can have different hierarchical structures.\n* Extensive experiments using different graph backbones (GAT and HAT), different diagnosis prediction models (transformer, RETAIN, and TCN), and 2 datasets (although there is some overlap between the two datasets).\n* A case study to demonstrate how the hierarchical and co-occurrence codes can help learn a better embedding representation." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new unified framework that incorporates hierarchical information from multiple ontologies to augment patient representation in electronic health records. The idea is by incorporating multiple ontologies, the model can leverage cross-ontology relationships to fully leverage existing medical knowledge bases. The framework is evaluated against various baselines on two different datasets." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper fails to mention and benchmark against ADORE (Adaptive Integration of Categorical and Multi-relationalOntologies with EHR Data for Medical Concept Embedding by Cheong et al. 2023) which incorporates a multi-relational medical ontology, SNOMED-CT which combines medications and diagnoses into a single representation. \n* There are different knowledge bases such as SNOMED-CT, CCS, and several others even for the diagnosis. Is there a reason why only one knowledge base is explored for diagnosis and/or medication (SNOMED-CT works for medication as well)? \n* Only a single downstream task is benchmarked and OntoFAR is introduced as being beneficial for a variety of tasks. How does the embedding perform on other tasks like mortality prediction or readmission prediction for either of the datasets (it does not need to be both)?\n* Given MIMIC-III and MIMIC-IV share the same dataset, it would be helpful to benchmark against something that is likely to have different patients. eICU is a good example of a potential open-source dataset (there is some shared with MIMIC but there is also some outside ones). \n* The methodology section is quite dense and a bit hard to parse especially when trying to ascertain how information is shared across the different ontologies. It would be helpful to provide an example using ATC and ICD-9 hierarchy. Based on Figure 1, GAT or HAT are the mechanisms for sharing information across the same concept level across ontologies but this isn't made explicit.\n* The citation style is incorrect, you should swap it to \\citep as the default instead of \\cite." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Authors need to clarify the meaning of “Pr” in equation 1.\n\n- Additional clarification on how to aggregate clinical events embeddings from the matrix “Z” to obtain a single visit representation is needed (lines 333-335).\n\n- In line 442, the authors likely mean “Table 3” instead of “Figure 3”. Similarly, in line 500, they might intend to refer to “Right” instead of “Left”.\n\n- The authors should describe the elements in Fig. 6, such as the meaning of the nodes based on their color, to enhance understanding of the figure’s components." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Originality: The OntoFAR framework introduces a novel approach to integrating multiple medical ontology graphs, enabling both horizontal and vertical message passing across these ontologies. 
This method of cross-ontology relationship exploitation is innovative in its bidirectional propagation mechanism, which is distinct from existing works that typically focus on single ontology systems or unidirectional information flow.\n\nQuality: The paper is methodologically rigorous, presenting a clear and systematic approach to the multi-ontology integration problem. The proposed framework, OntoFAR, is well-constructed with detailed descriptions of its components.\n\nSignificance: The significance of this work lies in its potential to enhance the accuracy and robustness of predictive healthcare models by leveraging a richer representation of medical concepts derived from multiple ontologies." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces OntoFAR, an innovative framework designed to enhance the representation of medical concepts by integrating multiple medical ontology graphs. These graphs typically structure medical knowledge hierarchically and relate it to the medical codes used in electronic health records (EHRs). Current methods are limited by their inability to cross-reference information across different ontological and are restricted to using relationships within the same ontology. OntoFAR overcomes the limitations of previous approaches by fusing multiple ontologies. This is achieved through both vertical and horizontal message passing—vertical across different levels of the ontology hierarchy and horizontal across co-occurring concepts within EHR visits." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The main weakness of the paper lies in its evaluation:\n\n- The approach is evaluated solely on the task of sequential diagnosis prediction, which restricts the assessment of OntoFAR's effectiveness. To explore the benefits of integrating multiple medical ontology graphs for different types of medical concepts, the approach should be tested on a variety of tasks that utilize diverse concepts.\n\n- The experiments are conducted on MIMIC-III and MIMIC-IV, which originate from the same healthcare provider (Beth Israel Deaconess Medical Center in Boston). Additionally, both databases contain overlapping admissions from 2008 to 2012, which limits the diversity of the data used in the study. Evaluating the approach in other datasets is essential for understanding its applicability and robustness across different settings.\n\n- When comparing the performance of OntoFAR with other existing medical ontology structure encoders, authors should also indicate the size of each model (number of trainable weights). The observed performance increase for OntoFAR may be due to an enlargement of the model relative to other methods, rather than its effectiveness in learning superior medical concept representations.\n\n2) In general, the writing requires significant improvements. For example, line 171: “work depicted Figure 1” should be corrected to “work depicted in Figure 1”; additionally, there are multiple typographical errors, such as the use of “massage” instead of “message” multiple times." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduced OntoFAR, a multi-ontology fusion framework to enhance medical concept representation learning in EHR models." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024ontofar,\ntitle={Onto{FAR}: Hierarchical Multi-Ontology Fusion Better Augments {EHR} Representation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y1UHa9sl2w},\nnote={under review}\n}" }, "abstract": { "value": "Medical ontology graphs, which typically organize and relate comprehensive medical concepts in a hierarchical structure, are able to map a rich set of external knowledge onto the specific medical codes observed in electronic health records (EHRs). Through the connectivity in ontologies, healthcare predictive models can utilize the ancestor, descendant, or sibling information to add supplementary contexts on medical codes, thereby augmenting expressiveness of EHR representations. However, existing approaches are limited by the heterogeneous isolation of different ontology systems (e.g., conditions vs. drugs), that different types of ontology concepts have to be learned individually, and only the homogeneous ontology relationships can be exploited. This limitation restricts the existing methods from fully leveraging the cross-ontology relationships which could substantially enhance healthcare representations. \nIn this paper, we propose OntoFAR, a framework that fuse multiple ontology graphs, utilizing the collaboration across ontologies to enhance medical concept representation. Our method jointly represents medical concepts cross multiple ontology structures by performing message passing in two dimensions: (1) vertical propagation over levels of ontology hierarchy, and (2) horizontal propagation over co-occurring concepts in EHR visits. Additionally, OntoFAR leverages the large language models (LLMs) pre-trained on massive open world information to understand each target concept with its ontology relationships, providing enhanced embedding initialization for concepts. Through extensive experimental studies on two public datasets, MIMIC-III and MIMIC-IV, we validate the superior performance of OntoFAR over the state-of-the-art baselines. Beyond accuracy, our model also exhibits the add-on compatibility to boost existing healthcare prediction models, and demonstrate a good robustness in scenarios with limited data availability. The implementation code is available at [https://anonymous.4open.science/r/OntoFAR-35D4](https://anonymous.4open.science/r/OntoFAR-35D4)" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Health Informatics", "EHR", "Diagnosis Prediction", "Healthcare Representation" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/baca4505f91312b7e6a1fd2013b60fa9410583fb.pdf" }, "presentation": null, "primary_area": { "value": "other topics in machine learning (i.e., none of the above)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "OntoFAR: Hierarchical Multi-Ontology Fusion Better Augments EHR Representation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y1iU5czYpE
Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts
main
Active
mixture of experts;load balancing;auxiliary-loss-free
foundation or frontier models, including LLMs
3;3;3;5
4;4;4;2
1;2;2;3
1;2;2;2
2;1;2;3
3.5
3.5
2
1.75
2
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The load-balancing loss brings the advantage of load balancing to the inference stage by directly affecting the routing weights? How does the proposed method behave during inference." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper is well-written and the method is clearly explained\n- Good visualizations\n- Simple approach that should be very easy to test and cheaper to compute than the conventionally used load-balancing loss" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors propose an alternative loss-free method for load balancing of experts during MoE training. Load imbalance is a critical issue in MoE training as it can lead to expert collapse or increased utilization of some experts over the others. The proposed method achieves load balancing by dynamically applying expert-wise biases on routing scores according to their recent load, avoiding interfering gradients. The added bias is designed to only affect the top-k selection without changing the routing weights for combining the selected experts.\n\nThe loss-free load balancing approach is applied to DeepSeekMoE models with sizes 1B and 3B. The authors report perplexity on the validation set vs Maximal Violation scores." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I am concerned over the validity of the claims. The empirical evaluations are very limited constrained to two DeepSeekMoE models and perplexity differences among the models and the baselines are at the level of 0.05 difference. Is this difference in perplexity significant?\n\n- The evaluation is limited to language modelling and perplexity values. It would be better to see the actual effect of the loss-free load balancing on other downstream tasks such as MMLU or GLUE.\n\n- The proposed Max Violation (MaxVio) score subtracts the mean load from the maximum load an expert receives, which highlights the worst-case scenario of load imbalance. Since MaxVio focuses on the maximum load, it can be highly sensitive to outliers. A single batch with unusually high token routing to one expert can disproportionately affect the MaxVio calculation, making it appear as though there is a significant imbalance even if most batches are well-balanced.\n\n- To make the comparison fair, it would be useful to also see the load balancing (loss) values (without back propagating) and see how the loss-free variant behaves on that score as the training continues (e.g., a plot of load balancing loss over training steps for both methods)." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed.", "Yes, Responsible research practice (e.g., human subjects, data release)" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "Please see my questions in Weakness column." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper has the following strengths:\n\n1. The proposed Loss-Free Balancing method eliminates the need for auxiliary loss, which traditionally adds undesirable interference gradients. This results in a cleaner training signal focused solely on the primary language modeling objective, potentially enhancing overall model performance.\n\n2. By dynamically adjusting biases for each expert based on recent load data, the method ensures a balanced expert load without compromising model efficiency.\n\n3. The strategy is compatible with expert parallelism, a key feature for training extremely large MoE models across multiple devices. This makes it highly suitable for scaling up model sizes while maintaining efficient load balancing.\n\n4. Unlike the Expert Choice (EC) method, which risks future token leakage, Loss-Free Balancing maintains causal constraints in language modeling, thus preserving the integrity of model generalization." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Loss-Free Balancing, a new load balancing strategy for MoEs that avoids auxiliary loss, which traditionally introduces interference gradients and hampers model performance. The motivation stems from the need to balance expert loads in MoE models to prevent routing collapse and computational overhead, issues that current auxiliary-loss methods attempt to address but with performance trade-offs. The major research question is whether a balancing strategy can maintain expert load distribution without harming model performance. The proposed method adjusts each expert's routing score using a dynamically updated bias based on recent loads, promoting balanced expert utilization without adding interference gradients. Experiments on 1B and 3B parameter MoE models trained on abundant datasets show that Loss-Free Balancing achieves better load balance and improved validation perplexity compared to traditional methods, making it a promising approach for scaling large language models while preserving efficiency and performance." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "This paper has the following weaknesses:\n\n1. I am at first astonished by the short reference list of this paper, as the authors only cited 10 papers. Clearly, this paper did a very bad job on surveying the related work, including the various auxilliary-loss-based balancing methods, the major improvement of MoEs, the current MoE-based LLMs. 
Normally, I would list a few of the works for your reference, but the authors missed too many so I do not know where to start. I would strongly suggest the authors to check other MoE-LLM papers to improve the related work part.\n\n2. The authors seemed to enlarge the figures in this paper in order to make the paper length reach 9 pages, which result in a disproportionate paper layout, and the figures look abrupt. \n\n3. The architecture design of the MoE-based LLMs are versatile. This paper only demonstrates its effectiveness on a small DeepSeek MoE model, while leaving other MoE models unvisited, such as Mistral, QWen, LLaMA-MoE, OpenMoE.\n\n4. The improvement in performance seem trivial, which challenges the motivation of this study: why do we need a loss-free balancing at all." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. On Page 4, in Formula 3, the selection and initialization process for b_i remains unclear. It would be helpful to clarify whether b_i is influenced by K or independent of it. Additionally, in the adjustment process—where each bias b_i is iteratively modified based on the load of the corresponding expert—it is unclear by what amount b_i should be increased or decreased and according to which theoretical basis this adjustment is made.\n2. On Page 8, it is stated that 'the load balance of our Loss-Free Balancing consistently improves as the computation batch size increases.' Is there an upper bound to this improvement? \n3. Is there a specific reason for selecting 9 MoE layers for testing?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This is an interesting research problem and the author aims to develop an efficient solution approach" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a Logg-free balancing strategy to solve the mixture of expert models' unbalanced expert load problem. The advantage of the approach is that it does not produce any interference gradients.\nthis paper proved that Loss-Free Balancing achieves better performance and better load balance compared with traditional auxiliary-loss-controlled load balancing strategies" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The motivation and underlying intuition for the proposed approach could be clarified further to enhance understanding.\n2. Additional experiments are recommended to demonstrate the robustness of this approach when applied across varying numbers of expert mixtures. A scalable analysis would also be beneficial.\n3. The approach would be strengthened with theoretical justification to substantiate its effectiveness." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Theoretically, can it be shown that an auxiliary loss produces interference gradients that impact model performance?\n\n2. Please clarify further how expert choice leads to future token leakage. \n\n3. Can this method be applied to a more diverse range of MoE models and for computer vision as well. The improvement over the baseline seems marginal. I have doubts about the impact of the method in improving performance." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. There is some originality in viewing the problem of load balancing from the perspective of interfering gradients. \n\n2. The problem and solution proposed are simple and easy to understand." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper aims to replace the standard auxiliary load-balancing loss in mixture of experts (MoE) models with a new load balancing strategy that does not involve additional losses. This is to ensure tokens are being routed evenly to each expert without introducing gradients that do not directly contribute to achieving the language model objective. The authors also introduce a metric to quantify the degree of load balance in a MoE model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is unclear if or how the gradients due to the auxiliary load balancing loss results in the model not achieving its training objective. \n\n2. It is unconvincing that there is a future token leakage when using expert choice. The explanation was brief and relied on Fig. 6 which does not explain how there is a break in the causal constraint of the language modeling. While some empirical evidence is reported, the evidence assumes that \"an abnormal loss drop\" must be due to a future token leakage. \n\n3. The experimental results are severely lacking. Only one model is used, DeepSeekMoE, instead of the more commonly used Mixtral or traditional Switch transformer. The improvements are marginal even if there is a significant improvement in load balancing (according to their proposed metric). \n\nMinor: in the definition for MaxVio, the $\\max_i$ is included in the numerator of fraction which does not make sense as the index i is also involved in $\\bar{Load_i}$" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose Loss-Free Balancing, a novel MoE load balancing method that dynamically adjusts expert biases based on its recent load without relying on auxiliary losses, thereby avoiding interference gradients and achieving improved model performance." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024auxiliarylossfree,\ntitle={Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y1iU5czYpE},\nnote={under review}\n}" }, "abstract": { "value": "For Mixture-of-Experts (MoE) models, an unbalanced expert load will lead to routing collapse or increased computational overhead. Existing methods commonly employ an auxiliary loss to encourage load balance, but a large auxiliary loss will introduce non-negligible interference gradients into training and thus impair the model performance. In order to control load balance while not producing undesired gradients during training, we propose **Loss-Free Balancing**, a new load balancing strategy that operates without auxiliary losses. To be specific, before the top-K routing decision, Loss-Free Balancing will first apply an expert-wise bias to the routing scores of each expert. By dynamically updating the bias of each expert according to its recent load, Loss-Free Balancing can consistently maintain a balanced distribution of expert load. In addition, since Loss-Free Balancing does not produce any interference gradients, it also elevates the upper bound of model performance gained from MoE training. We validate the performance of Loss-Free Balancing on MoE models with up to 3B parameters trained on up to 200B tokens. Experimental results show that Loss-Free Balancing achieves both better performance and better load balance compared with traditional auxiliary-loss-controlled load balancing strategies." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "mixture of experts", "load balancing", "auxiliary-loss-free" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/138f19eedd33952236974ad6aac9a9dcd545d462.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": { "value": "/attachment/43fb95f544598928a908a34dff32ffe8ef028e3d.zip" }, "title": { "value": "Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y2ch7iQSJu
Budget-constrained Active Learning to De-censor Survival Data
main
Active
Active Learning;Survival Analysis;Budgeted Constraints;Bayesian Model;Mutual Information;De-censoring Data
learning theory
1;3;3;8
4;4;4;2
2;1;2;4
2;2;2;4
1;2;2;3
3.75
3.5
2.25
2.5
2
-0.948847
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Is there a typo on the lines 211 - 215? In particular, can authors elaborate in more detail why the second equality ( between lines 212-213) holds? \n2) What does authors mean by statement: “ by omitting the first two lines we can reduce the computational complexity …” (lines 252 - 253)\n3) Can authors provide more insights into how the prior and likelihood within the BALD are selected? \n4) Can you clarify how the proposed “approach provides bounds and time complexity are asymptomatically equivalent to standard active learning methods”? (Lines 29-31)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The selected problem is very relevant and interesting for real life data applications. The paper well states the problem and aspire to provide a solution.\n- Authors propose three alternative methods to demonstrate the strength of their approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- Presented work focuses on the problem of Bayesian active learning with the learner trained on survival analysis data. \n- The number of query steps is constrained by the given budget. In particular: \n - the underlying data are right censored with the underlying latent classifier which indicates whether the patient died during the observation period;\n - the censoring corresponds to a patient dropping out of study and so after a particular time period we lose the access to the information whether the patient survived, the censoring bound is known. \n - the learner can query an unlabeled dataset and obtain partially labelled censored instances, where some time interval can be queried, e.g. data are censored up to 3 years and we query information about additional two more years about the patient. This corresponds to following up with the patient after some period; \n - the budget constraint indicates that different labels have associated different costs and information content. \n\nAuthors propose a solution based on the Bayesian Active learning by Disagreement (BALD) applied to querying a batch of given size. Authors provide a budgeted greedy algorithm and incorporate mutual information scores from the BALD-like estimate. To budget constrain problem, authors try to optimize the cost coverage over the batch. \n\nAuthors use 3 real life survival datasets to demonstrate their proposed method." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) Paper contains multiple typos, see e.g line 210. \n2) Terms are not clearly defined, e.g. Algorithm 1, line 275, what is a_BB_surv ? 
\n4) The paper is not very clearly written, there is missing description of the learner, there is missing debate on the distributional properties of the BALD model. \n5) The results in the main section seems cherry-picked, e.g. it would be good to include more points in the budget constraints rather than 0, 1, 5, 20.\n6) It would be good to provide study on the synthetic dataset to demonstrate how the proposed methods perform in the controlled setting." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. The requirement of true times as well as censoring times for all the subjects in the training data is very restrictive in survival analysis. You may argue that in active learning you just use \"complete data\" as the initial step, then the observations are biased, and the model learned from the data does not reflect the true survival model, and thus any further results based on the model are problematic.\n\n2. It seems y denotes the true survival time, but the test data have {x_Bi, y_Bi}, but not ctime_Bi cevent_Bi. It is not clear which are known and which are unknown. The English is hard to understand, too. For example, the meaning of \"the model sees only when the data points are uncensored\" (line 140, page 3).\n\n3. Survival time is often a continuous random variable, in the defined model (or procedure) in line 160, the previous defined time y is assumed to take values from {1, ..., t}. It is not clear what the notations y and t stand for.\n\n4. The notation \"I\" has multiple meaning in the context. I-oracle (line 156), unit of time (line 158, ctime_i +I), and mutual information (line 197). The other notations like the second term on the right hand side of line 200 with the one one the left side of line 207, what the w_j on the right hand of line 207 stands for, and why we have the summation of of j from 1 to k here. You may not discard or add terms arbitrarily to confuse the readers.\n\n5. The BatchBALD method was developed in Kirsch et al (2019) already. It is not clear the information entropy in line 212 is from Kirsch et al (2019) or proposed by the author. \n\n6. For survival data, the authors tried to obtain the probabilities p(y|x, w D_train) by assigning zero for those value y which below the censoring time c_b, then normalized the probabilities of those values larger then c_b (conditional on the survival time being larger than c_b). This is obvious from the conditional probability rule. From line 297, if the instance is censored at ctime_b, why p(y|x,w,D_train)=0 for all y greater than ctime_i+I. How is the interpretation of the formula in line 301?" }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The use of active learning in survival analysis is an interesting idea." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors consider the learning of survival model with budget constrained active learning. The formulation of the problem (Section 3) requires that the training dataset has the complete information of event times and censoring times, which is not a typical survival problem in which the survival times cannot be all observed for training data. The active learning is only applied to the case where the budget is used to extend the study (expend the observation period) in order to observed possible more events. This scenario is not the typical consideration in survival analysis in medical or health studies, though in reliability study in industry, in which type II or Type I right censoring are often used to collect data, it is possible to observe more events by increasing the study time. In clinical trial random right censoring is mostly the observational scheme and extending study time may not be able to \"acquire\" the information assumed in the article." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The use of de-censoring as the label in survival analysis does not have much practical usage. It could be used in \"reliability\" in industry, though. The written English is not clear. The use of words and punctuation like \"and\" and \",\" make the sentences hard to understand. For example, see lines 269 and 282." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Assumptions: \n- for clarity, I think it would be better to present the assumptions in a formal manner: Assumption 1: (...), Assumption 2: (...)\n- for the assumption that censoring is independent of the features, it is common in survival analysis but can they the authors discuss this more in details what it implies in practice by taking an example ? \n2. The measure MAE-PO should be described in the main text. \n3. Line 388, the authors describe clearly how the real dataset is \"censored\". However, I think it's a pity that the authors don't discuss enough the realistic aspect of having the selected datasets censored (and partially labeled), perhaps a realistic situation for a dataset could be put forward. 
\n\nMinor comments: \n- line 139: **, where** L is the size of the training data.\n- line 165: 3.1 INITIAL ASSUMPTIONS -> there is no 3.2 section, so maybe the authors could use \\paragraph instead of \\subsection\n- it may be personal, but I don't like the notations $ctime_i$ and $cevent_i$; maybe the authors can use a shorter version like $c^t_i$ and $c^e_i$ (it is just a suggestion)" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- This paper is clear, and the domains are well introduced (even though the paper is at the intersection of survival analysis and active learning)\n- The results are convincing, and the authors have made the effort to compare themselves with “naive” methods (sanity checks part), even though there is no work dealing with this case." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on a problem of great practical interest, where the dataset is partially labeled and the data is censored. It proposes a budgeted learning (a subfield of active learning) method in the context of survival analysis.\nIt's a niche subject, but the method is very new and original, and I am convinced it can be used for practical purposes. \n\nI am not familiar with either active learning or survival analysis (so I'll put 2 on the confidence score)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- I have some minor comments; they are more a question of presentation than content." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1) Can you please clearly list the main claimed contributions of the paper? \n2) There are other works in the literature looking at applying BALD to right-censored data (see e.g., https://arxiv.org/pdf/2402.11973). How does the proposed approach relate to these works? \n3) Please clarify the theoretical grounds for \"omitting the first two lines\" of the algorithm of (Khuller et al., 1999) and for setting the probabilities to zero and renormalizing in the adaptation of BatchBALD to survival data. \n4) Are the censored times assumed to be known a priori before querying? If so, how limiting is that assumption in terms of the applicability of the proposed approach?\n5) How can the proposed approach be generalized beyond discretized time bins? \n6) The results in Table 1 seem to be inconsistent with Figures 1 and 2. If one considered a vertical line at Budget = 10 (or 20, whichever it is), shouldn't the values and relative order of the approaches be consistent with Table 1?\n7) It is unclear how the authors transition from the eq. of the mutual information at the beginning of Section 4.1 to the computation of its right-most term 2 equations below.
The right term is an expected entropy, while 2 equations below only the entropy is considered. Can you clarify?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1) The **problem addressed in this paper is relevant** and the existing research on the topic of AL for survival data is indeed limited. \n2) The proposed approach is **an original combination of existing ideas**, namely budgeted learning and BatchBALD for AL, with the latter being \"tweaked\" to work with survival data." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an active learning (AL) approach for survival analysis data, where the AL is constrained to a budget. The proposed approach is essentially an extension/adaptation of the BatchBALD algorithm (Kirsch et al., 2019) to account for survival data and a limited budget. The main motivation and application comes from the medical domain, where the proposed approach aims to identify which patients to follow up with to maximize the mutual information criterium of BatchBALD. The empirical results based on 3 real-world datasets in the medical domain show that the proposed approach can outperform standard BatchBALD, as well as 5 additional naive baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1) The **contributions are not very clear**. The authors should clearly state which contributions they are claiming. Is the extension of BatchBALD to a budget-constrained setting the central contribution? If so, why is the theoretical analysis included in the appendix rather than in the main text? Is the adaptation of BatchBALD to survival data the central contribution? Both? The narration is ambiguous with respect to this, thus unnecessarily increasing the difficulty of assessing novelty.\n2) The **novelty is unfortunately low**. It is not clear how the proposed approach for accounting for the budget constraint is different from the existing approaches for budgeted learning from the literature. The adaptation of BatchBALD proposed for survival data is also very incremental and not adequately grounded. The authors also mention that \"so far no method has been developed looking at\nbudgeted learning with survival data\" - however, there are other works in the literature looking at applying BALD for right-censored data (see e.g., https://arxiv.org/pdf/2402.11973). How does the proposed approach relate to these works? \n3) There are **several concerns regarding the soundness** of the proposed approach. The extension of BatchBALD to a budget-constrained setting is not clearly described and is not adequately grounded. The proposed approach is an adaptation of the algorithm proposed in (Khuller et al., 1999) \"by omitting the first two lines\". However, it is not clear what lines the authors refer to and, most importantly, what are the theoretical grounds for doing so. Similarly, the adaptions of BatchBALD for survival data are essentially presented as \"algorithmic tweaks\" (e.g., setting probabilities to zero and renormalizing) without a proper theoretical justification. Lastly, it is not clear how the authors handle the fact that the queried instances can still be censored - are the censored times assumed to be known a priori before querying? 
If so, how limiting is that assumption in terms of the applicability of the proposed approach? \n4) The **limited application scope** of the proposed approach is also a topic of concern. Although I agree with the authors that AL for survival data is a topic worth investigating, and the literature is indeed lacking on this, the application scope is, in my opinion, broader than the medical domain application presented in the paper. This limits the target audience of the paper quite a lot. I would encourage the authors to consider revising their presentation to broaden the scope of application to other domains as well. For example, the authors mention applications in finance and engineering in the abstract - it would be interesting to discuss those as well. Lastly, the authors consider only approaches where time is discretized into bins (they use Multi-Task Logistic Regression as the underlying survival model), and the proposed adaptation of BatchBALD to handle survival data in Section 5 relies on this assumption, which can be quite limiting. This should be discussed in the paper. \n5) The presented **claims are not adequately supported by empirical evidence**. Looking at the results in Table 1, the improvements over standard BatchBALD are very marginal. However, the text claims that \"BBsurv outperforms other algorithms when budget is equal to 20 across all 3 real world datasets\", which does not seem to always be the case. In fact, even \"Entropy\" seems to perform quite comparably - did the authors perform a statistical significance test? Confusingly, the Table 1 caption states that \"Budget = 10\", which is inconsistent with the text. Moreover, the results in Table 1 seem to be inconsistent with Figures 1 and 2. If one considered a vertical line at Budget = 10 (or 20, whichever it is), shouldn't the values and relative order of the approaches be consistent with Table 1?\n6) The **presentation can be significantly improved**. There are several typos, poorly constructed sentences, incorrect grammar usage, sentences that are too informal for scientific paper, etc. I strongly encourage the authors to carefully revise the writing and overall presentation of the paper. Other aspects of the presentation, such as mathematical derivations, can be significantly improved. For example, the authors should include numbers for the equations. Similarly, it is unclear how the authors transition from the eq. of the mutual information at the beginning of Section 4.1 to the computation of its right-most term 2 equations below. The right term is an expected entropy, while 2 equations below only the entropy is considered. Can you clarify?" }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We develop a method for doing a more general form of active learning which accounts for the budget given and the amount of label information we can get, on survival datasets and explore theoretical and experimental results in this domain." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024budgetconstrained,\ntitle={Budget-constrained Active Learning to De-censor Survival Data},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y2ch7iQSJu},\nnote={under review}\n}" }, "abstract": { "value": "Standard supervised learners attempt to learn a model from a labeled dataset. 
Given a small set of labeled instances, and a pool of unlabeled instances, a budgeted learner can use its given budget to pay to acquire the labels of some unlabeled instances, which it can then use to produce a model. Here, we explore budgeted learning in the context of survival datasets, which include (right) censored instances, where we know only a lower bound c_i on that instance’s time-to-event t_i. Here, that learner can pay to (partially) label a censored instance – eg, to acquire the actual time t_i for an instance [eg, go from (3yr, censor) to (7.2yr, uncensored)], or other variants [eg, learn about 1 more year, so go from (3yr, censor) to either (3.2yr, uncensored) or (4yr, censor)]. This serves as a model of real world data collection, where followup with censored patients does not always lead to complete uncensoring, and how much information is given to the learner model during data collection is a function of the budget and the nature of the data itself. Many fields, such as medicine, finance, and engineering contain survival datasets with a large number of censored instance, and also operate under budget constraints with respect to the learning process, thus making it important to be able to apply this budgeted learning approach. Despite this importance; to our knowledge no other work has looked into doing this. We provide both experimental and theoretical results for how to apply state-of-the-art budgeted learning algorithms to survival data and the respective limitations that exist in doing so. Our approach provides bounds and time complexity theoretically equivalent to standard active learning methods. Moreover, empirical analysis on several survival tasks show that our model performs better than other potential approaches that might be considered on several benchmarks." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Active Learning", "Survival Analysis", "Budgeted Constraints", "Bayesian Model", "Mutual Information", "De-censoring Data" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/6dc6c98e195c0c119f1be212c2cc4a0ece30d58d.pdf" }, "presentation": null, "primary_area": { "value": "learning theory" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Budget-constrained Active Learning to De-censor Survival Data" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y3CdSwREZl
MINER: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models
main
Active
MLLMs;neuron analysis;interpretability
foundation or frontier models, including LLMs
3;5;5;5;6
4;3;4;3;3
3;3;2;3;3
2;3;3;2;3
1;3;3;2;3
4.8
3.4
2.8
2.6
2.4
-0.666667
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "1. Whether the analysis in this paper depends on specific models or not? If so, is it accurate to claim that it is the characteristic of MLLM? I wonder if the performance of FFN in different MLLMs plays a similar role." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation is meaningful and interesting. With the increasing focus on MLLM, the underlying logic and pattern of each component of MLLM is important, and it is also under-explored in current research.\n2. The proposed pipeline is clear and easy to follow. As shown in this paper, the framework mainly consists of 4 components or steps: separate modalities, calculate importance scores, aggregate importance scores, and select modality-specific neurons.\n3. The experiments seem solid. Extent experiments are conducted to answer the four questions mentioned in this paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a framework named MINER to understand the underlying pattern of modality-specific neurons in multimodal large language models (MLLM). The framework mainly consists of 4 components or steps: separate modalities, calculate importance scores, aggregate importance scores, and select modality-specific neurons. Experiments are conducted to analyze the existence and effectiveness of the modality-specific neurons." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The writing of this paper is very confusing, even with very low-level mistakes. It seems the authors do not understand what is important and the meaning of the formulation. For instance, in the lines 203 and 204, what does H mean? There is even no H in the equation. Besides, there are some inconsistent formats, such as fig. xx, and Figure xxx, which makes this paper not well-prepared for submission.\n2. Since this paper focuses on FFN of MLLM, it is better to show the structure of the focused FFN, making it clearer to understand what the purpose of this paper is.\n3. The analysis in the experiments is confusing. For example, in line 403 and 404, MMLU (0.31)? Also, in line 407, mentioned in Ob3? I do not understand what the paper means.\n4. Whether the analysis in this paper depends on specific models or not? If so, is it accurate to claim that it is the characteristic of MLLM? I wonder if the performance of FFN in MLLM plays a similar role." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "**Questions and Suggestions for the Authors:**\n\n1. Could you add a definition for \\( H_L \\) and clarify what is meant by the “hidden activation function”? This addition would enhance the readability of the paper. Additionally, Table 3 currently appears a page before Table 2, which disrupts the logical flow.\n\n2. Please clarify the implementation details for Stage I (modality separation) in your proposed method. Refer to the outlined weaknesses for specific questions and concerns. Additionally, is there supporting empirical evidence for this assumption in the existing literature?\n\n3. Please consider adding a section detailing the computational requirements of your approach. Address Weakness 4 by including specifics such as GPU hours, memory demands, and a comparison with other commonly available explainability methods. This will provide the readers with a clearer understanding of the computational resources needed for your method.\n\n4. Please conduct an ablation study using datasets of varying sizes to analyze whether the neurons are a subset of the original set. Refer to Weakness 3." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The proposed methodology has several notable strengths:\n\n1. It explores the relatively unexplored area of identifying modality-specific neurons in Multimodal Large Language Models (MLLMs), offering insights into the internal workings of these complex models.\n\n2. The authors provide comprehensive results across multiple MLLM architectures and datasets, accompanied by extensive ablation studies and in-depth discussions, demonstrating the robustness and applicability of their approach." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors introduce a novel method, named MINER, to identify Modality-Specific Neurons in Multimodal Large Language Models (MLLMs). This approach comprises four main stages: 1. **Modality Separation**: The method begins by assuming that information within each modality's token set predominantly stays within that set, implying that cross-attention between modalities plays a minor role in MLLM tasks. 2. **Importance Score Calculation**: Instead of assessing the importance of individual activations for each sample, the method aggregates activations across tokens sets within a modality. The authors propose five distinct aggregation techniques for this step. 3. **Sample-Level Aggregation**: These aggregated importance scores are combined to generate a sample-level importance score. 4. **Neuron Selection**: Finally, neurons are ranked based on their importance scores, allowing for the selection of modality-specific neurons. 
This method offers a systematic approach to pinpoint neurons that are critical to individual modalities within MLLMs, advancing our understanding of how these models handle multimodal information.\n\nThe authors show the efficacy of their method on multiple tasks and multiple datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "In my view, there are several weaknesses in the paper's approach:\n\n1. **Modality Separation Assumption**: The assumption of modality separation, as defined in the paper, is overly generalized and lacks robustness.\n\n- **Dataset and Task Specificity**: Modality separation can only be reasonably argued for certain datasets and tasks. For example, it might be valid for specific questions in Visual Question Answering (VQA) but is unlikely to hold across all question types. Moreover, it seems especially problematic to assume modality separation for captioning tasks and datasets, where contextual understanding of the entire image is essential.\n\n- **Dependence on Output Context**: Another critical consideration is whether modality separation should be treated as a function of the output. In captioning tasks, the entire image often correlates with the output, as generating captions requires broader contextual information. In contrast, the input (or prompt) for caption generation may hold less significance, further challenging the assumption of modality separation. Could you, if possible, conduct an ablation study to assess the significance of the output modality in the importance calculation?\n\n- **Question on Importance Calculation**: This brings up a key question regarding the calculation of neuron importance: is the importance score determined solely based on the input, or does it also consider the model's output? If not, would including the model’s output in this calculation lead to different importance scores?\n\n2. *Missing Definition*: H_L is not defined in section 3.2. No reference to it is present in either equation 2 or 3 on the same page.\n\n\n3. **Dataset Size Impact**: Table 2 illustrates that different datasets yield different neuron importance scores. However, it's not clear if varying the quantity of data within a single dataset also impacts these scores. For example, if using 100% of the dataset identifies important neurons \\(N = \\{n_1, n_2, n_3, n_4\\}\\), would using only 50% or 33% of the dataset result in a different set of important neurons, or would it merely be a subset of \\(N\\)? If the important neurons change, are they still important neurons, or is there an underlying cause that we are not able to see?\n\n\n4. **Computation Requirements**: The paper lacks an analysis of the computational resources needed to determine neuron importance. Since high-performance compute resources can be limited, an estimate of the required computational effort is essential to evaluate the feasibility of this explainability method." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed."
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "### **Detailed Comments and Suggestions for Improvement**\n\n**Major Comments**\n\n1. **Insufficient Validation of Modality-Specific Neurons (MSNs)** \n The deactivation experiments indicate that identified neurons are crucial for model performance, but this does not confirm they are modality-specific. If a neuron is equally essential across all modalities, its deactivation would similarly result in performance degradation without necessarily being MSN. Additional experiments should be introduced to verify the modality specificity of these neurons.\n\n2. **Need for Cross-Modality MSN Overlap Analysis** \n Given the inherent interactions between modalities, it would be valuable to analyze and present the extent of overlap among MSNs across different modalities. Showing the percentage of MSNs significant to multiple modalities and their distribution across layers would clarify the degree of cross-modality dependency.\n\n3. **Lack of Practical Implications for MSN Discovery** \n The discovery of MSNs is conceptually valuable, but the paper lacks a discussion on how these findings could practically benefit MLLM design or solve current challenges. Possible applications would increase the paper’s practical relevance.\n\n4. **Potential Plagiarism in Figure 1** \n Figure 1 (left) closely resembles Figure 1 of (Huo et al., 2024) but lacks proper citation. This raises concerns about originality.\n\n5. **Explanation Gaps for Key Concepts** \n Terms like **semantic probing** and **semantic telomeres** are introduced in the introduction without clear definitions, which can confuse readers. Providing definitions for these terms at the outset would improve clarity.\n\n6. **Issues with Experiment Design and Metrics** \n - **Typo**: There’s a typographical error on page 3, line 155 (“={text.”), which likely should be “={text, image}.”\n - **t-SNE vs. UMAP**: The paper utilizes t-SNE to compare embeddings across layers, but given t-SNE's stochasticity, UMAP could offer more consistent comparisons across layers, maintaining better global structure.\n - **Inconsistent Table Order**: Table 3 appears before Table 2, disrupting the flow and causing potential confusion.\n\n**Minor Comments**\n\n1. **Explanation for Confirmatory Experiment 5.5 (Hypothesis 1)** \n Experiment 5.5 is referenced as supporting Hypothesis 1, but the link between the experiment’s findings and the hypothesis lacks clarity. A more detailed explanation would strengthen this connection.\n\n2. **Typographical Issues in Figures and Captions** \n - **Figure 6 Text Size**: The text in Figure 6 is too small, making it difficult to interpret. Larger font sizes would enhance readability.\n - **Axis Titles Missing in Figure 6(a)**: Figure 6(a) lacks x-axis titles, complicating interpretation. Clear labeling of axes would improve understanding.\n - **Overcrowded Plot in Figure 6(a)**: Multiple experimental results are displayed within a single figure, which is unconventional and may confuse readers. Separating these plots or including distinct legends for each result would improve clarity.\n - **Inadequate Captions for Figures 6(c), 6(d), and 6(e)**: These figures lack proper captions, making it challenging to understand the displayed results without referring back to the main text.\n\n3. 
**Potential Typographical Error in Table 4** \n In Table 4, the result for “Adaptive” under “Mean 1/2 and Max 1/2” is listed as 0.17, which appears unusually low and could be a typo. Clarifying this result would help in understanding the significance of the presented values.\n\n4. **Experiment Suggestion: Varying MSN Deactivation Levels** \n It would be informative to present results for different levels of MSN deactivation (from 0.5% to 5% at 0.5% intervals). This could reveal the relationship between neuron deactivation levels and performance, providing further insights into the impact of MSNs." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- **Clarity and Organization**: The paper is well-structured, with a clear flow that makes complex concepts accessible.\n- **Novelty**: The introduction of a framework specifically for modality-specific neuron detection in multimodal models is innovative.\n- **Interesting Observations**: The identified phenomena, particularly semantic probing and semantic telomeres, add valuable insights to MLLM research.\n- **Thorough Ablation studies**: The ablation studies showcase the impact of different strategies when various importance metrics are employed." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents **MINER**, a novel framework designed to identify Modality-Specific Neurons (MSNs) within multimodal large language models (MLLMs). Through a structured, four-stage process, MINER attempts to localize neurons specific to each modality, aiming to uncover patterns in how different modalities interact within the MLLM framework. Key claims include the framework’s ability to select MSNs that, when deactivated, result in substantial performance drops, suggesting their importance in maintaining multimodal functionality." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Validation of MSN Specificity**: The evidence provided to confirm that the identified neurons are genuinely modality-specific (i.e., MSNs) is insufficient.\n- **Implications Remain Unclear**: The practical implications and potential applications of identifying MSNs are not well-discussed, limiting the paper’s impact.\n- **Data Presentation and Clarity**: Several figures, particularly Figure 6, are challenging to interpret due to their small size and lack of clear labels." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you expand some of the terminological definitions, especially \"semantic probing\" and \"semantic telomeres”?\n- Could you compare MINER to other neuron-selection frameworks (including from unimodal domains) and clarify where your method reduces perceived gaps in benchmarking or explainability?\n- Would it be possible to improve the legibility of all figures (especially Figs 5 and 6)" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The MINER framework is relatively new, although information theoretic (e.g., mutual information-based) methods for probing arbitrary multimodal networks already exists so this is not as fundamentally revolutionary as it is evolutionary.\n- The experiments are extensive and detailed, with Qwen2-VL and Qwen2-Audio models and multiple datasets. \n- The identification of \"semantic probing\" is not entirely new but “semantic telomeres” may be, potentially opening new avenues in XAI methods and may be generalizable across different modalities. It is highly advised to rename ‘telomeres’, though." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a method (MINER) for identifying modality-specific neurons (MSNs) in multimodal LLMs. There are four stages to this process, and it is meant to improve the explainability (or transparency) of these large models. As few as 2% of neurons seem to play a key role in the multimodal connections. The authors invoke telomeres (!) to explain how modality-specific neurons interact across layers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The model should apply to modalities other than vision and audio, but the evaluation does not extend beyond these modalities. This is relatively minor, of course, since a paper that introduces a new method is not required to exhaust all empirical possibilities.\n- The \"modality separation\" approach assumes minimal cross-modal information flow, which is a major oversimplification. It is not clear from an initial review to what extent removing this brick from the base of the theoretical structure collapses the rest. To a large extent this weakness is touched on in the first limitation in Sec 6.1, but it is not _addressed_. The natural interdependence between modalities, throughout the layers, should be better explained or explored. This is a more major limitation.\n- It would be preferable to connect the concepts of ‘semantic telomeres’, for example, to real-world problems. What is the practical impact of this work?\n- Other interpretability techniques for multimodal LLMs are not evaluated whatsoever." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper is the first to investigate the concept of modality-specific neurons (MSNs) in MLLMs. This is a significant and novel contribution that opens up a new avenue of research in the field of interpretability and explainability for multimodal models.   \n\n2. This approach is shown to be effective in identifying MSNs that significantly impact model performance. For example, deactivating just 2% of the MSNs identified by MINER significantly reduce MLLMs performance (0.56 ∼ 0.24 ↓ for Qwen2-VL, 0.69 ∼ 0.31 ↓ for Qwen2-Audio).\n\n3. The authors uncover some impactful trends such as finding that different modalities tend to converge in the lower layers of the MLLM. They also identify two phenomena, \"semantic probing\" and \"semantic telomeres\", which describe how audio and special modality tokens, respectively, interact with text modality tokens." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel concept of modality-specific neurons (MSNs) within multimodal large language models (MLLMs). These MSNs are specialized neurons that play a crucial role in processing information from a particular modality, such as image or text. The authors propose a framework named MINER to identify and analyze these MSNs, utilizing a unique token-level analysis pipeline to gauge the importance of neurons for specific modalities. This approach diverges from traditional sample-level methods and offers a more fine-grained understanding of how MLLMs process multimodal information.   \n\nThrough extensive experimentation on a variety of MLLMs and multimodal benchmarks, the authors demonstrate the significant impact of MSNs on model performance. Notably, they show that even a small reduction in MSNs can lead to noticeable performance drops, highlighting the critical role these neurons play. The paper also reveals intriguing phenomena such as \"semantic probing,\" where audio modalities seem to reach out to text modalities, and \"semantic telomeres,\" where special tokens anchor themselves to text modalities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. A potential weakness of the paper lies in its approach to modality separation. The authors acknowledge the challenge of completely separating modalities due to the attention mechanism blending information across modalities. They hypothesize that information largely remains within its modality set as it passes through the LLM layers, with minimal transfer between modalities. They assume the attention mechanism functions only within distinct token sets, preventing information exchange between modalities. However, this assumption oversimplifies the attention mechanism, which is designed to capture dependencies regardless of modality. By limiting information flow, the authors may artificially constrain the model.\n\n2. The authors restrict their analysis to neurons in the FFN module of the MLLMs, based on prior work highlighting the role of FFN in knowledge encoding. 
However, other modules, such as the attention layers, might also contain modality-specific neurons or contribute to modality-specific processing in ways not captured by this analysis.\n\n3. The experiments primarily focus on image and text or audio and text combinations. Exploring more diverse combinations, such as image and audio or all three modalities together, could reveal a more complete picture of MSN interactions." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024miner,\ntitle={{MINER}: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y3CdSwREZl},\nnote={under review}\n}" }, "abstract": { "value": "In recent years, multimodal large language models (MLLMs) have significantly advanced, integrating more modalities into diverse applications. However, the lack of explainability remains a major barrier to their use in scenarios requiring decision transparency. Current neuron-level explanation paradigms mainly focus on knowledge localization or language- and domain-specific analyses, leaving the exploration of multimodality largely unaddressed. To tackle these challenges, we propose MINER, a transferable framework for mining modality-specific neurons (MSNs) in MLLMs, which comprises four stages: (1) modality separation, (2) importance score calculation, (3) importance score aggregation, (4) modality-specific neuron selection. Extensive experiments across six benchmarks and two MLLMs show that (1) deactivating ONLY 2% of MSNs significantly reduce MLLMs performance (0.56 to 0.24 for Qwen2-VL, 0.69 to 0.31 for Qwen2-Audio), (2) different modalities mainly converge in the lower layers, (3) MSNs influence how key information from various modalities converges to the last token, (4) We observed two intriguing phenomena, semantic probing and semantic telomeres." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "MLLMs", "neuron analysis", "interpretability" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/aeb9ea764665b672b9265173bb463ac2d9468c0b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." 
}, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/ceaf38745ec3df6aa593665bb8705cb7bb30a73a.zip" }, "title": { "value": "MINER: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y3jJmrKWQ4
Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by LLMs
main
Active
LLM-as-a-Judge;LLM evaluators;position bias;length bias;verbosity bias;pairwise comparison;repetition stability;position consistency;preference fairness
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5
4;4;2;4
3;2;2;3
2;1;2;2
3;2;3;3
4
3.5
2.5
1.75
2.75
-0.57735
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Regarding measuring preference fairness, why are normalized primacy/recency-count-numbers overly sensitive to dataset size? Can you provide an intuitive example of this? Do the normalized metrics for repetition stability and position consistency have similar issues? How are $S_{min}^{-}$ and $S_{max}^{+}$ computed?\n- Are these three metrics (repetition stability, position consistency, preference fairness) comparable across datasets? If so, is there a significant difference in the frequency with which LLM-based judges exhibit position bias between MTBench and DevBench? If so, can you draw any conclusions as to which factors contribute to the difference?\n- How does the computation of the answer quality gap change in a two-mode setting (e.g., no ties like on DevBench)? Is $C_t$ just 0? Are there other important considerations for a two-mode vs three-mode setting, e.g., presumably it’s easier to have higher repetition stability in the two-mode setting vs. the three-mode setting since there are less options?\n- Can you provide insight into why the Gemini-1.5-flash exhibits a near 1 error rate on DevBench, but an error rate of 0 on MTBench?\n\n- Other comments:\nSec. 3.2 line 353: “arean -> arena”.\nSec. 4 line 374: “Table 4 -> Table 2”.\nSec. 4 fig. 3b is seemingly missing.\nSec. 4 line 473: “Fig. 1(b)” -> “Fig. 4(b)”" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The evaluation framework, particularly decoupling the repetition stability and preference fairness metrics from position consistency, provides a robust and consistent framework for future research on the reliability of LLM-based judges.\n- The empirical results are comprehensive, covering 12 judge models across more than 100,000 instances.\n- Based on the experimental results, the authors draw reasonable and intuitive insights (e.g., LLM-based judges are more likely to exhibit position bias when the candidate answers are of similar quality)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a systematic framework for examining the position bias of LLM-based judges. Using this framework, the authors analyze the position bias across various state-of-the-art LLMs on two benchmarks: MTBench and DevBench. In doing so, the authors find that position bias varies by judge/task, and that it is strongly correlated with the gap in quality between answers and weakly correlated with average task output length." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- It’s quite strange to consider the position bias of LLM-based judges in isolation of their accuracy (e.g., the rate at which the judge selects the objectively better answer, or at least the consensus answer based on crowdsourcing). In doing so, the authors tend to over-generalize claims based on the characteristics of the LLM-based judges with respect to position bias (e.g., that GPT-3.5-Turbo “may be employed as a cost-effective alternative to coding evaluations” since it achieves a high preference fairness score and a comparable position consistency metric to GPT-4 and GPT-4o) that are likely unfounded given a more holistic view of LLM-based judges.\n\n- Seemingly the key insight drawn is that LLM-based judges tend to exhibit position bias when the answer quality gap is small, which is interesting but not surprising, nor is very actionable. It would have been nice had the authors studied some more actionable factors related to the design of the judge such as prompt template, option modes, etc." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "NA" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The paper addresses an important problem of biases in LLM-as-a-judge style evaluators. While this work focuses on pairwise evaluations, given the broad applicability of LLMs in this manner, this is an area of significance. Some ideas on preference fairness appear to be new (although see the Weakness section)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the problem biases of LLM-as-a-judge in making pairwise evaluations. Specifically, they focus on position bias, where the change in order of options in the prompt, changes the judgement and \"preference fairness\" where change in option label can yield different judgements. The claimed contributions are in formulating these concepts, identifying factors that impact position bias, and insights for future benchmark design.\n\nIn terms of factors, the main presents a taxonomy of factors and how the relate to various levels. In terms of insights for benchmark design, the authors suggest \"reducing hard to-evaluate, highly subjective instances\" and \"minimizing trivial cases\". The paper does an empirical analysis of several popular models to support the main findings." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Key terms used in the presentation should be defined. What do you mean by \"preference fairness\" or \"primacy preferred\"? Why does this preference have \"direction\"? 
These concepts should ideally be formalized, and the exposition around them improved, so that the definitions are clear. Section 2.2 on factors impacting position bias should be improved for clarity. All figures have very small text that is unreadable.\n \nOverall, the contributions are rather limited.\n\nOn the contribution related to the identification of biases: The topics described by this paper have been covered by others. Positional bias has been reported in several works. Previous work at ICLR by Zheng et al. on \"Large Language Models Are Not Robust Multiple Choice Selectors\" studies similar themes and reports on the aforementioned biases and ways to mitigate them. Therefore evaluation alone does not merit significance. The two other contributions are somewhat speculative.\n\nThe error rates reported for LLama are surprisingly poor and contradict some of the findings in the literature. A 100% error rate for DevBench suggests something is amiss. Please see papers that use LLama as a baseline, e.g. \"Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators\" and \"Fairer Preferences Elicit Improved Human-Aligned LLM Judgments\", which report on other datasets. I would encourage a recheck of the results here." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "As well as addressing the weaknesses mentioned above, the authors could consider the following: \n\n- Appendices are better referred to as \"Appendix D\" rather than \"Appendix. Sec. D.\"\n\n- On Line 307 a short introduction to DevBench would really help the reader. \n\n- Text in Fig 1 is very small and hard to read. \n\n- Plots in Figure 3 are too small to read, making them uninformative. Figure 3 also needs better labelling and explanation. E.g. what does the radar plot actually show?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The main strengths of the paper are:\n\n- The paper addresses an interesting niche problem that has not been extensively studied. \n- A comprehensive set of evaluations is presented in the appendices included in the paper.\n- Some interesting recommendations for other evaluations are presented." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes an evaluation framework for measuring the degree of position bias exhibited in an LLM-as-a-judge scenario for pairwise comparison problems. This framework is used to examine the position bias displayed by a set of LLM-as-a-judge models for two benchmark problems. The results show that for most models position bias is not a significant issue, but that there are variations - some models exhibit more than others. Detailed analysis is presented in a set of appendices and some recommendations for evaluation design are presented."
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The main weaknesses of the paper are:\n\n- The statement of the main finding of the paper \"Position Bias of Capable Judges are not Mere Random Variations \" uses a slightly unusual phrasing. Would it not be better to simply state something like \"Capable Judges are not Found to exhibit Strong Position Bias\"? The main message does not come across very clearly. \n\n- The paper would benefit from careful review and revision as at times it is difficult to follow. \n\n- The findings form the paper are interesting, but the amount of actionable advice that emerges from it (Lines 526 - 529) is somewhat limited. Readers may struggle to significantly redesign evaluations based on this. The authors could provide more detailed guidelines or examples of how their findings could be applied to redesign evaluations - perhaps some of the materiel form Appendix A?\n\n- A lot of the content is included in appendices which limits the value of the main paper without these appendices. the authors could better balance the material - for example some of the material from Appendix A could be moved to the conclusions section of the paper and some of the results from Appendix G could be integrated into the main paper." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Perhaps it would be useful if some actionable insights were paired with this study. Is there a way to phrase the prompt so that is does not show as much position bias?\n\n2. Do variations in the system prompt affect the result? Does prompting the LLM to make position bias aware decisions in the system prompt help?\n\n3. Shouldn't a fairness metric like Preference Fairness have equal value for all completely Position Incosistent situations?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The work addresses an important issue - effective use of LLMs as judges. \n2. The illustrations used by the authors to explain concepts are nice.\n3. The experiments seem sane and the claims made in the initial sections of the paper seem to have been verified." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The work introduces a framework to study bias encountered when using LLMs as effective judges in relation-based pairwise assessments. Given two solutions, LLM judges are prompted to choose the better candidate to the task in question. The authors focus on three aspects of the position bias : Repetition Stability (RS), Position Consistency (PC) and Preference Fairness (PF). Of this, Repetition Stability (RS) evaluates the reliability of the LLM judges when presented with the same queries multiple times. 
Position Consistency (PC) computes how frequently a LLM judge prioritizes the same candidate when the order of candidates is swapped. Preference Fairness (PF) measures the extent to which judge models favor certain solution positions. They also provide a list of Judge-level, Candidate-level and Task-level factors that affect position bias namely : Familial Property, Answer-quality gap, Task Input/Output Length and Prompt length.\n\nNote : There is a typo in the first word of the title in the paper." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. This work is a behavioral exploration of how LLMs perform in judging pairwise relational assessments. In my humble opinion, this type of work does not have much novel ML contribution to be included in conferences like ICLR. It also does not significantly improve our understanding of why LLMs exhibit primacy/recency bias. Perhaps it is better suited as an advanced technical blog for end users or in an NLP conference?\n\n2. The kind of analysis done is quite similar to works that have studied position bias, repetition etc in single and multi-document summarization. The authors could perhaps cite or build upon works like [1,2,3], if they wish. \n\n3. Parts of the paper can be written to make them more intuitive for readers. For example the section explaining Preference Fairness.\n\n[1] Dey, Alvin, et al. \"Corpora evaluation and system bias detection in multi-document summarization.\" arXiv preprint arXiv:2010.01786 (2020).\n\n[2] Kryściński, Wojciech, et al. \"Neural text summarization: A critical evaluation.\" arXiv preprint arXiv:1908.08960 (2019).\n\n[3] Jung, Taehee, et al. \"Earlier Isn't Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization.\" arXiv preprint arXiv:1908.11723 (2019)." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "A systematic investigation of position bias in pairwise comparative LLM-as-a-Judge in terms of repetition stability, position consistency, and preference fairness" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024judging,\ntitle={Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by {LLM}s},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y3jJmrKWQ4},\nnote={under review}\n}" }, "abstract": { "value": "LLM-as-a-Judge presents a promising alternative to human evaluators across various tasks, but inherent biases, especially position bias — a tendency to favor solutions based on their position in the prompt — have compromised its effectiveness. Our study introduces a systematic framework to examine position bias in pairwise comparisons, focusing on repetition stability, position consistency, and preference fairness. This research significantly contributes to the field by introducing new concepts for understanding position bias and providing a multi-dimensional framework for evaluations. We conducted experiments with 12 LLM judges across MTBench and DevBench, covering 22 tasks and approximately 40 solution-generating models — candidates, resulting in over 100,000 evaluation instances. Our findings confirm that position bias in capable LLM judges is not due to random chances, along with notable variations observed across judges and tasks. 
Moreover, position bias is weakly influenced by the length of prompt components but significantly impacted by the quality gap between solutions. These insights can help optimize judge model selections, improve benchmark design, and inform future research on debiasing strategies, ultimately enhancing the reliability of LLM judges." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "LLM-as-a-Judge", "LLM evaluators", "position bias", "length bias", "verbosity bias", "pairwise comparison", "repetition stability", "position consistency", "preference fairness" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/65c296100723f63767c9a5d076a4084a67c1d5a9.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Judging the Judges: A Systematic Investigation of Position Bias in Pairwise Comparative Assessments by LLMs" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y3zswp3gek
HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models
main
Active
knowledge distillation;safety guard
foundation or frontier models, including LLMs
5;6;6;8
2;4;4;4
3;4;4;4
2;3;3;3
4;4;3;4
6.25
3.5
3.75
2.75
3.75
0.662266
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Referring to lines 221 - 223, could you explain how the second LLM is finetuned using few-shot adversarial examples in order to generate harmful responses. This is not clear. Also, there are not many details available on the `boyiwei/pure_bad_100-7b-full` LLM. \n\n1. Lines 265 - 267: is the knowledge-distillation based training sensitive to this configuration of optimization hyper-parameters? \n\n1. Is knowledge distillation used for the baselines EDA and GFN?\n\n1. In Eqn (5), it should be $y^{(j)} \\sim p_{target}(y | x^{(i)})$.\n\n1. In Eqn (4), it is not so clear what the trajectory balancing objective is? Is it to be minimized?\n\n1. The prompt format in page 4 specifies that the instruction should be a single sentence. However, there are examples in Table 11 in Appendix D that have multiple sentence instructions. Any idea why this happens with the LLM generation?\n\n1. Referring to the qualitative results in section 4.1, are you embedding only the instruction or the combination of instruction + response for the clustering?\n\n1. Referring to the experimental setup paragraph on line 408: how big is the CipherChat dataset used for fine-tuning? Are the metrics reported in Figure 5 on a test split of CipherChat?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. The paper is well written and the method and results are presented clearly.\n\n1. Tackles an important problem since the detection of malicious queries and jailbreaks is important for the safe deployment of LLMs. Moreover, making the safety guard models smaller (sub-billion parameters) allows for their efficient deployment in low-resource environments (reduced latency, memory, and cost), and allows for faster red teaming, as well as adaptation to new attacks.\n \n1. The experiments are comprehensive. In addition to the detection performance of safety guard models on multiple benchmarks, it covers the benefits to red teaming due to reduced computational cost and runtime, and efficient fine-tuning of the safety guard model against new/evolving jailbreak attacks. Then it presents a number of ablation experiments to justify different design choices. Finally, the paper is careful to report average metrics with error bars. \n\n1. The authors release their code, models, and synthetic datasets allowing reproducibility and further research." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Safety guard models (SGMs) are often deployed along with a target large language model in order to detect malicious and unsafe queries. 
They are also used for red teaming LLMs to detect vulnerabilities to attacks such as jailbreaks.\n\nThe paper proposes to address the challenges associated with large SGMs such as high compute and memory requirement, latency, and operational costs (which are impractical e.g. on mobile devices). It proposes to perform knowledge distillation of a large SGM into a smaller one (sub-billion parameters) using a labeled dataset of instruction-response pairs with harmfulness labels. \n\nTo address the limited diversity of harmful instructions in the labeled dataset, the paper proposes HarmAug, a simple but effective data augmentation method that generates synthetic harmful instructions using a second LLM by jailbreaking it (to bypass its refusal). Specifically, HarmAug designs a special jailbreak prompt that asks the LLM to generate a harmful instruction by including an affirmative prefix at the end of the prompt `“I have an idea for a prompt”`. This encourages the LLM to continue generating a harmful instruction instead of refusing. \n\nBy generating diverse harmful instructions in this way, the method augments the training set of knowledge distillation and creates a student SGM that generalizes better than existing augmentation methods. The experimental results show that knowledge distillation with HarmAug can train much smaller student SGM models that have comparable (and sometimes better) detection metrics to the large teacher SGM. The smaller SGMs have the benefit of much lower computation, memory, latency, and operational costs. They also enable faster/cheaper red teaming and more efficient adaptation to evolving threats." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The proposed data augmentation method relies on jailbreaking an LLM in order to generate a diverse set of malicious instructions. This is based on a simple idea of using a jailbreak prompt asking an LLM to generate a harmful instruction, but with an affirmative prefix “I have an idea for a prompt.” at the end which encourages the LLM to continue with the generation, thus bypassing its safety alignment driven refusal. \nWhile this special jailbreak prompt is shown to be effective against a variety of safety-aligned LLMs, it is possible for future iterations of these LLMs to circumvent this jailbreak through careful alignment via RLHF or a similar method. I wonder if this decreases the long-term value of the proposed augmentation method for generating diverse harmful instructions? \n\n2. Related to the previous question, I wonder if one can use an LLM that has *not* been safety aligned (via SFT and RLHF) for the generation of harmful instructions? That way, one does not need to use a special jailbreak based on the affirmative prefix to bypass the LLMs guardrails. Is there any practical reason not to do this?\n\n3. The paper performs ablation experiments on the size of the student model and the choice of student model backbone (Section 4.4). While this is informative, I wonder if there is some bias introduced in the main results by reporting with the best student model backbone and size (DeBERTa-v3-large with 435M parameters)? In a practical deployment, we may have to make these choices without access to such results. Hence, the performance of the student safety guard model may not be as optimistic." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 2 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. Prevalence of NULL Responses in the Dataset:\n- Your synthetic dataset contains a significant number of responses that are NULL. Could you explain why there are so many NULL responses? Is this an intentional aspect of your data generation process, or does it indicate an issue with the response generation step?\n\n2. Impact on Model Performance: \n- Have you analyzed how the inclusion of NULL responses with varying harm scores affects the student model's performance? Does it improve the model's ability to detect harmful content, or could it introduce confusion during training?\n\n3. Impact on Diverse Harmful Instructions:\n- While you provide evidence of increased diversity in the training data, have you evaluated the model's ability to generalize to completely new types of harmful content not represented in your synthetic dataset? How does the model perform on real-world examples or emerging threats?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper tackles an important and practical problem—how to deploy effective safety guard models on devices with limited computational resources. It is a good motivation and this is increasingly relevant as AI applications become more prevalent on mobile platforms.\n2. The authors present experimental results showing that a 435-million-parameter model trained with HarmAug achieves performance comparable to larger models (over 7 billion parameters) on several benchmark datasets.\n3. The paper is well-written and organized. The methodology is explained in detail, making it easy to follow. Figures and tables are effectively used to illustrate key points and results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces *HarmAug*, a data augmentation technique designed to improve the performance of small safety guard models through knowledge distillation from larger models. The authors aim to address the challenge of deploying large safety guard models (with billions of parameters) on resource-constrained devices like mobile phones. To achieve this, they propose generating synthetic harmful instructions by \"jailbreaking\" a safety-aligned LLM. This is done by adding an affirmative prefix (e.g., \"I have an idea for a prompt:\") to the prompts, encouraging the LLM to produce harmful instructions it would normally refuse to generate due to its safety constraints. These synthetic instructions, along with their corresponding responses labeled by the large teacher model, are used to augment the training data for a smaller student model. 
The authors claim that this method enhances the diversity of harmful instructions, allowing the smaller model to perform comparably to larger models while significantly reducing computational costs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Dataset Quality and Inclusion of NULL Responses**: Upon inspecting the provided dataset (https://huggingface.co/datasets/AnonHB/HarmAug_generated_dataset), I noticed that a significant number of responses are NULL (empty). While it's acceptable for some responses to be NULL—representing appropriate refusals or lack of response—the dataset contains a large proportion of such responses without explicit justification in the paper. Moreover, some NULL responses receive high harm scores, while others receive low scores. Since the harm score is based on both the prompt and the response, it's possible for a harmful prompt with a NULL response to still receive a high harm score. However, the paper does not explicitly explain this aspect, leaving the rationale unclear.\n2. **Potential Impact on Model Training**: The inclusion of numerous NULL responses with varying harm scores could affect the learning process of the student model. Without a clear explanation, it's difficult to assess whether these data points contribute positively or negatively to the model's ability to detect harmful content.\n3. **Limited Methodological Novelty**: While the practical application is important, the methodological contribution is relatively incremental. The use of LLMs for data augmentation is a common practice, and adding an affirmative prefix is a modest modification. The paper does not introduce significant new insights beyond this." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "See weaknesses" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. Overall, this is a well-written paper. The motivation and setting are clearly stated.\n1. The problem of distilling large safeguard models into small ones is novel and has good applications.\n1. The proposed HarmAug method of using ICL examples is simple and easy to understand.\n1. The experiment is very comprehensive.\n1. Code and other resources are provided." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper focuses on distilling the safeguarding ability of LLMs into a smaller one, particularly through the data augmentation perspective." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. While not explicitly instructed in the Call for papers, the Reproducibility statement and Ethics statement should be in \\section*{} format (following previous years).\n1. 
The overall prompt design shares similar notions with two existing works, which should perhaps be cited in Section 3.2: \n - using prefixes to bypass the safety guardrails of the LLMs is similar to Prefix Injection [1]\n - using ICL examples to bypass the safety guardrails of the LLMs is similar to in-context attack [2]\n1. Are there other applications of HarmAug beyond model distillation? This point was not well-discussed in the paper.\n\n[1] Jailbroken: How Does LLM Safety Training Fail? https://arxiv.org/pdf/2307.02483\n\n[2] Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations https://arxiv.org/pdf/2310.06387" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "* For the baseline LLMs (LlamaGuards and Aegis adapter-finetuned LlamaGuard), did you use their default safety policy or did you replace it with the test-set-specific information? Because a lot of these datasets (e.g., in Table 1) already have categories of safety, toxicity or content moderation that can replace the \"Should not\" and \"Can\" behaviors of LlamaGuard, and I'd imagine this would make the baselines perform better compared to their default template. \n\n* Why not consider 3rd-party APIs? It would be interesting to see other providers such as Azure Content Safety, OpenAI Moderation, gpt 3.5-turbo/mini/4 or 4o, for example. I understand some of these require payment, but it is worth considering if feasible.\n\n* What's the tradeoff in terms of performance (e.g., F1) when guardrailing prompts compared to LLM responses with HarmAug?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "Originality:\n1. The main novelty is HarmAug, the data augmentation method for generating harmful instructions to train safety guard models.\n2. They also propose an effective \"prefix attack\" to bypass safety guardrails of language models when generating harmful content for training data.\n3. Results show the feasibility of distilling large safety guard models into much smaller, more efficient models without significant performance loss.\n\nQuality:\n1. The results section is mostly comprehensive, with experiments across multiple benchmark datasets.\n2. Provides detailed ablation studies to analyze the impact of different components and design choices.\n3. Compares the proposed method against several relevant baselines and existing safety guard models.\n4. Evaluates both performance (F1 score, AUPRC) and efficiency metrics (FLOPs, latency, memory usage).\n\nClarity:\n1. Clearly explains the motivation, methodology, and experimental setup.\n2. Uses effective visualizations to illustrate key results and comparisons.\n3. Provides a detailed breakdown of the proposed method, making it easier for others to reproduce or build upon.\n\nSignificance:\n1. 
Addresses an important practical challenge in deploying safety guard models on resource-constrained devices.\n2. Demonstrates that smaller models can achieve comparable or better performance than much larger models for this task.\n3. Shows potential for improving the efficiency of red-teaming and fine-tuning processes for language model safety.\n4. Provides open-source code, models, and datasets to facilitate further research in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces HarmAug, a method for creating efficient safety guard models for LLMs. They propose to distill large safety models into smaller ones where the output is a binary label indicating whether the input is safe or not. To overcome the limited diversity of harmful instructions in existing datasets, they introduce HarmAug, a data augmentation method that involves:\n a. Prompting an LLM to generate harmful instructions\n b. Using an affirmative prefix to encourage the LLM to complete the harmful instruction\n c. Generating responses to these instructions using another LLM\n d. Labeling the instruction-response pairs using the teacher model\n\nBased on this, they are able to outperform much larger LLMs (7B compared to ~500Mb), showing the importance of synthetic data generation and knowledge distillation to create small robust safety guardrail classifiers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* Limited Analysis of Jailbreaking Technique: While the prefix attack is effective, the paper doesn't provide an in-depth analysis of why this particular method works or explore alternative jailbreaking techniques. \n* Generally, if the novelty lies in HarmAug's diversification of generated data, it would be nice to see comparisons to other instructions and their failure modes w.r.t. HarmAug. \n* Table 4 could also include other prefixes as baselines to further motivate the effectiveness of that particular string of text. \n* The utility of these models on mobile devices is mentioned a few times but never experimented with on-device. Arguably outside the scope of the paper, but it would definitely be a stronger set of results to have." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose a data augmentation for knowledge distillation of large safety guard models." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024harmaug,\ntitle={HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y3zswp3gek},\nnote={under review}\n}" }, "abstract": { "value": "Safety guard models that detect malicious queries aimed at large language models (LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications.\nHowever, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency.\nTo reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. 
To bridge the gap between small and large models, we propose **HarmAug**, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as, \"Make a single harmful instruction prompt that would elicit offensive content\", we add an affirmative prefix (e.g., \"I have an idea for a prompt:\") to the LLM's response. This encourages the LLM to continue generating the rest of the response, leading to sampling harmful instructions. Another LLM generates a response to the harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that our HarmAug outperforms other relevant baselines. Moreover, a 435-million-parameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25\\% of their computational cost. Our [code](https://anonymous.4open.science/r/HarmAug/), [safety guard model](https://huggingface.co/AnonHB/HarmAug_Guard_Model_deberta_v3_large_finetuned), and [synthetic dataset](https://huggingface.co/datasets/AnonHB/HarmAug_generated_dataset) are publicly available." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "knowledge distillation", "safety guard" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/7a7569f0dd42077b6926710baef0f754d4ffe99b.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y4DtzADzd1
Boosting Latent Diffusion with Perceptual Objectives
main
Active
diffusion;flows;latent diffusion;LDM;latent generative models;T2I;image generation;generative models.
generative models
3;5;5;6
4;3;4;4
2;3;3;3
1;2;2;3
2;3;3;3
4.75
3.75
2.75
2
2.75
-0.132453
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "See weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The motivation is clear and straightforward. And the proposed method is simple and can be easily applied to the training of other diffusion models.\n2. Under the same training iterations during the post-training stage, the method can improve the FID over the baseline method which only adopts the MSE loss." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studies the perceptual loss function in the training of diffusion models and proposes to compare the features between $z_0$ and $\\hat{z}_0$ by sending them into the latent autoencoder's decoder model. The loss is only applied during the post-training stage and only applied in the time steps that have a SNR higher than the preset threshold. The results on CC3M, ImageNet-1k and S320M show that the introduction of this training strategy could help improve the generation quality of the model." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper only shows the performance increase over the baseline model. I feel like it's better to clearly demonstrate the effectiveness and performance gain over the previous state-of-the-art methods, to show that the perceptual loss can achieve what the widely used MSE loss cannot achieve.\n2. The introduction of the perceptual loss would increase the computation cost during the training stage. Could the authors provide a clear comparison on this? \n3. Following the last one, what would be the performance comparison for the same training time instead of the same training iterations?\n4. The authors mention the outliers in the features of the autoencoder's decoder which is not ideal for the computation of the perceptual loss. I'm wondering if the authors have tried other ways instead of simply masking those features out as this might cause information lost. Or has the authors tried using some other models to compute the perceptual loss to avoid those outliers?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In addition to the addressing the weaknesses, I'd appreciate the authors answering the following questions:\n\nWhat is the difference in training FLOPS per step of using LPL vs. just the normal objective? I assume it is a minor difference, but if it is not, just comparing to normal objectives at the same step count would not be fair. Similarly, the effect of the substantial increase in VRAM usage on the training speed (via the maximum achievable local batch size) should be quantified properly.\n\nWhy do you use a CFG scale of 2? Standard choices for latent diffusion are typically much lower for ImageNet class-conditional generation and much higher for text-to-image.\n\nWhy is the frequency cut off at 360 in Fig. 6?\n\nSuggestion: while not technically required according to ICLR's rules, as it hasn't been published yet (and therefore also not taking an influence on my rating of this paper), I think a discussion of the almost one year old seminal work [https://arxiv.org/abs/2401.00110] in the area of applying perceptual losses to diffusion models would help improve the paper, especially if this work had any influence on the submitted paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The comparisons with baselines are extensive and clearly show that the proposed latent perceptual loss objective improves metrics across different diffusion formulations and datasets. I really appreciate the authors demonstrating the improvements for both eps-pred diffusion and flow matching, demonstrating that the proposed loss potentially has a general significance.\n\nThe paper ablates over/explores a large number of parameters and their influence.\n\nThe paper is generally reasonably well-written and accessible." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes applying perceptual losses to latent diffusion models during training to improve the quality of generated images. Their proposed perceptual loss uses features from the VAE decoder, with additional contributions such as outlier filtering being proposed to enable this setup to work. The authors extensively validate the benefits of their approach over standard training of MMDiTs in different diffusion formulations and datasets." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I think framing the proposed loss as a perceptual loss is likely incorrect. Perceptual losses typically try to incorporate human perception-based invariances into the loss, such as weighting the presence of the correct texture (e.g., grass) as more important than getting every detail of the instance of the texture (e.g., the exact positions of individual blades of grass) right. This is directly opposed to losses such as the MSE in pixel space. This is typically accomplished by taking features of a deep pre-trained discriminative model. In this case, you use features of the VAE latent decoder, which, potentially even puts the features at a lower abstraction level than the original latents. 
As far as I could see, there was no investigation of whether these features have the qualities of a perceptual loss. In the end, optimizing this loss seems to result in improved FIDs, which makes it a valuable contribution. Still, I think calling it a perceptual loss without further investigation goes against what people expect \"perceptual losses\" to refer to, which could lead to confusion. Could it be that the improvement actually doesn't come from perceptual qualities of the loss but rather from other qualities, such as a different implicit weighting of timesteps, which has previously been shown to substantially improve FIDs as well [https://openaccess.thecvf.com/content/ICCV2023/papers/Hang_Efficient_Diffusion_Training_via_Min-SNR_Weighting_Strategy_ICCV_2023_paper.pdf]?\n\nYou also claim that \"the autoencoder’s latent space has a highly irregular structure and is not equally influenced by the different pixels in the latent code\" as an important part of your motivation. However, it has been shown [e.g., in https://discuss.huggingface.co/t/decoding-latents-to-rgb-without-upscaling/23204] that, at least for the very commonly used SD VAEs, the autoencoder latents effectively correspond to downsampled colors, differing fundamentally from that statement. Similarly, noisy latents from intermediate steps of the sampling process often already show a good approximation of the image being generated. A.2 shows an investigation of the effect of spatial interpolation for some (unspecified) AE, but is inherently very limited in its scope. It would be nice to see better backing up of this central claim that goes against common assumptions using standard methods, such as introducing small (random) deviations and showing how their effect on the decoded image is disproportionately large compared to larger ones. General claims should also be verified with standard VAEs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 1 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Line 179: Many SOTA T2I diffusion models use a basic L2 training objective. Since only a VAE citation is given to explain blurriness, could you add citations or evidence that diffusion training leads to blurriness? Typically, adversarial losses are used in diffusion distillation, not in standard diffusion training.\n- Line 185: regarding \"a perceptual loss cannot be used directly on the predicted latents\", there is already accepted literature [2] that trains another model which efficiently computes LPIPS losses in the latent space. It would be interesting to compare and contrast with the LatentLPIPS method.\n- Line 193: it would be better if the authors could stick to the canonical DDPM/EDM [3] notations to avoid confusion.\n- Line 289: do you enforce zero-terminal SNR here? It is known that the SD diffusion schedules are flawed [4].\n---\n[2] Kang, M., Zhang, R., Barnes, C., Paris, S., Kwak, S., Park, J., ... & Park, T. (2024). Distilling Diffusion Models into Conditional GANs. 
In ECCV 2024\n\n[3] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577.\n\n[4] Lin, S., Liu, B., Li, J., & Yang, X. (2024). Common diffusion noise schedules and sample steps are flawed. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 5404-5411)." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The author proposed latent perceptual loss (LPL), which shows its efficacy across various tasks and datasets.\n- The qualitative results and the quantitative metrics seem promising\n- The frequency analysis showcases that the method works\n- The ablations are carried out in a systematic way" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The author introduces latent perceptual loss, which acts on the decoder's intermediate features to enrich the training signal of the LDM. The experiments showcased the effect of the added perceptual loss." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper seems unpolished and rushed towards the deadline\n- Sometimes the notation is a bit confusing, see the third point in the questions section\n- Applying perceptual loss in T2I generation isn’t novel; Lin and Yang [1] calculate the perceptual loss in middle blocks to reduce computation, which bypasses the computational constraint, whereas this paper requires passing results to the decoder for intermediate features. If the authors can provide evidence that the proposed method is better and distance themselves from the literature, then I would consider raising my score.\n\n---\n[1] Lin, S., & Yang, X. (2023). Diffusion Model with Perceptual Loss. arXiv preprint arXiv:2401.00110." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "**1. Efficiency**\n\n- How does the result compare when using FLOPS as the x-axis? Comparing by iteration seems somewhat unfair. If the performance improvement looks promising when compared in terms of FLOPS, I'm considering raising the score.\n\n**2. Novelty**\n\n- Is there a specific reason why LPL was only used in post-training? I feel that LPL might provide a good gradient signal early in training, but it seems that experiment wasn't conducted, which is unfortunate. I'm curious about the performance changes if LPL is applied from the start of the pretraining." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "**1. 
Extensive analysis**\n\nThis work provides extensive analysis to demonstrate the validity of LPL. For instance, the major performance gain from LPL arises from generating more precise high-frequency components, albeit at the expense of some low-frequency details.\n\n**2. Ablation study**\n\nIt provides various ablation studies to explain the motivation behind design choices. The ablation study covers various components, including the feature depth for LPL, the SNR threshold value, the reweighting strategy, etc. This level of extensive ablation study is rare." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This work introduces \"latent perceptual loss\" (LPL) to improve the training of diffusion models. According to the paper, LPL enhances diffusion models' performance across various datasets (Table 2) and methods (Table 3). Additionally, extensive analyses, e.g., comparing the generation quality with and without LPL loss, and ablation studies, deepen our understanding of LPL." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "**1. Efficiency**\n\nIn the paper, it is mentioned several times that utilizing the VAE decoder features to compute the LPL loss is costly. This is why recent research has explored alternative perceptual losses, such as latent LPIPS [1]. While the paper claims that LPL incurs minimal computation since it’s only used post-training, this claim is far from acceptable, as the post-training phase in this paper involves iterations amounting to as much as one-third, or at minimum one-fifth, of the pretraining iterations.\n\n[1]: Distilling Diffusion Models into Conditional GANs ([ECCV24](https://mingukkang.github.io/Diffusion2GAN/))\n\n**2. Novelty**\n\nAlthough subjective, I believe the novelty of LPL is somewhat lacking. This loss trick slightly enhances quality during training but seems more heuristic than principled. Additionally, it doesn’t contribute much new knowledge about diffusion models, which weakens the paper's novelty. Furthermore, it does not appear that any non-trivial trick was devised in the process of introducing the perceptual loss to the diffusion models." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "Latent diffusion models do not take into account the structure of the latent space in which they operate; doing so boosts sample quality." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024boosting,\ntitle={Boosting Latent Diffusion with Perceptual Objectives},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y4DtzADzd1},\nnote={under review}\n}" }, "abstract": { "value": "Latent diffusion models (LDMs) power state-of-the-art high-resolution generative image models. LDMs learn the data distribution in the latent space of an autoencoder (AE) and produce images by mapping the generated latents into RGB image space using the AE decoder. While this approach allows for efficient model training and sampling, it induces a disconnect between the training of the diffusion model and the decoder, resulting in a loss of detail in the generated images. To remediate this disconnect, we propose to leverage the internal features of the decoder to define a latent perceptual loss (LPL). This loss encourages the models to create sharper and more realistic images. 
Our loss can be seamlessly integrated with common autoencoders used in latent diffusion models, and can be applied to different generative modeling paradigms such as DDPM with epsilon and velocity prediction, as well as flow matching. Extensive experiments with models trained on three datasets at 256 and 512 resolution show improved quantitative -- with boosts between 6% and 20% in FID -- and qualitative results when using our perceptual loss." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "diffusion", "flows", "latent diffusion", "LDM", "latent generative models", "T2I", "image generation", "generative models." ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/05cad0ff99993da56447c6ac8b71aeef5b9613da.pdf" }, "presentation": null, "primary_area": { "value": "generative models" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/52f80d202c86c3e2ed0b15528fcf76eda9e2f916.zip" }, "title": { "value": "Boosting Latent Diffusion with Perceptual Objectives" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
y4F2YZxN9T
Temporal Test-Time Adaptation with State-Space Models
main
Active
test-time adaptation;state-space models;probabilistic modelling;dynamical systems
probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
3;3;5;6
4;4;4;3
3;1;3;3
2;2;2;3
3;2;3;3
4.25
3.75
2.5
2.25
2.75
-0.777778
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "LIne 37 - why can't test-time training operate in this paradigm? what is the precise difference between TTA and TTT?\n46 - Doesn't FMoW have labels? Why is this dataset suitable for the proposed setting, which claims that one of its benefits is not needing labels?\n192 - missing a comma inside the parentheses?\n\nHow important is the linear Gaussian transition model assumption? Could your method make use of labels if they were made available?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The paper is well-written and gives a principled justification for the proposed method. It considers real-world shifts and the proposed method seems to outperform the baselines generally." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper studied test-time adaptation, a setting where model parameters are updated based on incoming test features. It proposes STAD, a method to track gradual distribution shifts which only updates the last layer in a given network, based on the EM algorithm. The authors report results on several tasks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper does not appear to have any major weaknesses, but I am curious about the importance of the subfield (temporal test-time adaptation). Why is this the right setting to study (as opposed to others for distribution shift) and how does it compare to methods involving periodic retraining and unsupervised domain adaptation?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 1 }, "strengths": { "value": "1. The authors propose a realistic TTA setting and conduct extensive experiments across multiple datasets to validate the method's effectiveness.\n2. A new TTA method, STAD, is introduced, combining state-space models with a solid theoretical foundation." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a novel Test-Time Adaptation (TTA) method, STAD, which leverages a state-space model to learn time-evolving feature distributions. It also proposes a new TTA setting that is more reflective of real-world scenarios." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The method modifies and updates only the linear classifier, raising concerns about effectively handling covariate shifts. As shown in Table 4, adapting only the classifier on CIFAR-10C performs significantly worse than adapting the feature extractor. \n\n2. Experimentally, while a realistic TTA setting is proposed, comparisons with other similar settings and related methods are lacking. For instance, the following related works are not adequately compared:\n\n 1. UniTTA: Unified Benchmark and Versatile Framework Towards Realistic Test-Time Adaptation. arXiv 2024.\n 2. Towards real-world test-time adaptation: Tri-net self-training with balanced normalization. AAAI 2024.\n 3. Robust test-time adaptation in dynamic scenarios. CVPR 2023.\n 4. Universal Test-Time Adaptation Through Weight Ensembling, Diversity Weighting, and Prior Correction. WACV 2024.\n\n Additionally, only a small portion of common TTA datasets (CIFAR-C, ImageNet-C) are addressed, with limited focus on CIFAR-10C." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How does the proposed method compare to self-training algorithms, e.g., Ref [1], on real-world tasks?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "This work conducts experiments on several real-world datasets, enhancing the practical applicability of TTA algorithms in real-world settings.\n\nBy modeling gradual distribution shifts using linear state-space models, this work provides fresh insights in the field of TTA." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper addresses the test-time adaptation problem under gradual distribution shifts. The proposed method models the gradual distribution shift in the representation space using linear state-space models, implemented through the von Mises-Fisher distribution. Experimental results on multiple real-world datasets demonstrate the effectiveness of the proposed algorithm." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper lacks discussion and empirical comparisons with related work, particularly in the field of gradual unsupervised domain adaptation, which addresses very similar problems (e.g., [1, 2]). 
As a result, the contribution of this paper may be overstated.\n\nThe gradual shift assumption represents a relatively simple form of distribution shift in dynamic environments, as its total variation is small. This simplicity limits the contribution of the work.\n\nThe proposed method appears to combine existing approaches, which limits its technical novelty.\n\nReferences:\n\n[1] Kumar, Ananya, Tengyu Ma, and Percy Liang. \"Understanding self-training for gradual domain adaptation.\" International Conference on Machine Learning. PMLR, 2020.\n[2] Wang, Haoxiang, Bo Li, and Han Zhao. \"Understanding gradual domain adaptation: Improved analysis, optimal path, and beyond.\" International Conference on Machine Learning. PMLR, 2022." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "How can we quantify the gradual distribution shifts?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is generally well-written with clear explanations of both the problem and the proposed solution.\n2. The authors provide a strong theoretical foundation for the use of probabilistic SSMs to address temporal test-time adaptation, with detailed mathematical formulations of the dynamics model.\n3. The empirical analysis is comprehensive, with evaluations on multiple datasets, including both real-world temporal shifts and synthetic corruption shifts." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes a method STAD designed to adapt a model’s classification head during inference to handle temporal distribution shifts, which are underexplored. The paper focuses on test-time adaptation, proposing to track the evolution of hidden features using SSMs. STAD dynamically updates class prototypes to reflect the evolving data distribution. Extensive experiments demonstrate that STAD performs well in temporal distribution shifts, label shifts, and small batch size settings. The paper evaluates its approach on several real-world datasets, showing robustness and performance gains compared to baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The paper does not clearly describe the sensitivity of STAD to hyperparameters which could impact the method’s robustness.\n2. While STAD adapts the last layer to changing distributions, the reliance on class prototypes could be problematic when distributions are overfitted to the target domains and cause catastrophic forgetting.\n3. The gradual distribution shift assumption is pretty strong especially during test time, which undermines the significance of the method. 
For example, ATTA [1] also targets distribution shifts during test time and imposes similar constraints on the distribution shift. Given these constraints, they reason that a small amount of labeled data is necessary to make the setting reasonable in real-world scenarios. In my view, the gradual distribution shift assumption is a major limitation of this work. While it may be hard to make this method work perfectly on non-trivial domain-shift datasets such as PACS and VLCS, there are two important experiments to conduct. (1) Establishing the limits of this method's capability and applicability on non-trivial domain-shift datasets. (2) Validating that labels are **not necessary** in the gradual shift setting by showing that STAD's performance is comparable to that of methods that use labels (as an oracle), such as active online learning and ATTA.\n\n[1] Gui, Shurui, Xiner Li, and Shuiwang Ji. \"Active Test-Time Adaptation: Theoretical Analyses and An Algorithm.\" In The Twelfth International Conference on Learning Representations." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We adapt to distribution shift over time by modelling its dynamics in representation space." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024temporal,\ntitle={Temporal Test-Time Adaptation with State-Space Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=y4F2YZxN9T},\nnote={under review}\n}" }, "abstract": { "value": "Distribution shifts between training and test data are inevitable over the lifecycle of a deployed model, leading to performance decay. Adapting a model on test samples can help mitigate this drop in performance. However, most test-time adaptation methods have focused on synthetic corruption shifts, leaving a variety of distribution shifts underexplored. In this paper, we focus on distribution shifts that evolve gradually over time, which are common in the wild but challenging for existing methods, as we show. To address this, we propose STAD, a probabilistic state-space model that adapts a deployed model to temporal distribution shifts by learning the time-varying dynamics in the last set of hidden features. Without requiring labels, our model infers time-evolving class prototypes that act as a dynamic classification head. Through experiments on real-world temporal distribution shifts, we show that our method excels in handling small batch sizes and label shift." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "test-time adaptation", "state-space models", "probabilistic modelling", "dynamical systems" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review."
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/37e380a7b5fe04117b7132d7bc9b15984f464c0b.pdf" }, "presentation": null, "primary_area": { "value": "probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Temporal Test-Time Adaptation with State-Space Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]